Websites hold a ton of useful information, but it is usually too laborious to visit every page on a site and copy product data, metadata, title tags, and anchor text into a spreadsheet by hand.
This is where Screaming Frog comes to the rescue: custom data extractions automate the process. Custom extraction is a form of web scraping (also called web harvesting or web data extraction) that pulls specific data from websites so you can store it locally on your computer.
For beginners, some questions you might have:
What is the Screaming Frog SEO Spider?
The Screaming Frog SEO Spider is a website crawler that helps improve on-site SEO by extracting and analyzing your website’s data through a graphical user interface (GUI).
What are custom extractions?
Custom extractions are a set of functions in the Screaming Frog SEO Spider for pulling specific information from web pages. These extractions help you optimize your site for technical SEO, gather essential data about your copy, and locate and fix errors.
How is Data Extraction done?
Data extraction involves pulling the required data from your website using the Screaming Frog spider. The information is saved in Screaming Frog’s memory, and you can export the scanned results to Excel or Google Sheets for further review.
Why is Data Extraction critical?
Data extraction allows you to harvest large amounts of data quickly and efficiently. This automation gives you an immediate view of your site’s architecture, saving you time and resources while providing the data you need to plan your search engine optimization strategy.
Screaming Frog is the go-to web scraping tool for SEOs. The options are nearly endless; below are some of the most useful custom web-scraping syntaxes.
How to use Custom Extraction settings in Screaming Frog
In Screaming Frog, go to Configuration > Custom > Extraction.
Next, click +Add and set up your extraction rules.
Add a title, choose whether you need CSSPath, XPath, or Regex, then add your search expression. If you aren’t sure which selector or expression you need, look at the examples below or use the Inspect Element function in Google Chrome DevTools. You can open DevTools by right-clicking an element in Chrome and choosing Inspect.
Here is an example of how you would scrape for a Facebook Pixel ID.
In the results, as you can see, one of my pages is missing a Facebook Pixel.
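To illustrate what a Regex extraction like this matches, here is a minimal Python sketch. The pattern is an assumption based on the standard Facebook Pixel base code, which calls fbq('init', '<ID>'); in Screaming Frog you would paste a regex with a capture group like this into a Regex extractor.

```python
import re

# Hypothetical pattern for a Facebook Pixel ID: the base code calls
# fbq('init', '<numeric ID>'), so we capture the digits between the quotes.
PIXEL_PATTERN = re.compile(r"fbq\(\s*'init'\s*,\s*'(\d+)'")

def find_pixel_id(html):
    """Return the first Facebook Pixel ID found in the HTML, or None."""
    match = PIXEL_PATTERN.search(html)
    return match.group(1) if match else None

# Example page snippet containing the standard pixel snippet.
page = "<script>fbq('init', '123456789012345'); fbq('track', 'PageView');</script>"
print(find_pixel_id(page))  # 123456789012345
```

A page where this returns nothing is exactly the case flagged above: the Pixel is missing.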
Basic XPath Syntax for Web Scraping
Search anywhere in the document: //
Search within the root: /
Select a specific attribute of an element: @
Wildcard, used to select any element: *
Find a specific element: //element (for example, //title)
Specifies the current element: .
Specifies the parent element: ..
Extract all H1 tags: //h1
Extract the first H3 tag: (//h3)[1]
Extract the second H3 tag: (//h3)[2]
Extract text from any <p> contained within a <div>: //div/p/text()
Extract any <div> with class “author”: //div[@class="author"]
Extract any <p> with class “bio”: //p[@class="bio"]
Extract any element with class “bio”: //*[@class="bio"]
Extract the last <li> in a <ul>: //ul/li[last()]
Extract the first <li> in an <ol> with class “cat”: //ol[@class="cat"]/li[1]
Count the number of H2’s (set the extraction filter to “Function Value”): count(//h2)
Extract any link with anchor text containing “click here”: //a[contains(text(),"click here")]
Extract any link with a title starting with “Written by”: //a[starts-with(@title,"Written by")]
How to Extract Common HTML Elements
Extract all links: //a/@href
Extract links that start with “mailto” (email addresses): //a[starts-with(@href,"mailto")]
Extract all image source URLs: //img/@src
Extract all image source URLs for images with a class name containing “aligncenter”: //img[contains(@class,"aligncenter")]/@src
Extract elements with the rel attribute set to “alternate”: //*[@rel="alternate"]
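As a rough equivalent of what //a/@href and //img/@src collect, here is a stdlib Python sketch using html.parser; the page snippet is invented for illustration. It also shows the “mailto” filter applied after the fact.

```python
from html.parser import HTMLParser

class LinkAndImageCollector(HTMLParser):
    """Collect every href on an <a> and every src on an <img>."""
    def __init__(self):
        super().__init__()
        self.links, self.images = [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "img" and "src" in attrs:
            self.images.append(attrs["src"])

page = (
    '<a href="https://example.com">site</a>'
    '<a href="mailto:hello@example.com">email</a>'
    '<img src="/logo.png" class="aligncenter">'
)
collector = LinkAndImageCollector()
collector.feed(page)
print(collector.links)   # ['https://example.com', 'mailto:hello@example.com']
print(collector.images)  # ['/logo.png']
# The "mailto" filter from above, applied in Python:
print([h for h in collector.links if h.startswith("mailto")])
```

Screaming Frog does all of this for you across an entire crawl; the sketch only shows what a single extraction rule gathers from one page.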
Isaac Adams-Hands is an SEO Director, Full Stack Developer, and InfoSec enthusiast. He received his Bachelor’s Degree from the University of Western Sydney before working in various marketing positions at search portals, higher-education institutions, and addiction-recovery marketing agencies.