JS for SEO


What is JavaScript SEO?

JavaScript SEO is a branch of technical SEO that aims to make JavaScript-heavy webpages easy to crawl and index, and therefore search-friendly. The goal is for these websites to be discovered by search engines and to rank higher.

JavaScript is not what many SEOs are used to, and it has a learning curve. People tend to reach for it even where a better alternative most likely exists, but sometimes you have to work with what you have. Just keep in mind that JavaScript isn’t perfect and isn’t always the right tool for the job. Unlike HTML and CSS, it cannot be parsed progressively, so it can be heavy on page load and performance. In many circumstances, performance is sacrificed for functionality.

How Google processes pages with JavaScript

In the early days of search engines, the downloaded HTML response was enough to see the content of most pages. With the rise of JavaScript, search engines now need to render many pages as a browser would in order to see the content as a user sees it.

The technology at Google that handles the rendering process is known as the Web Rendering Service (WRS). 

Let’s walk through the process, starting from a URL:

1. Crawler

The crawler sends GET requests to the server. The server responds with the headers and the contents of the file, which are then stored.
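
As a rough sketch, this is all the crawler does at this stage: one HTTP GET, with the headers and raw HTML stored for later. No JavaScript runs yet. (This assumes an environment with a global `fetch`, such as Node.js 18+; the user-agent string is abbreviated for illustration.)

```js
// A minimal sketch of the crawl step: a single GET request whose
// headers and body are stored. No JavaScript is executed here.
async function crawl(url) {
  const response = await fetch(url, {
    headers: {
      // Abbreviated smartphone Googlebot user agent, for illustration only.
      'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X) ... (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
    },
  });
  return {
    headers: Object.fromEntries(response.headers), // response headers
    body: await response.text(),                   // raw, unrendered HTML
  };
}
```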

Because Google has largely moved to mobile-first indexing, the request most likely comes from a mobile user agent. You can check how Google is crawling your site with the URL Inspection Tool in Search Console: run it for a URL and look at the Coverage information for “Crawled as”, which tells you whether you’re still on desktop indexing or on mobile-first indexing.

The requests mostly come from Mountain View, California, USA, but Google also crawls locale-adaptive pages from other countries. This matters because some websites block or handle visitors from a given country or IP address differently, which could prevent Googlebot from seeing your content.

Some websites use user-agent detection to show different content to a specific crawler. Especially on JavaScript pages, Google may see something different from what a user sees. For debugging JavaScript SEO issues, Google’s tools are essential: the URL Inspection Tool in Google Search Console, the Mobile-Friendly Test, and the Rich Results Test. They show you what Google sees and help you determine whether Google is blocked and whether it can read the page’s content. There can be major differences between the downloaded GET response, the rendered page, and even the testing tools; you’ll learn how to test this in the Renderer section.
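
To make the idea concrete, here is a minimal sketch of the user-agent detection described above, assuming an Express-style Node.js server (`renderStaticHtml` is a hypothetical pre-rendering helper, not a real library call). If the pre-rendered branch drifts out of sync with the real app, Google sees something different from your users, which is exactly what the testing tools help you catch.

```js
const express = require('express');
const app = express();

app.get('*', (req, res) => {
  const userAgent = req.get('User-Agent') || '';
  if (/Googlebot/i.test(userAgent)) {
    // Crawlers get pre-rendered, static HTML.
    res.send(renderStaticHtml(req.path)); // hypothetical helper
  } else {
    // Everyone else gets the client-side JavaScript app.
    res.sendFile('index.html', { root: 'public' });
  }
});

app.listen(3000);
```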

2. Processing

Resources and Links

Google does not navigate from page to page the way a user would. Part of processing is checking the page for links to other pages and for the files needed to build the page. These URLs are extracted and added to Google’s crawl queue, which is used to prioritise and schedule crawling.

Google pulls the resource links (CSS, JS, and so on) needed to build the page from tags such as <link> tags. Links to other pages, on the other hand, must follow a specific format for Google to recognise them as links: an <a> tag with an href attribute is required for both internal and external links. There are various ways to make this work when links are generated with JavaScript, as the sketch below shows.
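
For illustration, a quick sketch of the difference when links are built in JavaScript (it assumes the page has a <nav> element): Google can only extract the URL when there is a real <a> tag with an href attribute.

```js
// Search-friendly: a real anchor with an href Google can extract,
// even though it is inserted with JavaScript.
const good = document.createElement('a');
good.href = '/category/blue-widgets';
good.textContent = 'Blue widgets';
document.querySelector('nav').appendChild(good);

// Not search-friendly: there is no <a href>, so Google will not
// treat this as a link, even though it navigates for users.
const bad = document.createElement('span');
bad.textContent = 'Blue widgets';
bad.addEventListener('click', () => {
  window.location.assign('/category/blue-widgets');
});
document.querySelector('nav').appendChild(bad);
```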

Internal links added with JavaScript will not be picked up until after the page has been rendered. In most cases this happens reasonably quickly and is not a cause for concern.

Duplicate elimination

Before the downloaded HTML is sent to rendering, it is checked for duplicates, and duplicate content may be eliminated. With app-shell models, the HTML response may contain very little content and code; in fact, the same code may appear on every page of the site, and even on multiple websites. This can result in pages being treated as duplicates and not being rendered right away. Worse, the wrong page or even the wrong website may show up in search results. This should resolve itself over time, but it can be a pain, especially for newer websites.
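
As an illustration, this is roughly what an app-shell response looks like (a sketch assuming an Express-style server): every URL returns the same near-empty HTML, which is why Google’s duplicate detection can briefly fold such pages together before rendering.

```js
const express = require('express');
const app = express();

// Every route returns the same near-empty "app shell"; the real
// content only appears after /js/app.js runs in the browser.
app.get('*', (req, res) => {
  res.send(`<!DOCTYPE html>
<html>
  <head><title>My App</title></head>
  <body>
    <div id="root"></div>
    <script src="/js/app.js"></script>
  </body>
</html>`);
});

app.listen(3000);
```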

Most Restrictive Directives

Google will choose the most restrictive directives between the HTML and the rendered version of a page. If a directive set by JavaScript conflicts with one in the HTML, Google simply obeys the more restrictive of the two. Noindex overrides index, and a noindex in the HTML skips rendering altogether.
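
A short sketch of the asymmetry this creates: JavaScript can tighten a directive, but it cannot loosen one, because a noindex in the raw HTML usually means the page is never rendered at all.

```js
// The raw HTML ships without a robots meta tag (indexable by default).
// JavaScript adding noindex WILL be honoured: Google picks the most
// restrictive directive between the HTML and the rendered page.
const meta = document.createElement('meta');
meta.name = 'robots';
meta.content = 'noindex';
document.head.appendChild(meta);

// The reverse does not work: if the raw HTML already contains
// <meta name="robots" content="noindex">, rendering is skipped,
// so JavaScript that removes or relaxes the tag is never seen.
```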

3. Render queue

Every page now goes to the renderer. One worry many SEOs had with JavaScript and two-stage indexing (HTML first, then the rendered page) was that pages might not be rendered for days or even weeks. When Google investigated, they found that pages went to the renderer at a median time of five seconds, with the 90th percentile at minutes. In most circumstances, the gap between fetching the HTML and rendering the page should not be a concern.

4. Renderer

The renderer is where Google renders a page to see what a user sees. This is where it processes the JavaScript and any changes JavaScript makes to the Document Object Model (DOM).
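
As a tiny illustration of what “changes made to the DOM” means here, completing the app-shell example from earlier: the raw HTML ships an empty container, and the content Google indexes only exists after a script like this runs.

```js
// The indexable content does not exist in the raw HTML; it is
// written into the DOM by JavaScript, so only the renderer sees it.
document.addEventListener('DOMContentLoaded', () => {
  const root = document.getElementById('root');
  root.innerHTML = '<h1>Blue Widgets</h1><p>Full product description…</p>';
});
```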

Google does this with a headless Chrome browser that is now “evergreen,” meaning it runs the latest Chrome version and supports the latest features. Until recently, Google rendered with Chrome 41, so many features were not supported.

Rendering at web scale may be the world’s eighth wonder. It’s a major undertaking that requires a serious investment of time and money, and because of that scale, Google takes various shortcuts to speed up the rendering process. Some SEO tools also render web pages at scale, managing millions of pages per day to improve their link indexes; this makes it possible to check for JavaScript redirects and to flag links inserted with JavaScript (marked with a JS tag in the link reports).

Cached Resources

Google relies heavily on caching. Pages, files, API responses: everything is cached before being passed to the renderer. Rather than downloading each resource for every page load, Google uses cached resources to speed things up.

This can lead to situations where older file versions are used in the rendering process, and the indexed version of a page contains pieces of stale files. When you make major changes, use file versioning or content fingerprinting to generate new file names, so that Google has to download the updated resource for rendering; a sketch follows below.
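
Bundlers such as webpack do this automatically by putting a content hash in output file names, but as a minimal sketch of the idea in Node.js (the `fingerprint` helper is a hypothetical name):

```js
const crypto = require('crypto');
const fs = require('fs');
const path = require('path');

// Embed a hash of the file contents in its name, so any change
// produces a new URL that Google's cache has never seen.
function fingerprint(filePath) {
  const hash = crypto
    .createHash('md5')
    .update(fs.readFileSync(filePath))
    .digest('hex')
    .slice(0, 8);
  const { dir, name, ext } = path.parse(filePath);
  const versioned = path.join(dir, `${name}.${hash}${ext}`);
  fs.copyFileSync(filePath, versioned); // e.g. app.js -> app.3f2a1b9c.js
  return versioned;
}

// The page then references the fingerprinted file:
// <script src="/js/app.3f2a1b9c.js"></script>
```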

No Fixed Timeout

A common JavaScript SEO myth holds that the renderer gives your page only five seconds to load. While making your site faster is always a good idea, this myth doesn’t square with how Google caches files, as described above: Google is effectively loading a page whose resources are already cached. The notion stems from testing tools like the URL Inspection Tool, which fetch resources in real time and therefore need a reasonable limit.

The renderer does not have a fixed timeout. Most likely, it does something similar to the publicly available Rendertron: waiting for a signal like networkidle0, which fires when there is no more network activity, with an upper time limit in case something gets stuck or someone tries to mine bitcoin on the pages.
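
We don’t know Google’s internals, but a Rendertron-style renderer built on Puppeteer would look roughly like this sketch: wait for the network to go idle, with a hard timeout as the safety net.

```js
const puppeteer = require('puppeteer');

async function render(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, {
    waitUntil: 'networkidle0', // no network connections for 500 ms
    timeout: 10000,            // hard cap in case something gets stuck
  });
  const html = await page.content(); // the serialized post-JS DOM
  await browser.close();
  return html;
}
```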

5. Crawl queue

Google has a page dedicated to crawl budget, but you should know that every site has its own crawl budget and that every request has to be prioritised. Google must also weigh crawling your site against crawling every other website on the internet. Newer sites, and sites with many dynamic pages, will be crawled more slowly. Some pages will be updated less often than others, and some resources may be requested less frequently.

Final thoughts

JavaScript is not something SEOs should fear, but it is a tool to be used with care. Hopefully, this post has given you a better understanding of how to work with it. Don’t hesitate to reach out to your developers, collaborate with them, and ask them questions; they’ll be your best friends when it comes to optimising your JavaScript site for search engines.

Credits – https://ahrefs.com/blog/javascript-seo/
