How to Learn SEO: SEO tutorial for Beginners in 2021 || Episode 5
Today is the fifth episode of the ‘Learn SEO’ series. How do search engine bots discover your website? How can you find out how many of your pages are indexed? Why might a search engine fail to find a website? We will also cover the robots.txt file and how to use it.
Crawling: Can search engines find your pages?
You already know that a prerequisite for your site to appear in the SERP is that it must be crawled and indexed by search engines. If you have a website of your own, check how many of its pages are indexed; this tells you whether Google can crawl and index your site.
One way to check a site’s indexing is to type “site:yourdomain.com” into the Google search bar and run the search.
(Figure: checking the number of indexed pages with the “site:” search operator)
Although the number Google shows is only an estimate, it gives you a rough idea of how many pages are indexed and how they appear in the SERP.
If your site does not appear in search engines, there are several possible reasons:
Your site is brand new and has not been crawled yet.
Your site has no backlinks.
Your site’s menu/navigation is so cluttered that search engine bots can’t crawl it.
Your site contains code that blocks search engines from crawling it.
Your site has been penalized by Google as a spam site.
The robots.txt file is located in the root directory of the website (e.g. yourdomain.com/robots.txt) and tells search engines which parts of the site should be crawled and which should not. It can also include directives about how fast the site should be crawled.
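As an illustration, a minimal robots.txt might look like this; the paths and sitemap URL here are made-up examples, not rules from any real site:

```
User-agent: *
Disallow: /admin/
Allow: /

Sitemap: https://yourdomain.com/sitemap.xml
```

Here `User-agent: *` means the rules apply to all crawlers, `Disallow: /admin/` asks them to skip that section, and the Sitemap line points crawlers to a list of pages you want discovered.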
How does the Google bot handle the robots.txt file?
If the Google bot does not find a robots.txt file for a site, it starts crawling the site without any instructions.
If the Google bot finds a robots.txt file for a site, it follows its instructions while crawling the site.
If the Google bot encounters an error while trying to access the robots.txt file and cannot resolve it, it will not crawl the site.
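You can simulate this rule-following behavior with Python’s standard urllib.robotparser module. This is a minimal sketch using made-up rules and a made-up domain, not Googlebot’s actual implementation:

```python
from urllib import robotparser

# Hypothetical robots.txt rules (example only, not from a real site).
rules = [
    "User-agent: *",
    "Disallow: /private/",
    "Allow: /",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Pages under /private/ are blocked; everything else may be crawled.
print(rp.can_fetch("*", "https://example.com/private/page.html"))  # False
print(rp.can_fetch("*", "https://example.com/blog/post.html"))     # True
```

In real use you would call `rp.set_url("https://yourdomain.com/robots.txt")` followed by `rp.read()` to fetch the live file instead of parsing inline rules.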