A robots.txt file tells web robots which of a website's pages they may crawl. When a page is disallowed in robots.txt, that directive instructs compliant crawlers to skip the page entirely.
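As a quick sketch of how a crawler interprets these rules, Python's standard-library `urllib.robotparser` can parse a robots.txt file and answer "may I fetch this URL?" (the `/private/` path and `example.com` domain below are made-up for illustration):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that disallows the /private/ directory
# for all user agents.
rules = """
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant robot checks each URL before fetching it.
print(parser.can_fetch("*", "https://example.com/private/page.html"))  # False
print(parser.can_fetch("*", "https://example.com/public/page.html"))   # True
```

A well-behaved crawler runs a check like `can_fetch` before every request; disallowed pages are simply never downloaded.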