How does Google crawl a page? [closed]

I am just curious about how Google crawls a page. I have a bit of code that tells me whether Googlebot is on my site and which pages it is visiting.

Suppose Google is crawling a page, for example /page.html, which contains links to, say, 10 other pages of the site.

Would it only add page.html for possible indexing, since that is the page it is on, or would it also store all the links on page.html for possible indexing?

Links are precisely how Google's bots get around the internet to find content. Yes, those links will be queued for possible indexing, unless they're excluded in some way (robots.txt, a NOINDEX meta tag, etc.).
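As a rough illustration of that exclusion step, here is a minimal sketch of how a crawler might check a discovered URL against robots.txt before queuing it, using Python's standard `urllib.robotparser`. The robots.txt content and paths are hypothetical, and this is only the robots.txt half of the picture; NOINDEX meta tags are evaluated later, after the page is fetched.

```python
from urllib.robotparser import RobotFileParser

def is_crawlable(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if robots.txt allows the given agent to fetch the URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# Hypothetical robots.txt that blocks /private/ for all crawlers.
robots = """User-agent: *
Disallow: /private/
"""

print(is_crawlable(robots, "Googlebot", "/page.html"))       # True
print(is_crawlable(robots, "Googlebot", "/private/x.html"))  # False
```

A URL that fails this check would simply never be added to the crawl queue, though it can still be indexed from external links unless it also carries a NOINDEX directive.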

Yes, Google's crawler, Googlebot, will store these links for possible indexing, unless the site's webmaster has restricted them.

Googlebot's crawl process begins with a list of webpage URLs, generated from previous crawl processes and augmented with Sitemap data provided by webmasters. As Googlebot visits each of these websites it detects links (SRC and HREF) on each page and adds them to its list of pages to crawl.
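The link-detection step described above can be sketched with Python's standard `html.parser`: walk the page's tags, collect every HREF and SRC value, and append them to the list of pages to crawl. This is only an illustration of the idea, not Googlebot's actual implementation, and the example page and paths are invented.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect HREF and SRC attribute values from a page, as a crawler would."""
    def __init__(self):
        super().__init__()
        self.found = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.found.append(value)

# Hypothetical page.html linking to other resources on the site.
page = '<a href="/about.html">About</a><img src="/logo.png"><a href="/contact.html">Contact</a>'

collector = LinkCollector()
collector.feed(page)
print(collector.found)  # ['/about.html', '/logo.png', '/contact.html']
```

In a real crawler each newly discovered URL would be deduplicated against URLs already seen and then appended to the crawl queue, which is how one seed page leads the bot to the rest of the site.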