Index Coverage report
See which pages Google has found on your site, which pages have been indexed, and any indexing issues encountered.
INDEX COVERAGE REPORT
Index coverage status in Search Console - Google Search Console Training
Getting started
Non-experts
If you are new to indexing or SEO, or have a small site, please read these guidelines.
- Determine whether you need to use this report. If your site has fewer than 500 pages, you probably don't need to use this report. Instead, use one of the following Google searches to see if your site is indexed:
  - site:<<site_root_domain_or_path>> - See a subset of pages that Google knows about on your site. Examples: site:example.com or site:example.com/petstore
  - site:<<your_site>> term1 term2 - Search for indexed pages containing specific terms on your site. Example: site:example.com/petstore iguanas zebras
  - site:<<exact-url>> - Search for the exact URL of a page on your site to see whether Google has indexed it. Example: site:http://example.com/petstore/gerbil
- Read how Google Search works. If you don't understand indexing, this report will confuse or frustrate you, trust us.
- Use this report to understand the general index status of your site. The report is not useful for investigating the index status of specific pages. To find the index status of a specific page, use the URL Inspection tool.
- What to look for in this report:
- Are most of the URLs green (valid) and/or gray (excluded)? Your site should consist mostly of valid and excluded pages: valid, because these pages are in the index; excluded, because Search Console thinks those URLs are excluded from the index for a reason that you can agree with.
- Are few (if any) URLs red (error)? Error URLs are almost always a problem. However, how much time you want to spend fixing indexing errors depends on how important the page is to your site.
- Are the gray (excluded) URL reasons what you expect? Excluded URLs are not indexed, but we think that's probably not an error. Reasons for exclusion can mean that the page is explicitly blocked from indexing (for example, by a robots.txt rule on your site, or a noindex tag on the page). Duplicate pages are also excluded (Google only indexes one version of a set of duplicate pages). Make sure that the reasons your pages are excluded are acceptable. If not, fix them according to the documentation for the specific excluded status.
- Is Google indexing the most important URLs on your site? The Index Coverage report isn't used to check individual URLs, but you can filter results to show just the valid URLs and then see if your important URLs are listed. (Note that the list of example URLs in the report is limited to 1,000 items, and isn't guaranteed to show all URLs in a given status, even when there are fewer than 1,000 items.) Check the index status of your homepage and key pages using the URL Inspection tool.
- Is Google finding most of your URLs? The report shows all the URLs that Google knows about on your site, whether or not they are indexed. If the total URL count in this report is much smaller than your site's page count, then Google isn't finding pages on your site. Some possible reasons for this:
- The pages, or your site, are new. It can take a week or so for Google to start crawling and indexing a new page or site. If your site or page is new, wait a few days for Google to find and crawl it. In an urgent situation, or if waiting doesn't seem to be working, you can explicitly ask Google to crawl individual pages.
- The pages aren't findable by Google. The pages should be linked from somewhere known to Google: from your homepage, from other known pages on your site, from other sites, or from a sitemap. For a new website, the best first step is to request indexing of your homepage, which should start Google crawling your website. For missing parts of a site, make sure they are linked properly. If you are using a site hosting service such as Wix or SquareSpace, check your site host's documentation to learn how to publish your pages and make them findable by search engines.
- Read the documentation for your specific status type to understand the reason and any possible fix recommendations for a specific status. Skipping the documentation will cost you more effort and time in the long run than reading the docs.
- What not to look for:
- Don't expect every URL on your site to be indexed. Some URLs might be duplicates or might not contain meaningful information.
- Excluded URLs are usually fine. Read and understand the specific reason for each excluded URL to confirm that the page is properly excluded.
- Error URLs should probably be fixed; read the error reason to understand the issue and how to fix it.
- The total coverage numbers above the chart are complete and accurate from Google's perspective, but don't expect them to match exactly your estimate of the number of URLs on your site. Minor discrepancies can occur for various reasons.
- Just because a page is indexed doesn't guarantee that it will show up in your search results. Search results are customized for each user's search history, location, and many other variables, so even if a page is indexed, it won't show up in every search, or in the same ranking when it does. Therefore, if Search Console says a URL is indexed but it doesn't turn up in your search results, you can assume that it is indexed and eligible to appear in search results.
FAQs
What does this report show?
The Index Coverage report shows whether specific URLs have been crawled and indexed by Google. (If you don't have a good knowledge of what these terms mean, please read how Google Search works.) Google finds URLs in many ways, and tries to crawl most of them. If a URL is missing or unavailable, Google will probably continue to try crawling that URL for a while.
A URL in this report can have one of the following statuses:
- Valid: Google found and indexed the page. Nothing else to do.
- Warning: Google found and probably indexed the page, but we think there is a problem. Read the warning description below to understand your next steps.
- Error: The URL is not indexed, and we think it's because of a mistake that you can correct. Read the error description below to understand your next steps.
- Excluded: The URL is not indexed, but that is probably the correct outcome. Either you are blocking Google from crawling and indexing the page, or the page has been classified as a duplicate of another, crawled page on your site.
What is indexing?
Indexing is when Google finds (crawls) your page, then processes the content of the page and puts the page into the Google index (indexes it), where the page is eligible to appear in Google Search results, as well as in other Google services, like Discover. For more about indexing, read how Google Search works.
How do I get my page or site indexed?
If you are using a site hosting service such as Wix or SquareSpace, your hosting service will probably tell Google whenever you publish or update a page. Check your site host's documentation to learn how to publish your pages and make them findable by search engines.
If you are creating a site or page without a hosting service, you can use a sitemap or various other methods to tell Google about new sites or pages.
We strongly recommend ensuring that your homepage is indexed. Starting from your homepage, Google should be able to index all the other pages on your site, provided your site has comprehensive and properly implemented site navigation for visitors.
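As a rough offline sanity check of the advice above, the following sketch (the link graph and paths are hypothetical, built as if from a crawl of your own site) walks internal links breadth-first from the homepage and flags pages a homepage-first crawler would never reach:

```python
from collections import deque

# Hypothetical internal-link graph: each page maps to the pages it links to.
# In practice you would build this by crawling your own site.
LINKS = {
    "/": ["/about", "/products"],
    "/about": ["/"],
    "/products": ["/products/widgets", "/"],
    "/products/widgets": [],
    "/orphan": [],  # not linked from anywhere
}

def reachable_from_home(links, home="/"):
    """Breadth-first walk of the link graph starting at the homepage."""
    seen = {home}
    queue = deque([home])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

reached = reachable_from_home(LINKS)
orphans = set(LINKS) - reached  # pages a homepage-first crawl never finds
```

Any page in `orphans` needs an inbound link or a sitemap entry before Google can discover it by crawling.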
Is it OK if a page isn't indexed?
Absolutely. Google doesn't index pages that are blocked by a robots.txt rule or noindex tag, pages that are duplicates of other pages on your site, or pages that are inappropriate to index (for example, variations of a page with different filters applied). Use the URL Inspection tool to see why a specific page isn't indexed. If there is an indexing error, or if a page was excluded for a reason that doesn't make sense, follow the documentation to understand and fix the issue.
SEOs, developers, and experienced website owners
If you're an experienced SEO, developer, or website owner, but haven't used the Index Coverage report yet:
- Read how Google Search works. If you don't understand indexing, this report will just be confusing or frustrating, trust us.
- Follow the guidelines in Navigating the report, including What to look for and What not to look for.
- Read the troubleshooting section to understand and fix common problems.
- Remember that Excluded is not necessarily a bad status for a URL. These URLs are excluded, and we think you probably intended for them to be. In the case of a duplicate URL, understand why the URL is a duplicate, and why Google made the choice it did. If you think the wrong page was chosen as canonical, you can give signals to Google about your preferred canonical URL.
- Read the documentation for your specific status and reason to understand the issue, and see tips for fixing it.
Navigating the report
The Index Coverage report shows the Google indexing status of all URLs that Google knows about in your property.
- The top-level summary page shows the results for all URLs in your property, grouped by status (error, warning, or valid) and specific reason for that status (such as Submitted URL not found (404)).
- Click a table row in the summary page to see a details page that focuses on all URLs with the same status/reason.
Summary page
The top-level page in the report shows the index status of all pages that Google has attempted to crawl on your site, grouped by status and reason.
What to look for
Ideally you should see a gradually increasing count of valid indexed pages as your site grows. If you see drops or spikes, see the troubleshooting section. The status table in the summary page is grouped and sorted by "status + reason".
Your goal is to get the canonical version of every important page indexed. Any duplicate or alternate pages should be labeled "Excluded" in this report. Duplicate or alternate pages have substantially the same content as the canonical page. Having a page marked duplicate or alternate is usually a good thing; it means that we've found the canonical page and indexed it. You can find the canonical for any URL by running the URL Inspection tool. See more reasons why pages might be missing.
What not to look for
- 100% coverage: You should not expect all URLs on your site to be indexed, only the canonical pages, as described above.
- Immediate indexing: When you add new content, it can take a few days for Google to index it. You can reduce the indexing lag by requesting indexing.
Primary crawler
The Primary crawler value on the summary page shows the default user agent type that Google uses to crawl your site. Available values are Smartphone and Desktop; these crawlers simulate a visitor using a mobile device or a desktop computer, respectively.
Google crawls all pages on your site using this primary crawler type. Google may additionally crawl a subset of your pages using a secondary crawler (sometimes called an alternate crawler), which is the other user agent type. For example, if the primary crawler for your site is Smartphone, the secondary crawler is Desktop; if the primary crawler is Desktop, your secondary crawler is Smartphone. The purpose of a secondary crawl is to try to get more information about how your site behaves when visited by users on another device type.
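One practical use of this distinction is classifying crawl requests in your own server logs. The sketch below uses a simple heuristic, and the user-agent strings are illustrative examples rather than guaranteed exact matches for current Googlebot versions: smartphone requests carry a mobile token, desktop requests do not.

```python
def crawler_type(user_agent):
    """Rough heuristic: classify a Googlebot request as the smartphone
    or desktop crawler based on its user-agent string."""
    if "Googlebot" not in user_agent:
        return "not-googlebot"
    return "smartphone" if "Mobile" in user_agent else "desktop"

# Illustrative user-agent strings (not authoritative):
desktop_ua = ("Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; "
              "compatible; Googlebot/2.1; +http://www.google.com/bot.html) "
              "Chrome/100.0.0.0 Safari/537.36")
mobile_ua = ("Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
             "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.0.0 "
             "Mobile Safari/537.36 (compatible; Googlebot/2.1; "
             "+http://www.google.com/bot.html)")
```

Counting the two classes in your access logs shows which crawler dominates your traffic, which should match the Primary crawler value in the report.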
Status
Each page can have one of the following status values:
- Error: The page is not indexed. See the specific error type description to learn more about the error and how to fix it. You should concentrate on these issues first.
- Warning: The page is indexed, but has an issue that you should be aware of.
- Excluded: The page is not indexed, but we think that was your intention. (For example, you might have deliberately excluded it by a noindex directive, or it might be a duplicate of a canonical page that we've already indexed on your site.)
- Valid: The page is indexed.
Reason
Each status (error, warning, valid, excluded) has a specific reason for that status. See Status type descriptions below for a description of each status type, and how to handle it.
Validation
The validation status for this issue. You should prioritize fixing issues that are in validation state "failed" or "not started".
About validation
After you fix all instances of a specific issue on your site, you can ask Google to validate your changes. If all known instances are gone, the issue is marked as fixed in the status table and dropped to the bottom of the table. Search Console tracks the validation state of the issue as a whole, as well as the state of each instance of the issue. When all instances of the issue are gone, the issue is considered fixed. (For the actual states recorded, see Issue validation state and Instance validation state.)
More about issue lifetime...
An issue's lifetime extends from the first time any instance of that issue was detected on your site until 90 days after the last instance was marked as gone from your site. If ninety days pass without any recurrences, the issue is removed from the report history.
The issue's first detected date is the first time the issue was detected during the issue's lifetime, and does not change. Therefore:
- If all instances of an issue are fixed, but a new instance of the issue occurs 15 days later, the issue is marked as open, and the "first detected" date remains the original date.
- If the same issue occurs 91 days after the last instance was fixed, the previous issue was closed, and so this is recorded as a new issue, with the first detected date set to "today".
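The 90-day rule above can be expressed as a small helper. This is an illustrative sketch of the rule as described, not Search Console's actual implementation; the function name and inputs are hypothetical.

```python
from datetime import date, timedelta

# Per the rule above: a recurrence within 90 days reopens the old issue;
# after that window the old issue is closed and a new one is recorded.
REOPEN_WINDOW = timedelta(days=90)

def classify_recurrence(last_instance_gone, new_instance_seen):
    """Decide whether a recurring instance reopens the old issue
    (keeping its original 'first detected' date) or starts a new one."""
    if new_instance_seen - last_instance_gone <= REOPEN_WINDOW:
        return "reopened"   # same issue; original first-detected date kept
    return "new issue"      # old issue closed; first detected = today
```

For example, a recurrence 15 days after the last fix reopens the issue, while one 91 days later is recorded as a new issue.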
Basic validation flow
Here is an overview of the validation process after you click Validate Fix for an issue. This process can take several days, and you will receive progress notifications by email.
- When you click Validate Fix, Search Console immediately checks a few pages.
- If the current issue exists in any of these pages, validation ends, and the validation state remains unchanged.
- If the sample pages do not have the current error, validation continues with state Started. If validation finds other unrelated issues, these issues are counted against that other issue type and validation continues.
- Search Console works through the list of known URLs affected by this issue. Only URLs with known instances of this issue are queued for recrawling, not the whole site. Search Console keeps a record of all URLs checked in the validation history, which can be reached from the issue details page.
- When a URL is checked:
  - If the issue is not found, the instance validation state changes to Passing. If this is the first instance checked after validation has started, the issue validation state changes to Looking good.
  - If the URL is no longer reachable, the instance validation state changes to Other (which is not an error state).
  - If the instance is still present, the issue state changes to Failed and validation ends. If a new instance is discovered on a page by normal crawling, it is considered another instance of this existing issue.
- When all error and warning URLs have been checked and the issue count is 0, the issue state changes to Passed. Important: Even when the number of affected pages drops to 0 and the issue state changes to Passed, the original severity label will still be shown (Error or Warning).
Even if you never click "start validation", Google can detect fixed instances of an issue. If Google detects that all instances of an issue have been fixed during its regular crawl, it will change the issue state to "N/A" on the report.
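The flow above can be summarized as a tiny state function. This is a simplified sketch of the documented behavior; the state names follow the report, but the function and its inputs are hypothetical.

```python
def issue_state(checked, total_known):
    """Summarize per-instance check results into an issue validation state.
    `checked` is a list of results so far: 'passing', 'failed', or 'other'.
    `total_known` is the number of known instances of the issue."""
    if "failed" in checked:
        return "Failed"        # validation ends at the first surviving instance
    if not checked:
        return "Started"       # validation begun, nothing checked yet
    if len(checked) < total_known:
        return "Looking good"  # everything checked so far is fixed
    return "Passed"            # all known instances fixed or unreachable
```

Note that 'other' results (unreachable URLs) count toward Passed, matching the rule that an unavailable page is treated as fixed.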
When is an issue considered "fixed" for a URL or item?
An issue is marked as fixed for a URL or item when either of the following conditions is met:
- The URL is crawled and the issue is no longer found on the page. For an AMP tag error, this can mean that you either fixed the tag or that the tag has been removed (if the tag is not required). During a validation attempt, it will be considered as "passed."
- If the page is not available to Google for any reason (page removed, marked noindex, requires authentication, and so on), the issue will be considered as fixed for that URL. During a validation attempt, it is counted in the "other" validation state.
Revalidation
When you click Revalidate for a failed validation, validation restarts for all failed instances, plus any new instances of this issue discovered through normal crawling.
You should wait for a validation cycle to complete before requesting another cycle, even if you have fixed some issues during the current cycle.
Instances that have passed validation (marked Passed) or are no longer reachable (marked Other) are not checked again, and are removed from the history when you click Revalidate.
Validation history
You can see the progress of a validation request by clicking the validation details link in the issue details page.
Entries in the validation history page are grouped by URL for the AMP report and Index Status report. In the Mobile Usability and Rich Result reports, items are grouped by the combination of URL + structured data item (as determined by the item's Name value). The validation state applies to the specific issue that you are examining. You can have one issue labeled "Passed" on a page, but other issues labeled "Failed", "Pending", or "Other".
Issue validation state
The following validation states apply to a given issue:
- Not started: There are one or more pages with an instance of this issue that you have never begun a validation attempt for. Next steps:
  - Click into the issue to learn the details of the error. Inspect the individual pages to see examples of the error on the live page using the AMP Test. (If the AMP Test does not show the error on the page, it is because you fixed the error on the live page after Google found the error and generated this issue report.)
  - Click "Learn more" on the details page to see the details of the rule that was violated.
  - Click an example URL row in the table to get details on that specific error.
  - Fix your pages and then click Validate fix to have Google recrawl your pages. Google will notify you about the progress of the validation. Validation typically takes up to about two weeks, but in some cases can take much longer, so please be patient.
- Started: You have begun a validation attempt and no remaining instances of the issue have been found yet. Next step: Google will send notifications as validation proceeds, telling you what to do, if necessary.
- Looking good: You started a validation attempt, and all issue instances that have been checked so far have been fixed. Next step: Nothing to do, but Google will send notifications as validation proceeds, telling you what to do.
- Passed: All known instances of the issue are gone (or the affected URL is no longer available). You must have clicked "Validate fix" to get to this state (if instances disappeared without you requesting validation, the state would change to N/A). Next step: Nothing more to do.
- N/A: Google found that the issue was fixed on all URLs, even though you never started a validation attempt. Next step: Nothing more to do.
- Failed: A certain threshold of pages still contain this issue after you clicked "Validate." Next steps: Fix the issue and revalidate.
Instance validation state
After validation has been requested, every instance of the issue is assigned one of the following validation states:
- Pending validation: Queued for validation. The last time Google looked, this issue instance existed.
- Passed: [Not available in all reports] Google checked for the issue instance and it no longer exists. Can reach this state only if you explicitly clicked Validate for this issue instance.
- Failed: Google checked for the issue instance and it's still there. Can reach this state only if you explicitly clicked Validate for this issue instance.
- Other: [Not available in all reports] Google couldn't reach the URL hosting the instance, or (for structured data) couldn't find the item on the page any more. Considered equivalent to Passed.
Note that the same URL can have different states for different issues; for example, if a single page has both issue X and issue Y, issue X can be in validation state Passed while issue Y on the same page is in validation state Pending.
URL discovery dropdown filter
You can use the dropdown filter above the chart to filter index results by how Google discovered the URL. The following values are available:
- All known pages [Default] - Show all URLs discovered by Google through any means.
- All submitted pages - Show only pages submitted in a sitemap to this report or by sitemap ping.
- Specific sitemap URL - Show only URLs listed in a specific sitemap that was submitted using this report. This includes any URLs in nested sitemaps.
A URL is considered submitted by a sitemap even if it was also discovered through another mechanism (for example, by organic crawling from another page).
Details page
Click a row in the summary page to open a details page for that status + reason combination. You can see details about the chosen issue by clicking Learn more at the top of the page.
The graph on this page shows the count of affected pages over time.
The table shows an example list of pages affected by this status + reason. You can click the following row elements:
The Source value on the details page shows which user agent type (Smartphone or Desktop) was used to crawl the URLs listed.
When you've fixed all instances of an error or warning, click Validate Fix to let Google know that you've fixed the issue.
See a URL marked with an issue that you've already fixed? Perhaps you fixed the issue AFTER the last Google crawl. Therefore, if you see a URL with an issue that you have fixed, be sure to check the crawl date for that URL. Check and confirm your fix, then request re-indexing.
Sharing the report
You can share issue details in the coverage or enhancement reports by clicking the Share
button on the page. This link grants access only to the current issue details page, plus any validation history pages for this issue, to anyone with the link. It does not grant access to other pages for your property, or enable the shared user to perform any actions on your property or account. You can revoke the link at any time by disabling sharing for this page.
Exporting report data
Many reports provide an export button
to export the report data. Both chart and table data are exported. Values shown as either ~ or - in the report (not available/not a number) will be zeros in the downloaded data.
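When post-processing a download, keep in mind that a zero cell may stand for a ~ or - in the report rather than a true zero. A minimal sketch of reading an export with the standard library (the column names here are hypothetical, not the exact export schema):

```python
import csv
import io

# Hypothetical exported data: Search Console writes '~' / '-' cells as 0,
# so a zero in the download can mean "no data" rather than a real count.
raw = """Date,Valid,Error
2023-05-01,120,3
2023-05-02,0,0
"""

rows = list(csv.DictReader(io.StringIO(raw)))
totals = {
    "Valid": sum(int(r["Valid"]) for r in rows),
    "Error": sum(int(r["Error"]) for r in rows),
}
```

Here the second row's zeros could equally mean the data was unavailable that day, so treat zero-only rows with care in any trend analysis.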
Troubleshooting
You can confirm the indexing status for any URL shown in this report by inspecting the URL as follows:
- Determine whether the index status really is a problem, based on the status type, indexing goal, and specific error.
- Read the specific information about the issue.
- Inspect the URL with the URL Inspection tool:
  - Click the inspect icon next to the URL in the examples table to open URL Inspection for that URL.
  - See crawl and index details for that URL in the Coverage > Crawl and Coverage > Indexing sections of the URL Inspection report.
  - To test the live version of the page, click Test live URL.
Common issues
Here are some of the most common indexing problems that you might see in this report:
Drop in total indexed pages without corresponding errors
If you see a drop in total indexed pages without a corresponding increase in errors, you might be blocking access to your existing pages via robots.txt, a noindex directive, or a required login. Look at the Excluded URLs for a spike that corresponds to your drop in Valid pages. Note that if these URLs were submitted in a sitemap, they would be marked as errors, not excluded.
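You can check a suspected robots.txt block offline with Python's standard-library parser; the rules and paths below are hypothetical:

```python
from urllib import robotparser

# Hypothetical robots.txt content; parse it offline to see which URLs
# Googlebot would be blocked from crawling.
rules = """\
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

blocked = [u for u in ("/private/report.html", "/public/page.html")
           if not rp.can_fetch("Googlebot", "https://example.com" + u)]
```

Running your list of dropped URLs through `can_fetch` quickly shows whether a robots.txt rule explains the drop.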
More Excluded than Valid pages
If you see more Excluded than Valid pages, look at the exclusion reasons. Common exclusion reasons include:
- You have a robots.txt rule that blocks Google from crawling large sections of your site. If you are blocking the wrong pages, unblock them.
- Your site has a large number of duplicate pages, probably because it uses parameters to filter or sort a common collection (for example: type=dress or color=green or sort=price). These pages probably should be excluded, if they are just showing the same content sorted, filtered, or reached in different ways.
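Parameter-driven variants like these can be grouped by normalizing the query string. A sketch with the standard library (`normalize` is a hypothetical helper, and real canonicalization involves more than parameter order):

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

def normalize(url):
    """Sort query parameters so that URLs differing only in parameter
    order map to the same normalized form."""
    parts = urlsplit(url)
    query = urlencode(sorted(parse_qsl(parts.query)))
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, ""))

a = normalize("https://example.com/shoes.php?color=red&size=7")
b = normalize("https://example.com/shoes.php?size=7&color=red")
```

Both URLs normalize to the same string, which is one way to estimate how many of your "duplicate" exclusions are mere parameter-order variants.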
Error spikes
Error spikes might be caused by a change in your template that introduces a new error, or you might have submitted a sitemap that includes URLs that are blocked for crawling by robots.txt, noindex, or a login requirement.
If you see an error spike:
- See if you can find any correspondence between the total number of indexing errors or total indexed count and the sparkline next to a specific error row on the summary page, as a clue to which issue might be affecting your total error or total indexed page count.
- Click into the details pages for any errors that seem to be contributing to your error spike. Read the description of the specific error type to learn how to handle it best.
- Click into an issue, and inspect an example page to see what the error is, if necessary.
- Fix all instances of the error, and request validation by clicking Validate Fix in the details page for that reason. Read more about validation.
- You'll get notifications as your validation proceeds, but you can check back after a few days to see whether your error count has gone down.
Server errors
A server error means that Googlebot couldn't access your URL, the request timed out, or your site was busy. As a result, Googlebot was forced to abandon the request.
Check the host status verdict for your site in the Crawl Stats report to see if Google is reporting site availability issues that you can confirm and fix.
Testing server connectivity
You can use the URL Inspection tool to see if you can reproduce a server error reported by the Index Coverage report.
Fixing server connectivity errors
- Reduce excessive page loading for dynamic page requests.
A site that delivers the same content for multiple URLs is considered to deliver content dynamically (for example, www.example.com/shoes.php?color=red&size=7 serves the same content as www.example.com/shoes.php?size=7&color=red). Dynamic pages can take too long to respond, resulting in timeout issues. Or the server might return an overloaded status to ask Googlebot to crawl the site more slowly. In general, we recommend keeping parameter lists short and using them sparingly. If you're confident about how parameters work for your site, you can tell Google how we should handle these parameters.
- Make sure your site's hosting server is not down, overloaded, or misconfigured.
If connection, timeout, or response issues persist, check with your web host and consider increasing your site's ability to handle traffic.
- Check that you are not inadvertently blocking Google.
You might be blocking Google due to a system-level issue, such as a DNS configuration issue, a misconfigured firewall or DoS protection system, or a content management system configuration. Protection systems are an important part of good hosting and are often configured to automatically block unusually high levels of server requests. However, because Googlebot often makes more requests than a human user, it can trigger these protection systems, causing them to block Googlebot and prevent it from crawling your website. To fix such issues, identify which part of your website's infrastructure is blocking Googlebot and remove the block. The firewall may not be under your control, so you may need to discuss this with your hosting provider.
- Control search engine site crawling and indexing wisely.
Some webmasters intentionally prevent Googlebot from reaching their websites, perhaps using a firewall as described above. In these cases, usually the intent is not to entirely block Googlebot, but to control how the site is crawled and indexed. If this applies to you, check the following:
  - To control Googlebot's crawling of your content, use a robots.txt file and configure URL parameters.
  - If you're worried about rogue bots using the Googlebot user-agent, you can verify whether a crawler is really Googlebot.
  - If you would like to change how frequently Googlebot crawls your site, you can request a change in Googlebot's crawl rate. Hosting providers can verify ownership of their IP addresses to enable this.
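Verifying Googlebot comes down to a reverse DNS lookup followed by a confirming forward lookup. Here is a sketch with injected resolvers so it runs offline; the IP address and hostname are illustrative, and in production you would use socket.gethostbyaddr and socket.gethostbyname:

```python
def is_real_googlebot(ip, reverse_dns, forward_dns):
    """Two-step check: reverse-resolve the IP, confirm the hostname is
    under googlebot.com or google.com, then forward-resolve that
    hostname and confirm it maps back to the original IP.
    `reverse_dns` / `forward_dns` are injected so the check can be
    tested offline."""
    host = reverse_dns(ip)
    if not host or not host.endswith((".googlebot.com", ".google.com")):
        return False
    return forward_dns(host) == ip

# Offline demonstration with stubbed resolvers (hypothetical records):
rev = {"66.249.66.1": "crawl-66-249-66-1.googlebot.com"}.get
fwd = {"crawl-66-249-66-1.googlebot.com": "66.249.66.1"}.get
```

The forward confirmation matters because anyone can fake a reverse DNS record for an IP range they control, but cannot make Google's forward records point back at it.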
404 errors
In general, we recommend fixing only 404 error pages, not 404 excluded pages. 404 error pages are pages that you explicitly asked Google to index but that were not found, which is clearly a problem. 404 excluded pages are pages that Google discovered through some other mechanism, such as a link from another page. If the page has been moved, you should return a 3XX redirect to the new page. Learn more about evaluating and fixing 404 errors.
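At the server, a moved page is handled by answering the old URL with a 301 and a Location header instead of a 404. A minimal WSGI sketch, with hypothetical paths:

```python
# Map of moved URLs to their new locations (illustrative paths).
MOVED = {"/old-petstore": "/petstore"}

def app(environ, start_response):
    """Tiny WSGI app: 301-redirect moved paths, 404 everything else."""
    path = environ.get("PATH_INFO", "/")
    if path in MOVED:
        start_response("301 Moved Permanently",
                       [("Location", MOVED[path])])
        return [b""]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

Most web servers and frameworks offer the same idea as configuration (for example, a redirect rule), so a code-level handler like this is rarely needed; the point is the 301 + Location response shape.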
Missing pages or sites
If your page is not in the report at all, one of the following is probably true:
- Google doesn't know about the page. Some notes about page discoverability:
  - If this is a new site or page, remember that it can take some time for Google to find and crawl new sites or pages.
  - In order for Google to learn about a page, you must either submit a sitemap or page crawl request, or else Google must find a link to your page somewhere.
  - After a page URL is known, it can take some time (up to a few weeks) before Google crawls some or all of your site.
  - Indexing is never instant, even when you submit a crawl request directly.
  - Google doesn't guarantee that all pages everywhere will make it into the Google index.
- Google can't reach your page (it requires a login, or is otherwise not available to all users on the internet).
- The page has a noindex tag, which prevents Google from indexing it.
- The page was dropped from the index for some reason.
To fix:
Use the URL Inspection tool to test the problem on your page. If the page is not in the Index Coverage report but is listed as indexed in the URL Inspection report, it was probably indexed recently, and will appear in the Index Coverage report soon. If the page is listed as not indexed in the URL Inspection tool (which is what you'd expect), test the live page. The live page test results should indicate what the issue is: use the information from the test and the test documentation to learn how to fix the issue.
"Submitted" errors and exclusions
Any indexing reason that uses the word "Submitted" in the title (for example, "Submitted URL returned 403") means that the URL is listed in a sitemap that is either referenced by your robots.txt file or submitted using the Sitemaps report.
To fix a "Submitted" issue:
- Fix the issue that prevents the page from being crawled
or - Remove the URL from your sitemap and resubmit the sitemap in the Sitemaps report (for fastest service)
or - Using the Sitemaps report, delete any sitemaps that contain the URL (and ensure that no sitemaps listed in your robots.txt file include this URL).
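If you maintain your sitemaps as files, removing a submitted URL can be scripted. This is a minimal sketch using Python's standard library; the sample sitemap and the remove_url helper are illustrative assumptions, not a Search Console API:

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def remove_url(sitemap_xml, bad_url):
    """Return the sitemap with every <url> entry for bad_url removed."""
    root = ET.fromstring(sitemap_xml)
    for url in list(root.findall(f"{{{NS}}}url")):
        loc = url.find(f"{{{NS}}}loc")
        if loc is not None and (loc.text or "").strip() == bad_url:
            root.remove(url)
    return ET.tostring(root, encoding="unicode")

sitemap = f"""<urlset xmlns="{NS}">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/old-page</loc></url>
</urlset>"""

# Drop the URL that should no longer be submitted, then resubmit the file.
cleaned = remove_url(sitemap, "https://example.com/old-page")
```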
FAQs
Why is my page in the index? I don't want it indexed.
Google can index any URL that it finds unless you include a noindex directive on the page (or it has been temporarily blocked), and Google can find a page in many different ways, including someone linking to your page from another site.
- If you want your page to be blocked from Google Search results, you can either require some kind of login for the page, or you can use a noindex directive on the page.
- If you want your page to be removed from Google Search results after it has already been found, you'll need to follow these steps.
Why hasn't my site been reindexed lately?
Google reindexes pages based on a number of criteria, including how often it thinks the page changes. If your site doesn't change often, it might be on a slower refresh rate, which is fine if your pages haven't changed. If you think your site needs a refresh, ask Google to recrawl it.
Can you please recrawl my page/site?
Ask Google to recrawl it.
Why are so many of my pages excluded?
Look at the exclusion reasons detailed by the Index Coverage report. Most exclusions are due to one of the following reasons:
- You have a robots.txt rule that is blocking us from crawling large sections of your site. Use the URL Inspection tool to confirm the problem.
- Your site has a large number of duplicate pages, typically because it uses parameters to filter or sort a common collection (for example: type=apparel or color=green or sort=price). These pages will be labeled as "duplicate" or "alternate" in the Index Coverage report.
- The URL redirects to another URL. Redirect URLs are not indexed; the redirect target is.
Google can't access my sitemap
Be sure that your sitemap is not blocked by robots.txt, is valid, and that you're using the proper URL in your robots.txt entry or Sitemaps report submission. Test your sitemap URL using a publicly available sitemap testing tool.
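A basic offline validity check along these lines can be written with the standard library. The sitemap_problems helper below is a simplified illustration checking only well-formedness, the root element, and absolute URLs; it is not a full sitemap validator:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def sitemap_problems(xml_text):
    """Return a list of basic problems found in a sitemap document."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as exc:
        return [f"not well-formed XML: {exc}"]
    problems = []
    if root.tag != f"{{{NS}}}urlset":
        problems.append("root element is not <urlset> in the sitemap namespace")
    for loc in root.iter(f"{{{NS}}}loc"):
        url = (loc.text or "").strip()
        if urlparse(url).scheme not in ("http", "https"):
            problems.append(f"<loc> is not an absolute http(s) URL: {url!r}")
    return problems

good = f'<urlset xmlns="{NS}"><url><loc>https://example.com/</loc></url></urlset>'
bad = f'<urlset xmlns="{NS}"><url><loc>/relative/path</loc></url></urlset>'
```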
Why does Google keep crawling a page that was removed?
Google continues to crawl all known URLs even after they return 4XX errors for a while, in case it's a temporary error. The only case in which a URL won't be crawled is when it returns a noindex directive.
To avoid showing you an eternally growing list of 404 errors, the Index Coverage report shows only URLs that have shown 404 errors in the past month.
I can see my page, why can't Google?
Use the URL Inspection tool to see whether Google can see the live page. If it can't, it should explain why. If it can, the problem is likely that the access error has been fixed since the last crawl. Run a live crawl using the URL Inspection tool and request indexing.
The URL Inspection tool shows no issues, but the Index Coverage report shows an error; why?
You might have fixed the error after the URL was last crawled by Google. Look at the crawl date for your URL (which should be visible in either the URL details page in the Index Coverage report or in the indexed version view in the URL Inspection tool). Determine whether you made any fixes since the page was crawled.
How do I find the index status of a specific URL?
To learn the index status of a specific URL, use the URL Inspection tool. You can't search or filter by URL in the Index Coverage report.
Status reasons
The following status types are exposed by the Index Coverage report:
Error
Pages with errors have not been indexed.
Server error (5xx): Your server returned a 500-level error when the page was requested. See Fixing server errors.
Redirect error: Google experienced one of the following redirect errors:
- A redirect chain that was too long
- A redirect loop
- A redirect URL that eventually exceeded the max URL length
- A bad or empty URL in the redirect chain
Use a web debugging tool, such as Lighthouse, to get more details about the redirect.
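The redirect errors listed above can be reproduced offline with a small sketch. The in-memory targets map (URL to redirect target, None meaning "serves content") and the MAX_HOPS limit are assumptions for illustration; a real check would issue HTTP requests, and Google's actual hop limit is not documented here:

```python
MAX_HOPS = 10  # assumed limit: crawlers give up after a handful of hops

def check_redirects(start, targets):
    """Follow redirects in `targets`; report loops and over-long chains."""
    seen, url = [], start
    while url is not None:
        if url in seen:
            return f"redirect loop at {url}"
        seen.append(url)
        if len(seen) > MAX_HOPS:
            return "redirect chain too long"
        url = targets.get(url)   # None -> the URL serves content
    return f"ok after {len(seen) - 1} redirect(s)"

chain = {"/a": "/b", "/b": "/c", "/c": None}  # healthy short chain
loop = {"/x": "/y", "/y": "/x"}               # broken: a redirect loop
```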
Submitted URL blocked by robots.txt: You submitted this page for indexing, but the page is blocked by your site's robots.txt file.
- Click the page in the Examples table to expand the tools side panel.
- Click Test robots.txt blocking to run the robots.txt tester for that URL. The tool should highlight the rule that is blocking that URL.
- Update your robots.txt file to remove or alter the rule, as appropriate. You can find the location of this file by clicking See live robots.txt on the robots.txt test tool. If you are using a web hosting service and do not have permission to change this file, search your service's documentation or contact their help center to tell them about the problem.
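You can approximate the robots.txt tester locally with Python's urllib.robotparser. The rules below are a hypothetical robots.txt; note this checks only crawl rules, not whether a URL is indexed:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt contents for example.com
rules = """\
User-agent: Googlebot
Disallow: /private/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# Check whether Googlebot is allowed to crawl specific URLs.
print(rp.can_fetch("Googlebot", "https://example.com/private/page"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/public/page"))   # True
```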
Submitted URL marked 'noindex': You submitted this page for indexing, but the page has a 'noindex' directive either in a meta tag or HTTP header. If you want this page to be indexed, you must remove the tag or HTTP header. Use the URL Inspection tool to confirm the error:
- Click the inspection icon next to the URL in the table.
- Under Coverage > Indexing > Indexing allowed? the report should show that noindex is preventing indexing.
- Confirm that the noindex tag still exists in the live version:
- Click Test live URL
- Under Availability > Indexing > Indexing allowed? check whether the noindex directive is still detected. If noindex is no longer present, you can click Request Indexing to ask Google to try again to index the page. If noindex is still present, you must remove it in order for the page to be indexed.
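Outside Search Console, the same confirmation can be scripted: fetch the page and look for "noindex" in the robots meta tag or the X-Robots-Tag response header. The has_noindex helper below is a simplified sketch (for example, it ignores single-quoted attributes and googlebot-specific meta tags):

```python
import re

def has_noindex(html, headers):
    """Return True if the page carries a noindex directive in a robots
    meta tag or in the X-Robots-Tag HTTP header (simplified check)."""
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return True
    # Look for <meta name="robots" content="...noindex...">
    for m in re.finditer(r"<meta[^>]+>", html, re.I):
        tag = m.group(0).lower()
        if 'name="robots"' in tag and "noindex" in tag:
            return True
    return False

page = '<html><head><meta name="robots" content="noindex"></head></html>'
print(has_noindex(page, {}))                                      # True
print(has_noindex("<html></html>", {"X-Robots-Tag": "noindex"}))  # True
```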
Submitted URL seems to be a Soft 404: You submitted this page for indexing, but the server returned what seems to be a soft 404. Learn how to fix this.
Submitted URL returns unauthorized request (401): You submitted this page for indexing, but Google got a 401 (not authorized) response. Either remove authorization requirements for this page, or else allow Googlebot to access your pages by verifying its identity. You can verify this error by visiting the page in incognito mode.
Submitted URL not found (404): You submitted a nonexistent URL for indexing. See Fixing 404 errors.
Submitted URL returned 403: The server denied Google access to the content (403 Forbidden). If this page should be indexed, grant access to anonymous visitors; otherwise, do not submit this page for indexing.
Submitted URL blocked due to other 4xx issue: The server returned a 4xx response code not covered by any other issue type described here for the submitted URL. You should either fix this error or not submit this URL for indexing. Try debugging your page using the URL Inspection tool.
Warning
Pages with a warning status might require your attention, and may or may not have been indexed, depending on the specific issue.
Indexed, though blocked by robots.txt: The page was indexed despite being blocked by your website's robots.txt file. Google always respects robots.txt, but this doesn't necessarily prevent indexing if someone else links to your page. Google won't request and crawl the page, but we can still index it, using the information from the page that links to your blocked page. Because of the robots.txt rule, any snippet shown in Google Search results for the page will probably be very limited.
Next steps:
- If you do want to block this page from Google Search, robots.txt is not the correct mechanism to avoid being indexed. To avoid being indexed, remove the robots.txt block and use 'noindex'.
- If you do not want to block this page, update your robots.txt file to unblock your page. You can use the robots.txt tester to determine which rule is blocking this page.
Page indexed without content: This page appears in the Google index, but for some reason Google could not read the content. Possible reasons are that the page might be cloaked to Google or the page might be in a format that Google can't index. This is not a case of robots.txt blocking. Inspect the page, and look at the Coverage section for details.
Valid
Pages with a valid status have been indexed.
Submitted and indexed: You submitted the URL for indexing, and it was indexed.
Indexed, not submitted in sitemap: The URL was discovered by Google and indexed. We recommend submitting all important URLs using a sitemap.
Excluded
These pages are typically not indexed, and we think that is appropriate. These pages are either duplicates of indexed pages, blocked from indexing by some mechanism on your site, or otherwise not indexed for a reason that we think is not an error.
Excluded by 'noindex' tag: When Google tried to index the page it encountered a 'noindex' directive and therefore did not index it. If you do not want this page indexed, congratulations! If you do want this page to be indexed, you should remove that 'noindex' directive. To confirm the presence of this tag or directive, request the page in a browser and search the response body and response headers for "noindex".
Blocked by page removal tool: The page is currently blocked by a URL removal request. If you are a verified site owner, you can use the URL removals tool to see who submitted a URL removal request. Removal requests are only good for about 90 days after the removal date. After that period, Googlebot may go back and index the page even if you do not submit another removal request. If you don't want the page indexed, use 'noindex', require authorization for the page, or remove the page.
Blocked by robots.txt: This page was blocked to Googlebot with a robots.txt file. You can verify this using the robots.txt tester. Note that this does not mean that the page won't be indexed through some other means. If Google can find other information about this page without loading it, the page could still be indexed (though this is less common). To ensure that a page is not indexed by Google, remove the robots.txt block and use a 'noindex' directive.
Blocked due to unauthorized request (401): The page was blocked to Googlebot by a request for authorization (401 response). If you do want Googlebot to be able to crawl this page, either remove authorization requirements, or allow Googlebot to access your page.
Crawled - currently not indexed: The page was crawled by Google, but not indexed. It may or may not be indexed in the future; no need to resubmit this URL for crawling.
Discovered - currently not indexed: The page was found by Google, but not crawled yet. Typically, Google wanted to crawl the URL but this was expected to overload the site; therefore Google rescheduled the crawl. This is why the last crawl date is empty in the report.
Alternate page with proper canonical tag: This page is a duplicate of a page that Google recognizes as canonical. This page correctly points to the canonical page, so there is nothing for you to do.
Duplicate without user-selected canonical: This page has duplicates, none of which is marked canonical. We think this page is not the canonical one. You should explicitly mark the canonical for this page. Inspecting this URL should show the Google-selected canonical URL.
Duplicate, Google chose different canonical than user: This page is marked as canonical for a set of pages, but Google thinks another URL makes a better canonical. Google has indexed the page that we consider canonical rather than this one. We recommend that you explicitly mark this page as a duplicate of the canonical URL. This page was discovered without an explicit crawl request. Inspecting this URL should show the Google-selected canonical URL.
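To check which canonical, if any, a page declares, you can extract its rel="canonical" link element. The CanonicalFinder class below is an illustrative sketch using the standard library; it does not handle canonicals declared in HTTP headers:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of a <link rel="canonical"> tag (simplified)."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and (a.get("rel") or "").lower() == "canonical":
            self.canonical = a.get("href")

finder = CanonicalFinder()
finder.feed('<head><link rel="canonical" href="https://example.com/shirts"></head>')
print(finder.canonical)  # https://example.com/shirts
```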
Not found (404): This page returned a 404 error when requested. Google discovered this URL without any explicit request or sitemap. Google might have discovered the URL as a link from another site, or possibly the page existed earlier and was deleted. Googlebot will probably continue to try this URL for some period of time; there is no way to tell Googlebot to permanently forget a URL, although it will crawl it less and less often. 404 responses are not a problem if intentional. If your page has moved, use a 301 redirect to the new location. Read Fixing 404 errors.
Page with redirect: The URL is a redirect, and therefore was not added to the index.
Soft 404: The page request returns what we think is a soft 404 response. This means that it returns a user-friendly "not found" message without a corresponding 404 response code. We recommend returning a 404 response code for truly "not found" pages, or adding more information to the page to let us know that it is not a soft 404. Learn more
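A rough heuristic for spotting soft 404s on your own site is to flag pages that return a 200 status but whose body reads like a not-found message. The phrase list below is purely illustrative; Google's actual detection is more sophisticated:

```python
# Illustrative phrases only; real detection is far more nuanced.
NOT_FOUND_PHRASES = ("not found", "no longer available", "doesn't exist")

def looks_like_soft_404(status_code, body_text):
    """Flag pages that say "not found" but return a 200 status."""
    text = body_text.lower()
    return status_code == 200 and any(p in text for p in NOT_FOUND_PHRASES)

print(looks_like_soft_404(200, "Sorry, this page was not found."))  # True
print(looks_like_soft_404(404, "Sorry, this page was not found."))  # False
```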
Duplicate, submitted URL not selected as canonical: The URL is one of a set of duplicate URLs without an explicitly marked canonical page. You explicitly asked for this URL to be indexed, but because it is a duplicate, and Google thinks that another URL is a better candidate for canonical, Google did not index this URL. Instead, we indexed the canonical that we selected. (Google only indexes the canonical in a set of duplicates.) The difference between this status and "Google chose different canonical than user" is that here you have explicitly requested indexing. Inspecting this URL should show the Google-selected canonical URL.
Blocked due to access forbidden (403): The user agent provided credentials, but was not granted access. However, Googlebot never provides credentials, so your server is returning this error incorrectly. This error should either be fixed, or the page should be blocked by robots.txt or noindex.
Blocked due to other 4xx issue: The server encountered a 4xx error not covered by any other issue type described here.
Source: https://support.google.com/webmasters/answer/7440203?hl=en