Can Google read content that is hidden under a "Read More" area?
-
For example, when a person first lands on a given page, they see a collapsed paragraph, but if they want more information they press the "Read More" button and it expands to reveal the full paragraph. Does Google crawl the full paragraph or just the shortened version?
In the same vein, what if you have a text box that contains three different tabs. For example, you're selling a product that has a text box with overview, instructions & ingredients tabs all housed under the same URL. Does Google crawl all three tabs?
Thanks for your insight!
-
Yes, for the most part. Google wants to deliver the best results for visitors based on their search query, so content hidden from initial view can hurt user experience, especially if the implementation is poor (not intuitive). As you know, original and compelling copy is best, but in many situations, such as a large ecommerce site, producing it is resource-intensive. It's best to avoid thin content. That said, hidden content does get indexed: grab a snippet of it, paste it into Google as a search, and look at the results. So yes, it's possible that Google will rank these pages even when the duplicate content sits in a hidden view.
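The distinction underlying the answer above is whether the hidden text is actually present in the HTML that Google fetches. The sketch below contrasts two hypothetical "Read More" implementations (the markup and URL are invented for illustration): one ships the full text and merely collapses it with CSS, the other only fetches the text when the button is clicked.

```javascript
// Implementation A: the full paragraph ships in the HTML and is merely
// collapsed. A crawler reading the raw page source sees all of it.
const collapsedInHtml = `
  <p class="teaser">Our widget is handmade...</p>
  <div class="full-text" style="display:none">
    Our widget is handmade from reclaimed oak and finished with beeswax.
  </div>
  <button onclick="document.querySelector('.full-text').style.display='block'">
    Read More
  </button>`;

// Implementation B: only the teaser ships; the rest is fetched over the
// network on click. The initial HTML never contains the full text.
const loadedOnClick = `
  <p class="teaser">Our widget is handmade...</p>
  <div class="full-text"></div>
  <button onclick="fetch('/api/description').then(r => r.text())
    .then(t => { document.querySelector('.full-text').textContent = t; })">
    Read More
  </button>`;

// Naive stand-in for "does the initial HTML contain this phrase?"
const crawlerSees = (html, phrase) => html.includes(phrase);

console.log(crawlerSees(collapsedInHtml, 'reclaimed oak')); // true
console.log(crawlerSees(loadedOnClick, 'reclaimed oak'));   // false
```

Tabbed product boxes work the same way: if all three tabs' text is in the HTML and tabs merely toggle visibility, Google can crawl all of it; if tab contents are fetched on click, it likely cannot.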
I would advise telling your client to remove any hidden content and rewrite the product descriptions. Depending on resources, they may or may not want to do this; if they don't, at least you made the recommendation. Good luck!
-
Ok, that makes sense. And can that be applied to a text box with tabs?
Follow-up to that: the situation is that I have a client that doesn't have a lot of "original" content on their e-commerce pages. It sounds like Google will take that content into account as "original" content but won't necessarily use it to build relevancy for any keywords hidden within. Is that correct?
-
I agree with Kevin in the answer above: the content may be crawled (depending on how you have hidden the paragraph in the HTML), but Google may not give full credit to content that only becomes visible after clicking the link.
We have a client with an FAQ section in a similar situation (https://www.fairsplit.com/faqs/): the website gets authority for the question titles of the FAQ section, but not for the answer content that appears only after clicking a question.
I hope this helps, let me know if you have further questions.
Regards,
Vijay
-
The Googlebot will crawl this information. However, Google may elect not to index it or discount this content in its rankings.
Related Questions
-
Is my content being fully read by Google?
Hi mozzers, I wanted to ask you a quick question regarding Google's crawlability of webpages. We just launched a series of content pieces but I believe there's an issue.
Intermediate & Advanced SEO | TyEl
Based on what I am seeing when I inspect the URL, it looks like Google is only able to see a few titles and internal links. For instance, when I inspect one of the URLs on GSC the screenshot shows almost nothing, and when I perform a "cache:" search I barely see any content, versus one of our blog posts where everything shows up (screenshots omitted). Would you agree with me that there's a problem here? Is this related to the heavy use of JS? If so, why wasn't I able to detect this with any of the crawling tools? Thanks!
Pages excluded from Google's index due to "different canonicalization than user"
Hi MOZ community, A few weeks ago we noticed a complete collapse in traffic on some of our pages (7 out of around 150 blog posts in question). We were able to confirm that those pages disappeared for good from Google's index at the end of January '18; they were still findable via all other major search engines. Using Google's Search Console (previously Webmaster Tools) we found the unindexed URLs in the list of pages being excluded because "Google chose different canonical than user". Content-wise, the page that Google falsely determines as canonical has little to no similarity to the pages it thereby excludes from the index.

About our setup: we are a SPA, delivering our pages pre-rendered, each with an (empty) rel=canonical tag in the HTTP header that's then dynamically filled with a self-referential link to the page's own URL via JavaScript. This seemed and seems to work fine for 99% of our pages but happens to fail for one of our top-performing ones (which is why the hassle 😉 ).

What we tried so far:
- Going through every step of this handy guide: https://moz.com/blog/panic-stations-how-to-handle-an-important-page-disappearing-from-google-case-study → inconclusive (healthy pages, no penalties etc.)
- Manually requesting re-indexation via Search Console → immediately brought back some pages; others shortly re-appeared in the index, then got kicked out again for the aforementioned reasons
- Checking other search engines → pages are only gone from Google; they can still be found via Bing, DuckDuckGo, and other search engines

Questions to you: How does the Googlebot operate with JavaScript, and does anybody know if their setup has changed in that respect around the end of January? Could you think of any other reason to cause the behavior described above? Eternally thankful for any help!
Intermediate & Advanced SEO | SvenRi
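The dynamically filled canonical described in this question can be sketched roughly as follows. This is a minimal illustration, not the poster's actual code: the fake `doc` object stands in for the browser's `document` so the sketch runs anywhere, and the URL is invented.

```javascript
// Fill an empty <link rel="canonical"> with a self-referential URL,
// stripping query string and fragment so URL variants canonicalize
// to the same clean address.
function setSelfCanonical(doc, loc) {
  const cleanUrl = loc.origin + loc.pathname;
  const link = doc.querySelector('link[rel="canonical"]');
  link.setAttribute('href', cleanUrl);
  return cleanUrl;
}

// Minimal stand-in for document/location (a real page would pass the
// browser's own `document` and `location`):
const fakeLink = { attrs: {}, setAttribute(k, v) { this.attrs[k] = v; } };
const fakeDoc = { querySelector: () => fakeLink };
const url = setSelfCanonical(fakeDoc, {
  origin: 'https://example.com',
  pathname: '/blog/top-post',
});

console.log(url);                 // → https://example.com/blog/top-post
console.log(fakeLink.attrs.href); // → https://example.com/blog/top-post
```

Note the fragility of this pattern: a canonical that only exists after script execution depends on Google rendering the page's JavaScript, so it can be missed or overridden, which is consistent with Google choosing its own canonical here.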
Best way to "Prune" bad content from large sites?
I am in the process of pruning my sites for low-quality/thin content. The issue is that I have multiple sites with 40k+ pages and need a more efficient way of finding the low-quality content than looking at each page individually. Is there an ideal way to find the pages that are worth noindexing that will speed up the process but not potentially harm any valuable pages?

My current plan of action is to pull data from analytics: if a URL hasn't brought any traffic in the last 12 months, it is probably safe to assume the page is not beneficial to the site. My concern is that some of these pages might have links pointing to them, and I want to make sure we don't lose that link juice. But assuming we just noindex the pages, the authority should still pass along; and in theory, the pages that haven't brought any traffic to the site in a year probably don't have much authority to begin with.

Any recommendations on the best way to prune content efficiently on sites with hundreds of thousands of pages? Also, is there a benefit to noindexing the pages vs. deleting them? What is the preferred method, and why?
Intermediate & Advanced SEO | atomiconline
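The triage logic described in this question can be sketched as a simple pass over exported data. The field names (sessions over the last 12 months, external link count) are hypothetical analytics/link-index exports, not any specific tool's API, and the sample URLs are made up.

```javascript
// Classify pages: keep pages with traffic, noindex dead pages that still
// hold external links (preserving link equity), and flag the rest as
// candidates for noindex or outright deletion (410).
function triagePages(pages) {
  return pages.map(page => {
    let action;
    if (page.sessions12mo > 0) {
      action = 'keep';              // still earning traffic
    } else if (page.externalLinks > 0) {
      action = 'noindex';           // no traffic, but preserve link equity
    } else {
      action = 'noindex-or-delete'; // no traffic, no links
    }
    return { url: page.url, action };
  });
}

const sample = [
  { url: '/guides/widgets',  sessions12mo: 1400, externalLinks: 12 },
  { url: '/old-promo-2014',  sessions12mo: 0,    externalLinks: 3  },
  { url: '/tag/misc-page-9', sessions12mo: 0,    externalLinks: 0  },
];

console.log(triagePages(sample).map(p => p.action));
// → [ 'keep', 'noindex', 'noindex-or-delete' ]
```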
Should pages with rel="canonical" be put in a sitemap?
I am working on an ecommerce site and I am going to add different views to the category pages. The views will all have different urls so I would like to add the rel="canonical" tag to them. Should I still add these pages to the sitemap?
Intermediate & Advanced SEO | EcommerceSite
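The usual recommendation for the sitemap question above is that a sitemap should list only canonical URLs, so view variants that canonicalize elsewhere are left out. A small sketch (URLs invented for illustration):

```javascript
// Keep a page in the sitemap only if it is its own canonical
// (self-referential); filtered/sorted views that point at the base
// category page are excluded.
function sitemapUrls(pages) {
  return pages.filter(p => p.canonical === p.url).map(p => p.url);
}

const pages = [
  { url: '/laptops',            canonical: '/laptops' },
  { url: '/laptops?sort=price', canonical: '/laptops' },
  { url: '/laptops?view=grid',  canonical: '/laptops' },
  { url: '/tablets',            canonical: '/tablets' },
];

console.log(sitemapUrls(pages)); // → [ '/laptops', '/tablets' ]
```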
Can you recover from "Unnatural links to your site—impacts links" if you remove them or have they already been discounted?
If Google has already discounted the value of the links, then my rankings dropped because these links passed value in the past and now they don't. Is there any reason to remove them? If I do remove them, is there a chance of "recovery", or should I just move forward with my 8-month-old blogging/content marketing campaign?
Intermediate & Advanced SEO | Beastrip
Client site is lacking content. Can we still optimize without it?
We just signed a new client whose site is really lacking in terms of content. Our plan is to add content to the site in order to achieve some solid on-page optimization. Unfortunately the site design makes adding content very difficult! Does anyone see where we may be going wrong? Is added content really the only way to go? http://empathicrecovery.com/
Intermediate & Advanced SEO | RickyShockley
Does Google read texts when display=none?
Hi, In our e-commerce site, on category pages we have pagination (i.e. Toshiba laptops page 1, page 2, etc.), implemented with rel="next" and rel="prev". On the first page of each category we display a header with lots of text information. This header is removed on the following pages using display:none. Since this is only a CSS display game, I wondered whether Google might still read it and consider it duplicated content. Thanks
Intermediate & Advanced SEO | BeytzNet
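One common alternative to the display:none approach asked about above is to simply not emit the header markup at all on pages 2+, so no hidden duplicate text exists to worry about. A minimal server-side rendering sketch (template and data are invented for illustration):

```javascript
// Emit the category intro header only when rendering page 1; later pages
// omit the block entirely instead of hiding it with CSS.
function renderCategoryPage(category, page) {
  const header = page === 1
    ? `<div class="category-intro">${category.introText}</div>`
    : '';
  return `<h1>${category.name}</h1>${header}<ul class="products"></ul>`;
}

const toshiba = { name: 'Toshiba Laptops', introText: 'Long intro copy...' };

console.log(renderCategoryPage(toshiba, 1).includes('category-intro')); // true
console.log(renderCategoryPage(toshiba, 2).includes('category-intro')); // false
```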
ECommerce products duplicate content issues - is rel="canonical" the answer?
Howdy, I work on a fairly large eCommerce site, shop.confetti.co.uk. Our CMS doesn't allow us to have one product with multiple colour and size options, so we created individual product pages for each product variation. This of course means that we have duplicate content issues. The layout of the shop works like this: there is a product group page (here is our disposable camera group) and the individual product pages sit below it. We also use a Google Shopping feed. I'm sure we're being penalised, as so many of the products on our site are duplicated. So, my question is this: is rel="canonical" the best way to stop being penalised, and how can I implement it? If not, are there any better suggestions? Also, we have targeted some long-tail keywords in some of the product descriptions, so will using rel="canonical" affect this or the Google Shopping feed? I'd love to hear experiences from people who have been through similar things and what the outcome was in terms of ranking/ROI. Thanks in advance.
Intermediate & Advanced SEO | Confetti_Wedding
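For the variant-pages question above, the typical rel="canonical" setup points every colour/size variant at one primary product URL. A small sketch; the variant-to-parent mapping is invented for illustration (a real CMS would supply it):

```javascript
// Build the canonical tag for a page: variants point at their parent
// product, and pages with no parent are self-canonical.
function canonicalTag(pageUrl, variantToParent) {
  const target = variantToParent[pageUrl] || pageUrl;
  return `<link rel="canonical" href="${target}" />`;
}

const variantToParent = {
  '/disposable-camera-blue': '/disposable-camera',
  '/disposable-camera-pink': '/disposable-camera',
};

console.log(canonicalTag('/disposable-camera-blue', variantToParent));
// → <link rel="canonical" href="/disposable-camera" />
console.log(canonicalTag('/disposable-camera', variantToParent));
// → <link rel="canonical" href="/disposable-camera" />
```

One trade-off to note: content unique to a canonicalized variant page (such as long-tail keyword copy) generally won't rank on its own, since Google consolidates signals onto the canonical URL.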