

Google’s Core Web Vitals, a set of metrics deemed essential to delivering a good user experience, now have their own report in Search Console.

Core Web Vitals were first introduced earlier this month as a way to measure the quality of the user experience provided by a website.

Google considers these metrics “critical” to all web experiences, and is now providing site owners with an easy way to measure them.

See: Google’s Core Web Vitals to Become Ranking Signals

Measuring Core Web Vitals in Search Console

Google is rolling out a Core Web Vitals report in Search Console which will replace the old Speed report.

Replacing the Speed report with the Core Web Vitals report goes to show how Google’s thinking has evolved regarding user experience.

In order to provide a good user experience, according to Google, a site needs to meet certain expectations for loading, interactivity, and visual stability.

With that said, let’s take a look at what exactly are the Core Web Vitals.

What are the Core Web Vitals?

These three metrics represent the Core Web Vitals:

Largest Contentful Paint: measures perceived load speed and marks the point in the page load timeline when the page’s main content has likely loaded.

An ideal speed is 2.5 seconds or faster.

First Input Delay: measures responsiveness and quantifies the experience users feel when trying to first interact with the page.

An ideal measurement is less than 100 milliseconds.

Cumulative Layout Shift: measures visual stability and quantifies the amount of unexpected layout shift of visible page content.

An ideal measurement is less than 0.1.
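Taken together, the three thresholds above can be expressed as a small classifier. This is a sketch, not any official API: the "Good" cut-offs come from the article, while the upper "Poor" cut-offs (4 seconds, 300 ms, 0.25) are the values Google publishes for the Needs Improvement band.

```javascript
// Classify a Core Web Vitals measurement into the three buckets used by
// Search Console. "Good" thresholds are from the article; "Poor" cut-offs
// (4000 ms, 300 ms, 0.25) follow Google's published guidance.
const THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 }, // milliseconds
  fid: { good: 100, poor: 300 },   // milliseconds
  cls: { good: 0.1, poor: 0.25 },  // unitless score
};

function rateVital(metric, value) {
  const t = THRESHOLDS[metric];
  if (!t) throw new Error(`Unknown metric: ${metric}`);
  if (value <= t.good) return 'Good';
  if (value <= t.poor) return 'Needs Improvement';
  return 'Poor';
}

console.log(rateVital('lcp', 2300)); // 'Good'
console.log(rateVital('fid', 150));  // 'Needs Improvement'
console.log(rateVital('cls', 0.3));  // 'Poor'
```

The same buckets drive the Poor / Needs Improvement / Good tabs described in the report walkthrough below.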

Why are these metrics more important than others?

Google rationalizes choosing these metrics as the Core Web Vitals because they: capture important user-centric outcomes, are measurable, and have supporting lab diagnostic metric equivalents.

Reading the Core Web Vitals Report

Here’s how to make sense of what you see in the new report.

The Core Web Vitals report shows URL performance grouped by status, metric type, and URL group (groups of similar web pages).

On the Overview tab you can toggle between ‘Poor,’ ‘Needs Improvement,’ or ‘Good’ tabs.

If this sounds similar to navigating other reports in Search Console, it’s because the Core Web Vitals report works exactly the same way.

Improving Core Web Vitals

Google recommends fixing everything labeled “Poor” first, then prioritizing what to do next based on the issues affecting the most URLs.

Non-technical users may need the assistance of a developer to fix specific issues.

If that’s the case, you can download the reports and send them to the person assisting you.

Google says some of the most common fixes include:

Reduce your page size to less than 500KB.

Limit the number of page resources to 50.

Consider using AMP.

Like other Search Console reports, when an issue is fixed it can be validated directly within the Search Console report.

Source: Google Search Console Help


Google Discusses Grouping URLs For Core Web Vitals Scores

In a Google office hours hangout, John Mueller was asked whether similar kinds of pages, like category pages, receive aggregated Core Web Vitals scores.

The question matters because some sections of a site might not get enough visits from Chrome users to provide data back to Google for Core Web Vitals scoring.

The question asked was:

“John, you don’t group URLs by type, do you? Because we’ve noticed something similar, that category pages also don’t have enough Chrome views in order to give perfect data.

But we get messages saying these are similar pages.”

John Mueller’s answer was fast and without any ambiguity.

He answered:

“Yeah. We do that. We do that with the Chrome User Experience Report data, the field real-world data, essentially, where we try to recognize when there are pages that are similar enough that we could group them together.”

So what John is saying is that Google groups similar pages together.

Next he follows up and talks about the scoring of those groups.


“And then that could be… I don’t know how that would look in practice.

It could be something where all of your category pages are in one group and we say, well, these pages perform similarly. So if we find a new URL that is also a part of this group, we don’t have to have data for that new URL. We can rely on the data for the group overall.”

That confirms that if Google doesn’t have Core Web Vitals data for an individual page, it will simply assign that page the overall score of the group it belongs to.

Next Mueller discusses how this might create anomalies in the Google Search Console report.


“And I think that sometimes throws things off a little bit in the sense of we might have one group, essentially, for a site. But that could contain thousands of URLs.

So, in the report in Search Console, I think we would report that as thousands of URLs have this problem.

And then we just show that one part of the group, essentially.

But not seeing any data at all in one report and seeing a lot of data in the other report, that feels kind of weird.”

Core Web Vitals Group Scoring

That Google groups URLs together for scoring is something to keep in mind.

Aggregated scoring could explain why some publishers and SEOs don’t see fixes acknowledged in their Core Web Vitals report when only one section of a site is fixed rather than the entire site.

The scores of the fixed sections might get aggregated with the scores of sections that are being crawled and scored.


Google’s Core Web Vitals Badge Likely Won’t Happen

Google says there are no plans for a Core Web Vitals badge in search results after proposing the idea when the metrics were first introduced.

This was stated by Google’s Search Advocate John Mueller during a Google Search Central SEO office-hours hangout recorded on January 21.

A question was submitted asking for an update on the Core Web Vitals badge and whether it’s something that will be rolled out in the future.

It was never 100% confirmed there would be a Core Web Vitals badge in SERPs, but it was an idea Google mentioned on numerous occasions.

Now it sounds like Google won’t be following through on its idea.

Read Mueller’s full response in the section below.

No Plans For A Core Web Vitals Badge In Search Results

Mueller says he can’t promise a CWV badge will never happen, but chances aren’t good.

Since the badge hasn’t rolled out yet, and the idea was first proposed over a year ago, the feeling is that it won’t happen.

“I can’t promise on what will happen in the future, unfortunately. And since we haven’t done this badge so far, and it’s been like over a year, my feeling is probably it will not happen.

I don’t know for certain, and it might be that somewhere a team at Google is making this badge happen and will get upset when I say it, but at least so far I haven’t seen anything happening with regards to a badge like this.

And my feeling is, if we wanted to show a badge in the search results for Core Web Vitals or Page Experience, then probably we would have done that already.”

Mueller brings up the fact that Core Web Vitals and Page Experience are always evolving.

The Core Web Vitals metrics, as they are defined today, may include different measurements in the future. It depends on what users care about.

“That said, everything around Core Web Vitals and Page Experience is constantly being worked on. And we’re trying to find ways to improve those metrics to include other aspects that might be critical for websites or for users that they care about.

“So I wouldn’t be surprised if any of this changes. And it might be that, at some point, we have metrics that are really useful for users, and which make sense to show more to users, and maybe at that point we’ll have something more visible [in] the search results, or within Chrome, or I don’t know. It’s really hard to say there.”

My interpretation of Mueller’s response is that a Core Web Vitals badge in search results isn’t an ideal solution, considering the criteria for earning the badge may change from one year to another.

If the Core Web Vitals were a set of metrics that would remain the same from year to year then a badge might make more sense, but that’s not the case.

Hear Mueller’s response in the video below:


Advanced Core Web Vitals: A Technical SEO Guide

Real humans want good web experiences. What does that look like in practice?

Well, one recent study cited by Google in a blog post about Core Web Vitals found that mobile web users only kept their attention on the screen for 4-8 seconds at a time.

Read that again.

You have less than 8 seconds to deliver interactive content and get a user to complete a task.

Enter Core Web Vitals (CWV). These three metrics are designed to measure site performance in terms of human experience. The open-source Chromium project announced the metrics in early May 2020, and they were swiftly adopted across Google products.

How do we qualify performance in user-centric measurements?

Is it loading?

Can I interact?

Is it visually stable?

Fundamentally, Core Web Vitals measure how long it takes to complete the script functions needed to paint the above-the-fold content. The arena for these Herculean tasks is a 360 x 640 viewport. It fits right in your pocket!

This war-drum for unaddressed tech debt is a blessing to a lot of product owners and tech SEO professionals who have been backlogged in favor of new features and shiny baubles.

Is the Page Experience update going to be Mobilegeddon 4.0?

Probably not.

The Page Experience Update

For all the buzz, CWV are just elements of a larger ranking signal. Expected to roll out gradually from mid-June through August 2021, the Page Experience ranking signal combines Core Web Vitals with Google’s existing page experience signals: mobile-friendliness, HTTPS security, safe browsing, and the absence of intrusive interstitials.

Updated documentation clarifies that the rollout will be gradual and that “sites generally should not expect drastic changes.”

Important things to know about the update:

Page Experience is evaluated per URL.

Page experience is based on a mobile browser.

AMP is no longer required for Top Stories carousels.

Passing CWV is not a requirement to appear in Top Stories carousels.

A New Page Experience Report In Search Console

Search Console now includes a Page Experience report. The fresh resource includes backdated data for the last 90 days.

In order for a URL to be “Good,” it must meet the following criteria:

The URL has Good status in the Core Web Vitals report.

The URL has no mobile usability issues according to the Mobile Usability report.

The site has no security issues.

The URL is served over HTTPS.

The site has no Ad Experience issues, or the site was not evaluated for Ad Experience.

The new report offers high-level widgets linking to reports for each of the five “Good” criteria.

Workflow For Diagnosing & Actioning CWV Improvements

First, an important caveat regarding Field vs Lab data.

Core Web Vitals assessments and the Page Experience ranking signal use field data gathered by the Chrome User Experience Report (CrUX).

Which Users Are Part Of The Chrome User Experience Report?

CrUX data is aggregated from users who meet three criteria:

The user opted-in to syncing their browsing history.

The user has not set up a Sync passphrase.

The user has usage statistic reporting enabled.

CrUX is your source of truth for the Core Web Vitals assessment.

You can access CrUX data using Google Search Console, PageSpeed Insights (page-level), the public Google BigQuery project, or an origin-level dashboard in Google Data Studio.

Why would you use anything else? Well, CWV Field Data is a restricted set of metrics with limited debugging capabilities and requirements for data availability.

Why Doesn’t My Page Have Data Available From CrUX?

When testing your page, you may see “The Chrome User Experience Report does not have sufficient real-world speed data for this page.”

Core Web Vitals issues are best identified using field data and then diagnosed/QAed using lab data.

Lab Data allows you to debug performance with end-to-end and deep visibility into UX. It’s called “lab” as this emulated data is collected within a controlled environment with predefined device and network settings.

You can get lab data from PageSpeed Insights, Chrome DevTools’ Lighthouse panel, and Chromium-based crawlers like a local Node.js Lighthouse instance or DeepCrawl.

Let’s dive into a workflow process.

1. Identify Issues With Crux Data Grouped By Behavior Patterns In Search Console.

Start with Search Console’s Core Web Vitals report to identify groups of pages that require attention. This data set uses Crux data and does you the kindness of grouping together example URLs based on behavior patterns.

If you solve the root issue for one page, you’re likely to fix it for all pages sharing that CWV woe. Typically, these issues are shared by a template, CMS instance, or on-page element. GSC does the grouping for you.

Focus on Mobile data, as Google is moving to a Mobile-First Index and CWV is set to affect mobile SERPs. Prioritize your efforts based on the number of URLs impacted.

Save these example URLs for testing throughout the improvement process.

2. Use PageSpeed Insights To Marry Field Data With Lab Diagnostics.

Once you’ve identified pages that need work, use PageSpeed Insights (powered by Lighthouse and Chrome UX Report) to diagnose lab and field issues on a page.

Remember that lab tests are one-off emulated tests. One test is not a source of truth or a definitive answer. Test multiple example URLs.

PageSpeed Insights can only be used to test publicly available and indexable URLs.

If you’re working on noindex or authenticated pages, CrUX data is available via the API or BigQuery. Lab tests should use Lighthouse.

3. Create A Ticket. Do The Development Work.

I encourage you as SEO professionals to be part of the ticket refinement and QA processes.

Development teams typically work in sprints. Each sprint includes set tickets. Having well-written tickets allows your development team to better size the effort and get the ticket into a sprint.

In your tickets, include:

Follow a simple format:

Eg.: As a performant site, I want to include inline CSS for node X on page template Y in order to achieve the largest contentful paint for this page template in under 2.5 seconds.

Define when the goal has been achieved. What does “done” mean?

Think about which tool is used, what metric/marker to look for, and the behavior indicating a pass or fail.

Use first-party documentation when available. Please no fluffy blogs. Please?

4. QA Changes In Staging Environments Using Lighthouse.

Before code is pushed to production, it’s often put in a staging environment for testing. Use Lighthouse (via Chrome DevTools or local node instance) to measure Core Web Vitals.

If you’re new to testing with Lighthouse, you can learn about ways to test and testing methodology in A Technical SEO Guide to Lighthouse Performance Metrics.

Keep in mind that lower environments typically have fewer resources and will be less performant than production.

Rely on the acceptance criteria to home in on whether the development work completed met the task given.

Largest Contentful Paint

Represents: Perceived loading experience.

Measurement: The point in the page load timeline when the page’s largest image or text block is visible within the viewport.

Key Behaviors: Pages using the same page templates typically share the same LCP node.

Available as: Lab and Field Data.

What Can Be LCP?

The LCP metric measures when the largest text or image element in the viewport is visible.

Possible elements that can be a page’s LCP node include:

<img> elements.

<image> elements inside an <svg>.

<video> elements (the poster image).

Background images loaded via the url() CSS function.

Text nodes inside block-level elements.

How To Identify LCP Using Chrome DevTools

Open the page in Chrome, emulating a Moto G4.

Navigate to the Performance panel of Dev Tools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).

Hover over the LCP marker in the Timings section.

The element(s) that correspond to the LCP are detailed in the Related Node field.

What Causes Poor LCP?

There are four common issues causing poor LCP:

Slow server response times.

Render-blocking JavaScript and CSS.

Slow resource load times.

Client-side rendering.

Source issues for LCP are painted in broad strokes at best. Unfortunately, none of the single phrases above will likely be enough to pass along to your dev team with meaningful results.

However, you can give the issue momentum by homing in on which of the four origins is in play.

Improving LCP is going to be collaborative. Getting it fixed means sitting in on dev updates and following up as a stakeholder.

Diagnosing Poor LCP Because Of Slow Server Response Time

Where to look: CrUX Dashboard v2 – Time to First Byte (TTFB) (page 6).

If you see consistently poor TTFB in your field data, then slow server response time is dragging down LCP.

How To Fix Slow Server Response Time

Server response time is made of numerous factors bespoke to the site’s technology stack. There are no silver bullets here. Your best course of action is to open a ticket with your development team.

Some possible ways to improve TTFB are:

Optimize the server.

Route users to a nearby CDN.

Cache assets.

Serve HTML pages cache-first.

Establish third-party connections early.

Diagnosing Poor LCP Because Of Render-Blocking JavaScript And CSS

Where to look: Lighthouse (via Chrome DevTools, PageSpeed Insights, or a Node.js instance). Each of the solutions below includes a relevant audit flag.

How To Fix Render-blocking CSS

CSS is inherently render-blocking: browsers treat it as a render-blocking resource by default, so it directly impacts critical rendering path performance.

Minify CSS.

If your site uses a module bundler or build tool, find the plugin that will systematically minify the style sheets.

Defer non-critical CSS.

If certain styles are only used on other pages, move them into a separate style sheet that only those pages call.

Inline critical CSS.

Use Dynamic Media Queries.

Media queries are simple filters that, when applied to CSS styles, break out the styles based on the type of device rendering the content.

Using dynamic media queries means that instead of calculating styles for all viewports, you calculate only the values needed for the requesting viewport.
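As a toy illustration of the minification step mentioned above: stripping comments and collapsing whitespace is fundamentally all a minifier does. The sketch below is for illustration only; a real build pipeline should use a dedicated tool such as cssnano rather than hand-rolled regexes.

```javascript
// A deliberately naive CSS "minifier" illustrating what build-time
// minification does. Real pipelines should use a dedicated tool instead.
function minifyCss(css) {
  return css
    .replace(/\/\*[\s\S]*?\*\//g, '')  // drop comments
    .replace(/\s+/g, ' ')              // collapse runs of whitespace
    .replace(/\s*([{}:;,])\s*/g, '$1') // remove space around punctuation
    .trim();
}

const input = `
/* hero styles */
.hero {
  color: #fff;
  margin: 0 auto;
}`;
console.log(minifyCss(input)); // ".hero{color:#fff;margin:0 auto;}"
```

The same idea, applied to JavaScript by a proper compression tool, is what the next section's minification advice refers to.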

How To Fix Render-Blocking JavaScript

Minification involves removing unneeded whitespace and code. It’s best done systematically with a JavaScript compression tool.

Compression involves algorithmically modifying the data format for performant server and client interactions.

Remember how Googlebot was stuck on Chrome 41 for what felt like centuries? A polyfill is a piece of code used to provide modern functionality on older browsers that don’t natively support it.

Now that Googlebot is evergreen, legacy polyfills go by another name: tech debt.

Some compilers have built-in functionalities to remove legacy polyfills.

How To Fix Render-Blocking Third-Party Scripts

Delay it.

If the script does not contribute to above the fold content, use async or defer attributes.

Remove it.

Consolidate it.

Audit third-party script use. Who is in charge of the tool? A third-party tool without someone managing it is also known as a liability.

What value does it provide? Is that value greater than the impact on performance? Can the result be achieved by consolidating tools?

Update it.

Another option may be to reach out to the provider to see if they have an updated lean or asynchronous version. Sometimes they do, without telling the folks still running the old implementation.

Diagnosing Poor LCP Because Of Slow Resource Load Times

Where to look: Lighthouse (via Chrome DevTools, PageSpeed Insights, or a Node.js instance). Each of the solutions below includes a relevant audit flag.

Browsers fetch and execute resources as they discover them. Sometimes our paths to discovery are less than ideal. Other times the resources aren’t optimized for their on-page experiences.

Here are ways you can combat the most common causes of slow resource load times:

No one needs a 10MB PNG file. There’s rarely a use case for shipping a large image file, or a PNG at all.

If a resource is part of the critical path, a simple rel="preload" attribute tells the browser to fetch it as soon as possible.

Encode, compress, repeat.

Diagnosing Poor LCP Because Of Client-Side Rendering

Where to look: For one-off glances, view the page source. If it’s a couple of lines of gibberish, the page is client-side rendered.

Elements within a page can be client-side rendered. To spot which elements, compare the initial page source with the rendered HTML. If you’re using a crawler, compare the rendered word count difference.

Core Web Vitals are a way of measuring how effective our rendering strategies are.

All rendering options have the same output (they all build web pages), but CWV metrics measure how quickly we deliver what matters when it matters.

Client-side rendering is rarely the answer unless the question is, “What changes went into production at the same time that organic traffic began to tumble?”

How To Fix Client-Side Rendering

“Stop” really isn’t a useful answer. Accurate, but not useful. So instead:

Code splitting: use code splitting, tree shaking, and inline functions in the head for above-the-fold functionality. Keep those inline scripts under 1KB.

Server-side rendering: by having your servers execute the JavaScript, you can return fully rendered HTML. Note that this will increase your TTFB, as the scripts are executed before your server responds.

Pre-rendering: at build time, execute your scripts and have rendered HTML ready for incoming requests. This option has a better server response time but won’t work for sites with frequently changing inventory or prices.

To be clear: Dynamic rendering is not a solution to client-side rendering. It’s giving the troubles of client-side rendering a friend.

First Input Delay (FID)

Represents: Responsiveness to user input.

Measurement: The time from when a user first interacts with a page to the time when the browser is actually able to begin processing event handlers in response to that interaction.

Key behaviors: FID is only available as field data.

Available as: Field Data.

Use Total Blocking Time (TBT) For Lab Tests

Since FID is only available as field data, you’ll need to use Total Blocking Time for lab tests. The two capture the same problem with different thresholds.

TBT represents: Responsiveness to user input.

TBT measurement: the sum, across all main-thread tasks longer than 50ms, of the portion of each task beyond that 50ms threshold.

Goal: <= 300 milliseconds.

Available as: Lab Data.
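The TBT definition above reduces to simple arithmetic: each main-thread task contributes only the portion of its duration beyond the 50ms long-task threshold. A sketch (the function name is illustrative, not a real API):

```javascript
// Total Blocking Time sketch: each main-thread task contributes only the
// portion of its duration beyond the 50 ms "long task" threshold.
const LONG_TASK_THRESHOLD_MS = 50;

function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs.reduce(
    (sum, d) => sum + Math.max(0, d - LONG_TASK_THRESHOLD_MS),
    0
  );
}

// Three tasks: 30 ms (not long), 120 ms (blocks 70 ms), 90 ms (blocks 40 ms).
console.log(totalBlockingTime([30, 120, 90])); // 110
```

Notice that a pile of 49ms tasks scores a TBT of zero even though the thread is busy; TBT only counts the blocking tail of genuinely long tasks.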

What Causes Poor FID?

const jsHeavy = true; while (jsHeavy) { console.log("FID fail"); }

Heavy JavaScript. That’s it.

Poor FID comes from JS occupying the main thread which means your user’s interactions are forced to wait.

What On-Page Elements Are Impacted By FID?

FID is a way of measuring main thread activity. Before on-page elements can respond to user interaction, in-progress tasks on the main thread have to complete.

Here are some of the most prevalent elements that your user is tapping in frustration:

Text fields, checkboxes, and radio buttons (<input>, <textarea>).

Select dropdowns (<select>).

Links (<a>).


Where to look: to confirm it’s an issue for users, look at CrUX Dashboard v2 – First Input Delay (FID) (page 3). Use Chrome DevTools to identify the exact tasks.

How To See TBT Using Chrome DevTools

Open the page in Chrome.

Navigate to the Network panel of Dev Tools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).

Tick the box to disable cache.

Navigate to the Performance Panel and check the box for Web Vitals.

Look for the blocks labeled Long Tasks or the red markers in the upper right-hand corner of tasks. These indicate long tasks that took more than 50ms.

Find the TBT for the page at the bottom of the summary.

How To Fix Poor FID

Stop loading so many third-party scripts.

Third-party code puts your performance behind another team’s stack.

You’re dependent on their scripts executing in a succinct, performant manner in order for your side to be considered performant.

Free up the main thread by breaking up Long Tasks.

If you’re shipping one massive JS bundle for every page, there’s a lot of functionalities in that bundle that don’t contribute to the page.

Even though they’re not contributing, each JS function has to be downloaded, parsed, compiled, and executed.

By breaking out that big bundle into smaller chunks and only shipping those that contribute, you’ll free up the main thread.
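The chunking idea above can be sketched as follows. The helper below is illustrative (its name and shape are not from any particular library); it processes a batch of items, then yields back to the event loop with setTimeout before the next batch, so input handlers get a chance to run between chunks.

```javascript
// Sketch of breaking one long task into smaller chunks. processInChunks
// handles `batchSize` items, then yields to the event loop with
// setTimeout(..., 0) so user input can be handled between batches.
function processInChunks(items, handleItem, batchSize = 100, done = () => {}) {
  let index = 0;
  function runBatch() {
    const end = Math.min(index + batchSize, items.length);
    for (; index < end; index++) {
      handleItem(items[index]);
    }
    if (index < items.length) {
      setTimeout(runBatch, 0); // yield the main thread before continuing
    } else {
      done();
    }
  }
  runBatch();
}
```

In a browser you would reach for requestIdleCallback or the emerging scheduler APIs for the same effect; the principle, doing less than 50ms of work before yielding, is what matters.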

Check your tag manager.

Out-of-the-box tag deployments fire event listeners that can tie up your main thread.

Tag managers can be long-running input handlers that block scrolling. Work with developers to debounce your input handlers.

Optimize your page for interaction readiness.

Ship and execute those JS bundles in an order that matters.

Is it above the fold? It gets prioritized. Use rel=preload.

Pretty important but not enough to block rendering? Add the async attribute.

Below the fold? Delay it with the defer attribute.

Use a web worker.

Web workers allow JavaScript to run on a background thread rather than the main thread your FID is scored on.


Cumulative Layout Shift

Represents: Visual stability.

Measurement: A calculation based on the number of frames in which element(s) visually moves and the total distance in pixels the element(s) moved.

layout shift score = impact fraction * distance fraction

Key behaviors: CLS is the only Core Web Vital not measured in time. Instead, CLS is a calculated metric. The exact calculations are actively being iterated on.

Available as: Lab and Field Data.
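The formula above can be worked through for a single vertically shifting element in the 360 x 640 viewport mentioned earlier. This sketch assumes the simplest case, a pure vertical shift with the element fully visible before and after the move; the real CLS calculation handles arbitrary shifts, multiple elements, and session windows.

```javascript
// layout shift score = impact fraction * distance fraction, worked for one
// vertically shifting element. Simplified: assumes the element stays fully
// inside the viewport before and after the shift.
function layoutShiftScore(viewportW, viewportH, elemW, elemH, shiftY) {
  // Impact fraction: union of the element's before/after areas / viewport area.
  const impactArea = elemW * (elemH + Math.abs(shiftY));
  const impactFraction = impactArea / (viewportW * viewportH);
  // Distance fraction: move distance / the viewport's larger dimension.
  const distanceFraction = Math.abs(shiftY) / Math.max(viewportW, viewportH);
  return impactFraction * distanceFraction;
}

// A 360x200 banner pushed 100px down in a 360x640 viewport:
console.log(layoutShiftScore(360, 640, 360, 200, 100).toFixed(4)); // "0.0732"
```

Even that single 100px push costs most of the 0.1 budget for a "Good" CLS, which is why one late-loading banner is often enough to fail the metric.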

Diagnosing Poor CLS

Where to look: to confirm it’s an issue for users, look at CrUX Dashboard v2 – Cumulative Layout Shift (CLS) (page 4). Use any tool with Lighthouse to identify the bouncing element(s).

Chrome DevTools will provide greater insights into the coordinates of the excitable element and how many times it moves.

How To See CLS Using Chrome DevTools

Open the page in Chrome.

Navigate to the Network panel of Dev Tools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).

Tick the box to disable cache.

Navigate to the Performance Panel and check the box for Web Vitals.

Look for the name of the node, highlighting of the node on page, and the coordinates for each time the element moved.

What Can Be Counted In CLS?

If an element appears in the initial viewport, it becomes part of the metric’s calculation.

If you load your footer before your primary content and it appears in the viewport, then the footer is part of your (likely atrocious) CLS score.

What Causes Poor CLS?

Is it your cookie notice? It’s probably your cookie notice.

Alternatively, look for:

Images without dimensions.

Ads, embeds, and iframes without dimensions.

Dynamically injected content.

Web Fonts causing FOIT/FOUT.

Chains for critical resources.

Actions waiting for a network response before updating DOM.

How To Fix Poor CLS

Always include width and height size attributes on images and video elements.

Best practice is to declare dimensions so the browser’s user-agent stylesheet can systematically reserve space based on the image’s aspect ratio.
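The arithmetic the browser performs with those attributes is simple: it derives the aspect ratio from the declared width and height and reserves the matching height for the actual rendered width, so nothing shifts when the file arrives. A sketch (the function is illustrative):

```javascript
// With width/height attributes present, the browser can reserve vertical
// space before the image downloads:
//   reserved height = rendered width * (attrHeight / attrWidth)
function reservedHeight(attrWidth, attrHeight, renderedWidth) {
  return (renderedWidth * attrHeight) / attrWidth;
}

// An <img width="1200" height="800"> rendered at 360px wide reserves 240px.
console.log(reservedHeight(1200, 800, 360)); // 240
```

Without the attributes, that 240px of content below the image appears first and then gets shoved down, which is exactly the shift CLS penalizes.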

Reserve space for ad slots (and don’t collapse it).

Instead, identify the largest size ad that could be used in a slot and reserve space. If the ad doesn’t populate, keep the placeholder. The gap is better than a layout shift.

Avoid inserting new content above existing content.

An element shouldn’t enter the fighting arena unless it’s ready to be counted.

A font loading late causes a full flash and re-write.

Preload tells the browser that you would like to fetch it sooner than the browser would otherwise discover it because you are certain that it is important for the current page.

Avoid chains for resources needed to create above-the-fold content.

Chains happen when you call a resource that calls a resource. If a critical asset is called by a script, then it can’t be called until that script is executed.

Modern browsers support speculative parsing off of the main thread.

Read as: they work ahead while scripts are being downloaded and executed, like reading ahead of assignments in a class. Then document.write() comes in and changes the textbook, and all that reading ahead becomes useless.

Chances are this isn’t the work of your devs. Talk to your point of contact for that “magic” third-party tool.

The Future Of CWV Metrics

Google intends to update the Page Experience components on an annual basis. Future CWV metrics will be documented similarly to the initial signal rollout.

Core Web Vitals are already 55% of your Lighthouse v7 score.

Currently, Largest Contentful Paint (LCP) and Total Blocking Time (FID’s lab proxy) are each weighted at 25%. Cumulative Layout Shift is a meager 5%, but we can expect to see these weights equalize.
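The weighting above amounts to a plain weighted average over the individual metric scores. In this sketch, the 25/25/5 weights come from the discussion here; the remaining Lighthouse v7 weights shown (FCP, Speed Index, and TTI at 15% each) are assumptions for illustration, so check the current scoring documentation before relying on them.

```javascript
// Sketch of how Lighthouse combines metric scores into one performance
// score: a weighted average. CWV weights (25/25/5) per the text above;
// the other v7 weights (FCP/SI/TTI at 0.15) are assumed for illustration.
const WEIGHTS = { fcp: 0.15, si: 0.15, lcp: 0.25, tti: 0.15, tbt: 0.25, cls: 0.05 };

function performanceScore(metricScores) {
  // metricScores: each metric already scored 0-100 against its own curve.
  return Object.entries(WEIGHTS).reduce(
    (total, [metric, weight]) => total + weight * (metricScores[metric] ?? 0),
    0
  );
}

const allPerfect = { fcp: 100, si: 100, lcp: 100, tti: 100, tbt: 100, cls: 100 };
console.log(performanceScore(allPerfect)); // 100
```

Because the three CWV together carry 55% of the weight, a page can ace every other metric and still land a mediocre score if LCP, TBT, or CLS lag.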

Smart money is on Q4 2021, once the Chromium team homes in on the metric’s calculation.

As technical SEO pros, we’re able to diagnose and provide solutions for a better user-centric experience. Here’s the thing: those investments and improvements benefit all users.

The ROI can be found in every medium. Every channel.


Google Launches Search Console Insights

Google is introducing a new experience called Search Console Insights which is designed to help site owners better understand their audience.

This experience joins data from both Search Console and Google Analytics in a joint effort to make it easy to understand content performance.

Data in Search Console Insights will help site owners answer questions such as:

What are your best performing pieces of content, and which ones are trending?

How do people discover your content across the web?

What do people search for on Google before they visit your content?

Which articles refer users to your website and content?

Site owners can access Search Console Insights via the new link at the top of the Overview page. Soon it will be accessible from Google’s iOS app, with support for the Android app planned as well.

Another way to access the data is by searching Google for a query that your site ranks for. This will return a Google-powered result at the top of the page titled “Search performance for this query.”

It’s possible to utilize Search Console Insights without Google Analytics, though it’s necessary to link the two in order to get the full experience.

Search Console Insights only supports Google Analytics UA properties at this time, though the company is working to support Google Analytics 4.

This new experience will gradually be rolled out to all Search Console users in the upcoming days.

Almost a Year of Testing

Google has been testing Search Console Insights for nearly a year. We covered the launch of a closed beta test back in August 2020.

It appears the tool is still in its beta testing stage. The main difference between the two rollouts is Search Console Insights will soon be available to everyone, whereas last year it was available by invite only.

Aside from availability, there are no announced changes between the version that was available in August 2020 and the version that will be available in the coming days.

It’s reasonable to think Google may have tweaked a few things during that time, but the company doesn’t highlight any updates.

Look for this new data available soon in your Search Console dashboard.

Source: Google Search Central Blog

How To: Use Wildcard Search With Various Google Services

Here’s how you can play with it in various Google services:

General Google Search + Wildcard

General Google search allows a lot of flexibility with its wildcard operator.

How it works: * is substituted by one or more words.

When it comes particularly in handy: in combination with “” (exact match) search to control word proximity within a set phrase. This trick can prove particularly useful for content inspiration as well as for keyword research (to expand your initial query).

You can also achieve unexpected results when using the wildcard operator in combination with other search commands. Try:

intext:”diabetic * diets”

intitle:”diabetic * diets”

“diabetic * diets” -food


Other Google Search Services + Wildcard

While many people are aware of wildcard search for “Universal”/“blended” results, few use the wildcard operator for other types of search results. The wildcard operator is also supported by multiple search engines run by Google:

Google Images

Google Video and YouTube

Blog Search

Google News

Google Shopping


How it works: * is substituted by one or more words.

When it comes particularly in handy:

Here are a few examples of how the search operator can prove particularly useful:

Find video content inspiration; example: [“blogging * wordpress”]

Customize your Google News RSS feed (to use it to track your brand mentions or to monitor new opportunities); example: [“guest * post *”]

Expand your search to include various possible variations; for example, to track new articles by “guest author” (and thus track new guest blogging opportunities), use this query in Google Blog Search: [inpostauthor:”guest * author” OR inpostauthor:”guest author”]

Google Reader + Wildcard

How it works: * is substituted by one word. To get two words within your phrase, use two asterisks.

When it comes particularly in handy: Google Reader is your personal collection of relevant feeds. Using it for keyword and content inspiration may prove much more effective than using generic search results.

Gmail Search + Wildcard

How it works: * is substituted by one or more words.

When it comes particularly in handy: Gmail is another useful collection of resources and links directly related to you, what you read, and what you are subscribed to. I once shared how Gmail search can be a great help in your keyword and content research. With the wildcard, this idea is even more effective.

A wildcard operator can also be a great help for searching Gmail attachments: filename:google*.doc filters emails to those with .doc files attached whose file names begin with “google” (whereas filename:*google*.doc matches messages whose attached documents have “google” somewhere in the middle of the file name).


Now, go play with search results to your heart’s content!
