Search Evaluation and SEO at Google
Now, you and I may think that this "letter T" hack is really a silly idea, but how can we know for sure? Search evaluation is charged with answering such questions. This hack hasn't really come up, but we are constantly evaluating everything, which can include:
* proposed improvements to segmentation of Chinese queries
* new approaches to fight spam
* techniques for improving how we handle compound Swedish words
* changes to how we handle links and anchortext
* and everything in between
Not surprisingly, we take search evaluation very seriously. Precise evaluation enables our teams to know "which way is up". One of our tenets in search quality is to be very data-driven in our decision-making. We try hard not to rely on anecdotal examples, which are often misleading in search (where decisions can affect hundreds of millions of queries a day). Meticulous, statistically-meaningful evaluation gives us the data we need to make real search improvements.
Evaluating search is difficult for several reasons.
* First, understanding what a user really wants when they type a query — the query's "intent" — can be very difficult. For highly navigational queries like [ebay] or [orbitz], we can guess that most users want to navigate to the respective sites. But how about [olympics]? Does the user want news, medal counts from the recent Beijing games, the IOC's homepage, historical information about the games, ...? This same exact question, of course, is faced by our ranking and search UI teams. Evaluation is the other side of that coin.
* Second, comparing the quality of search engines (whether Google versus our competitors, Google versus Google a month ago, or Google versus Google plus the "letter T" hack) is never black and white. It's essentially impossible to make a change that is 100% positive in all situations; with any algorithmic change you make to search, many searches will get better and some will get worse.
* Third, there are several dimensions to "good" results. Traditional search evaluation has focused on the relevance of the results, and of course that is our highest priority as well. But today's search-engine users expect more than just relevance. Are the results fresh and timely? Are they from authoritative sources? Are they comprehensive? Are they free of spam? Are their titles and snippets descriptive enough? Do they include additional UI elements a user might find helpful for the query (maps, images, query suggestions, etc.)? Our evaluations attempt to cover each of these dimensions where appropriate.
* Fourth, evaluating Google search quality requires covering an enormous breadth. We cover over a hundred locales (country/language pairs) with in-depth evaluation. Beyond locales, we support search quality teams working on many different kinds of queries and features. For example, we explicitly measure the quality of Google's spelling suggestions, universal search results, image and video searches, related query suggestions, stock oneboxes, and many, many more.
To get at these issues, we employ a variety of evaluation methods and data sources:
* Human evaluators. Google makes use of evaluators in many countries and languages. These evaluators are carefully trained and are asked to evaluate the quality of search results in several different ways. We sometimes show evaluators whole result sets by themselves or "side by side" with alternatives; in other cases, we show evaluators a single result at a time for a query and ask them to rate its quality along various dimensions.
* Live traffic experiments. We also make use of experiments, in which small fractions of queries are shown results from alternative search approaches. Ben Gomes talked about how we make use of these experiments for testing search UI elements in his previous post. With these experiments, we are able to see real users' reactions (clicks, etc.) to alternative results.
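To make the human-evaluator side of this concrete, here is a minimal sketch of how side-by-side verdicts might be tallied into a simple win rate. The rating scale, the field names, and the summarize_side_by_side helper are illustrative assumptions, not a description of Google's actual rating tools.

```python
from collections import Counter

# Hypothetical side-by-side verdicts: for each query, which result set did the
# rater prefer, the experiment's or the control's? (Illustrative data only.)
verdicts = [
    {"query": "olympics", "preference": "experiment"},
    {"query": "olympics", "preference": "control"},
    {"query": "ebay",     "preference": "neutral"},      # no preference either way
    {"query": "orbitz",   "preference": "experiment"},
]

def summarize_side_by_side(verdicts):
    """Tally rater preferences and report the experiment's share of decisive verdicts."""
    counts = Counter(v["preference"] for v in verdicts)
    decisive = counts["experiment"] + counts["control"]
    win_rate = counts["experiment"] / decisive if decisive else 0.0
    return counts, win_rate

counts, win_rate = summarize_side_by_side(verdicts)
print(counts)  # Counter({'experiment': 2, 'control': 1, 'neutral': 1})
print(f"experiment preferred in {win_rate:.0%} of decisive verdicts")  # 67%
```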
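And as a rough sketch of how a small fraction of live traffic might be diverted into an experiment arm, one common technique is deterministic hashing of a session or query identifier. The experiment name, the 1% fraction, and the in_experiment function below are assumptions for illustration; this is not Google's actual serving code.

```python
import hashlib

EXPERIMENT_FRACTION = 0.01  # divert roughly 1% of traffic (illustrative value)

def in_experiment(unit_id: str, experiment_name: str) -> bool:
    """Deterministically assign a search session to the experiment arm.

    Hashing the id together with the experiment name keeps the assignment
    stable across repeated requests while keeping different experiments
    independent of one another.
    """
    digest = hashlib.sha256(f"{experiment_name}:{unit_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 16**8  # map the hash prefix to [0, 1)
    return bucket < EXPERIMENT_FRACTION

# Decide which arm a few hypothetical sessions fall into.
for session_id in ["session-41", "session-42", "session-43"]:
    arm = "experiment" if in_experiment(session_id, "related_searches_v2") else "control"
    print(session_id, "->", arm)
```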
Clearly, we can never measure anything close to all the queries Google will get in the future. Every day, in fact, Google gets many millions of queries that we have never seen before, and will never see again. Therefore, we measure statistically, over representative samples of the query-stream. The "letter T" hack probably does improve a few queries, but over a representative sample of queries it affects, I'm confident it would be a big loser.
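As a toy illustration of why we measure over samples rather than anecdotes, the sketch below estimates the fraction of affected queries that a change helps, together with a rough 95% interval. All of the numbers and the query "population" are invented purely to show the statistical idea.

```python
import math
import random

random.seed(0)

# Invented "population" of queries affected by a proposed change: +1 means the
# change helps that query, -1 means it hurts. The 30/70 split is made up.
population = [1] * 30_000 + [-1] * 70_000

# Evaluate a representative random sample instead of cherry-picked anecdotes.
sample = random.sample(population, 1_000)
helped = sum(1 for outcome in sample if outcome > 0)
p_hat = helped / len(sample)                           # estimated fraction of helped queries
stderr = math.sqrt(p_hat * (1 - p_hat) / len(sample))  # binomial standard error

print(f"estimated helped fraction: {p_hat:.1%} (±{2 * stderr:.1%}, ~95% interval)")
```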
One of the key skills of our evaluation team is experimental design. For each proposed search improvement, we generate an experiment plan that will allow us to measure the key aspects of the change. Often, we use a combination of human and live traffic evaluation. For instance, consider a proposed improvement to Google's "related searches" feature to increase its coverage across several locales. Our experiment plan might include live traffic evaluation in which we show the updated related search suggestions to users and measure click-through rates in each locale and break these down by position of each related search suggestion. We might also include human evaluation, in which for a representative sample of queries in each locale, we ask evaluators to rate the appropriateness, usefulness, and relevance of each individual related search suggestion. Including both types of evaluation allows us to understand the overall behavioral impact on users (via the live traffic experiment), and measure the detailed quality of the suggestions in each locale along multiple dimensions (via the human evaluation experiment).
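For the live-traffic half of such a plan, the click-through breakdown might be computed roughly as follows. The log records and field names (locale, position, clicked) are made-up assumptions; the sketch only shows the shape of the calculation.

```python
from collections import defaultdict

# Hypothetical impression logs from a live traffic experiment on related
# searches: the locale of the query, the on-page position of the suggestion,
# and whether the user clicked it. Field names are illustrative only.
impressions = [
    {"locale": "en-US", "position": 1, "clicked": True},
    {"locale": "en-US", "position": 1, "clicked": False},
    {"locale": "en-US", "position": 2, "clicked": False},
    {"locale": "sv-SE", "position": 1, "clicked": True},
    {"locale": "sv-SE", "position": 2, "clicked": True},
]

def ctr_by_locale_and_position(impressions):
    """Compute click-through rate for each (locale, position) pair."""
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for imp in impressions:
        key = (imp["locale"], imp["position"])
        shown[key] += 1
        clicked[key] += imp["clicked"]  # True counts as 1, False as 0
    return {key: clicked[key] / shown[key] for key in shown}

for (locale, position), ctr in sorted(ctr_by_locale_and_position(impressions).items()):
    print(f"{locale}  position {position}: CTR {ctr:.0%}")
```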
Choosing an appropriate sample of queries to evaluate can be subtle. When evaluating a proposed search improvement, we consider not only whether a given query's results are changed by the proposal, but also how much impact the change is likely to have on users. For instance, a query whose first three results are changed is likely much higher impact than one for which results 9 and 10 are swapped. In Amit Singhal's previous post on ranking, he discussed synonyms. Recently, we evaluated a proposed update to make synonyms more aggressive in some cases. On a flat (non-impact-weighted) sample of affected queries, the change appeared to be quite positive. However, using an evaluation of an impact-weighted sample, we found that the change went much too far. For example, in Chinese, it synonymized "small" and "big", which is not a good idea!
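A rough sketch of the difference between a flat and an impact-weighted evaluation: weight each affected query by how prominent the change is on the results page, for example by the highest result position it touches. The scores, the weighting function, and the numbers below are invented for illustration and are not the actual methodology.

```python
# Each affected query carries a rater score (+1 better, -1 worse) and the
# highest (most prominent) result position the change touched. Made-up data.
affected_queries = [
    {"score": +1, "top_changed_position": 9},   # minor reshuffle near the bottom
    {"score": +1, "top_changed_position": 10},
    {"score": +1, "top_changed_position": 8},
    {"score": -1, "top_changed_position": 1},   # a bad swap right at the top
]

def flat_average(queries):
    """Every affected query counts equally, regardless of how visible the change is."""
    return sum(q["score"] for q in queries) / len(queries)

def impact_weighted_average(queries):
    """Weight each query by prominence: changes near position 1 count far more."""
    weights = [1.0 / q["top_changed_position"] for q in queries]
    total = sum(w * q["score"] for w, q in zip(weights, queries))
    return total / sum(weights)

print(f"flat average:            {flat_average(affected_queries):+.2f}")             # +0.50, looks positive
print(f"impact-weighted average: {impact_weighted_average(affected_queries):+.2f}")  # about -0.50, actually negative
```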
We're serious about search evaluation because we are serious about giving you the highest quality search experience possible. Rather than guess at what will be useful, we use a careful data-driven approach to make sure our "great ideas" really are great for you. In this environment, the "letter T" hack never had a chance.
Algorithms clearly play a vital role in shaping search results, but it is worth remembering that Google also relies on human evaluators: thousands of them, in fact.
-Scott Huffman, Engineering Director.