With internet content growing at a staggering rate, search engines need to hone their approach to get the right results and unearth what the user wants.
Typing “woppit” into Google provides 507 results. In Yahoo, 350. In MSN search, 528. I have no idea what a woppit could possibly be, or what I would do with one – I just chose a made-up word to see what came of it. That such a word produces many results is strange enough – that we get hundreds does not bode well for when we are searching for something that matters.
Now, if I type something that I am more likely to be interested in, such as “saving for retirement”, I get 24,400,000 results from Google, 6,160,000 from Yahoo and 2,551,184 from MSN. Great – somewhere in there is possibly something I would find useful. But where?
Search engine suppliers use algorithms to ensure that the most apposite results appear closer to the top of the list, but even if the result I want is in the top 1% of Google’s list, I still need to go through nearly 250,000 results.
To give me just 100 to go through would require Google to be roughly 250,000 times more selective – and would such selectivity guarantee that I got the matches that I was searching for?
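The arithmetic behind those figures is easy to check – a quick sketch, using only the Google total quoted above:

```python
# Google's reported total for "saving for retirement" at the time of writing.
total_results = 24_400_000

# Even the "top 1%" of that list is still far too many to read through.
top_one_percent = total_results // 100
print(top_one_percent)  # 244000 results

# To be left with just 100 results, the engine would have to keep only
# 1 in ~244,000 of its matches.
selectivity_factor = total_results // 100
print(selectivity_factor)  # 244000
```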
With the continuing growth of web content (75,000 new blogs a day, never mind anything else) this can only get worse.
The problem is compounded by the use of paid-for placements. The top selections in many search engines are there because the company involved has paid money to try to ensure it appears on the first page of searches that might be relevant to it.
However, most searches are too woolly to produce a narrow set of results, which means that many paid-for placements appear against the wrong searches and may push the right result to page five or six of the search – generally beyond the attention span of most searchers.
There are two ways to address the problem. One is to retrain users so their search terms are more accurate and they understand Boolean searches, thesauri and taxonomical systems. This is the most elegant solution but it is unlikely to happen.
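To show what those Boolean skills buy the user, here is a minimal sketch of a Boolean search: each document is reduced to its set of terms, and a query such as “retirement AND saving NOT pension” keeps only the documents that match. The documents and function names here are invented for illustration.

```python
# Invented mini-corpus for illustration.
docs = {
    1: "saving for retirement with an index fund",
    2: "retirement pension schemes explained",
    3: "saving money on groceries",
    4: "retirement saving without a pension",
}

def terms(text):
    """Reduce a document to its set of lower-cased terms."""
    return set(text.lower().split())

def boolean_search(docs, must=(), must_not=()):
    """Return ids of documents containing every 'must' term
    and none of the 'must_not' terms (AND / NOT semantics)."""
    hits = []
    for doc_id, text in docs.items():
        t = terms(text)
        if all(w in t for w in must) and not any(w in t for w in must_not):
            hits.append(doc_id)
    return hits

print(boolean_search(docs, must=("retirement", "saving"), must_not=("pension",)))
# [1]
```

A vague query (“retirement”) matches three of the four documents; adding one AND and one NOT term cuts that to a single hit – exactly the narrowing most users never learn to do.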
The other is to look at how current search engines enhance the information they supply back to us. Many now allow a “find similar pages” function, and they all have advanced search capabilities that are made as easy as possible, yet are used by very few.
The one thing that seems startlingly absent from most is an iterative search capability – the ability to play with the existing search results. Having got 24 million results for “saving for retirement”, I may want to narrow them down by refining the search, or rank them by date, by geographical proximity to me, or whatever.
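That kind of refinement need not mean re-running the whole search; it can be applied to the result set already in hand. A sketch of the idea, with invented result records and field names:

```python
from datetime import date

# Invented result records standing in for an existing result set.
results = [
    {"title": "Saving for retirement: the basics", "date": date(2005, 1, 10)},
    {"title": "Pension planning for the self-employed", "date": date(2006, 3, 2)},
    {"title": "Retirement savings calculators compared", "date": date(2004, 7, 19)},
]

def refine(results, keyword=None, newest_first=False):
    """Narrow an existing result set by keyword and optionally re-sort by date,
    without going back to the index."""
    refined = [r for r in results
               if keyword is None or keyword.lower() in r["title"].lower()]
    if newest_first:
        refined.sort(key=lambda r: r["date"], reverse=True)
    return refined

for r in refine(results, keyword="retirement", newest_first=True):
    print(r["date"], r["title"])
```

Each call takes the previous output as its input, so the user can keep tightening or re-ranking the same list – the iterative behaviour the engines do not offer.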
The major search engine suppliers have a lot of new functionality coming down the line – a lot of it based around multimedia and “revenuisation” of their capabilities (ie, making money out of a free service). I still use search engines for a lot of my daily work, but I have less faith in the results day by day.
There are many things that can be done to help in this way with little extra technical exertion by the search engine provider. Being able to enhance results by sorting and refining existing searches is pretty easy, for one. Other things may need more work, and no doubt we will see refinements continue to appear.
We no longer want a search engine, we need a find engine. I want the capability to drill down to apposite results rapidly. I have no interest in seeing that a search engine can find an order of magnitude more results than the next one – I want the 100 results that are right for me. But, like most people, I do not want to change the way I search. I want a solution that works with me to get to the desired point – rapidly.
Clive Longbottom is service director at analyst firm Quocirca