Feature

Utilising the search engine for maximum traffic

Understanding how search engines work can help raise your website's profile and bring it more traffic

Search engines can be broadly divided into two categories: spider-based search engines and directories (human-edited search engines). The latter rely on people surfing the net and reviewing the sites they find. The former use data-retrieval software tools to go out and find information.

If a website's traffic is low, there are usually numerous reasons: poor design or navigation; lack of interesting content; bad coding that crashes browsers; or simple invisibility.

The web is a big "place". It's not just a question of being "out there", but of being visible to your audience. This doesn't necessarily mean spending millions on an advertising campaign, but it does mean designing your site in such a way as to make it visible to the search engines. Any site will probably get a search engine listing somewhere. What matters is whether it is ranked at or near the top of the results or on page 15. If it is not on the first page, only about five per cent of potential visitors will click on the link and visit your site.

The good news is that small changes can make a lot of difference. There are some fairly dubious methods of getting listed, but these generally only work in the short term, as spider-based search engines have anti-spam tools to prevent them being tricked into listing a site simply because it has multiple identical meta-tags designed to trigger listings.

On the subject of meta-tags, a little reminder: it is illegal to use a competitor's business name in your meta-tags without just cause, or to draw people onto your site through such trickery. It's also not good for your company's reputation, and is more likely to end in litigation than traffic if you are caught doing so.

Why do search engines work?

Search engines do have the potential to generate a lot of traffic for your site, particularly if you have a service or a product that isn't generally available. This is because the Web is so big that when Joe Public wants to buy or find out about something, the easiest way is often to key that subject or product name into a search engine.

Bigger search engines have more listings and are more up-to-date because they can afford to overhaul their directory entries more often. This is a definite advantage because the Web changes so often and you want to know that as soon as your new content is put onto the Web, it will start appearing within the search engine's results.

Some search engines use a mixture of both human operatives and spiders to find listings. One very well-known directory that does this is Yahoo! This has both advantages and disadvantages. If your web page is well designed, so that the spiders can pick up keywords or subjects to report back, then you are likely to find that the automated (non-human) search engines will rank you well. If your pages are less well designed and the spiders miss the important bits, there is a chance that a human editor looking at the site will pick up the vital details and improve your ranking. The downside, of course, is that the human editor will take longer to note changes to your site, if they ever get round to changing the listing at all.

How do they work?

Search engines use spiders, or crawlers, that visit websites, read them, follow links and see what's there. The spider then revisits every month or so, looks for changes and reports back to what is called the index. The index is the Web equivalent of the Domesday Book: it keeps a copy of every web page, and when the web pages change, the changes are made in the index.

Unfortunately, there is a delay between the spider visiting your site and the site turning up in the index. All the pages listed in the index are available for searching with the search engine, but only once the spider has reported back to the index.
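To make the spider-and-index cycle more concrete, here is a minimal sketch of a crawler in Python. It is purely illustrative, not how any real engine is built: the starting URL, the page limit and the simple dictionary used as an "index" are all assumptions made for the example.

# A toy "spider": fetch a page, note its text for the index, then follow its links.
# Purely illustrative -- real crawlers are far more sophisticated and polite.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class PageParser(HTMLParser):
    """Collects the visible text and outgoing links of a single page."""
    def __init__(self):
        super().__init__()
        self.text, self.links = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

def crawl(start_url, max_pages=5):
    """Visit pages breadth-first and build a tiny index: url -> page text."""
    index, queue = {}, [start_url]
    while queue and len(index) < max_pages:
        url = queue.pop(0)
        if url in index:
            continue                      # already visited on this run
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue                      # unreachable pages are simply skipped
        parser = PageParser()
        parser.feed(html)
        index[url] = " ".join(parser.text)             # "report back" to the index
        queue.extend(urljoin(url, link) for link in parser.links)
    return index

if __name__ == "__main__":
    for url, text in crawl("https://example.com").items():   # hypothetical start page
        print(url, "->", text[:60])

Nothing in the little dictionary can be searched until crawl() has finished with a page, which is the delay described above.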

Search engines also have another set of software tools that searches through the index and ranks results according to what it considers most relevant. Different search engines have different slants to their rankings. For example, a search engine aimed at researchers (such as Northern Light) will list more academic papers than one aimed at home users (such as AOL NetFind).

Why do they sometimes get it wrong?

A lot of the time, when search engines fail to match the requirements of their users, it can be attributed to users not knowing how to search properly. For example, someone who wants to buy a part for their car may well search under the term "oil filter". They might be lucky and find what they want, but they may just as easily get a list of oil sellers and oil rigs.

Search engines can't ask questions. If the same user telephoned Yellow Pages and said he was looking for an oil filter, the operator, who can make intelligent guesses, would assume that what he was looking for was a car repair company or a car spares supplier.

For this reason, search engines favour pages that have the keywords in their title. So, for example, if your title said "Oil filters and carburettors", the page would be likely to come out as one of the first entries. Search engines may also look at where on the page these terms appear: a word within the first paragraph is considered more important than one buried deep at the bottom of the page.

Search engines also examine the frequency of words. So the ideal oil filter website would have the headline "Company X's Oil Filters" and would follow with a paragraph describing what sort of "oil filters" were available, adding that your company could fit an "oil filter", and so on.
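As a rough illustration of those three signals working together, the sketch below scores a page for the query "oil filters" using title matches, word position and frequency. The weights are invented for the example; no engine publishes its actual formula.

# Illustrative ranking only: the weights below are made up, not any engine's real formula.
def score_page(title, body, query="oil filters"):
    terms = query.lower().split()
    title_text = title.lower()
    words = [w.strip('.,:;!?"').lower() for w in body.split()]

    score = 0.0
    for term in terms:
        if term in title_text:
            score += 10.0                              # keywords in the title count most
        for position, word in enumerate(words):
            if word == term:
                score += 1.0 / (1 + position / 50.0)   # frequency, weighted towards the top
    return score

good = ("Company X's Oil Filters",
        "Oil filters for every car. We stock oil filters and can fit an oil filter for you.")
weak = ("Welcome to Company Y", "We sell various car spares, including filters and plugs.")
print(score_page(*good) > score_page(*weak))           # True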

However, different search engines weight different criteria differently, so the same search terms, put into two different search engines, will bring wildly different results. For example, Excite uses popularity as a measure: if a lot of other sites link to yours, it will come out better on Excite.

Search engines with directories attached may give greater preference to those sites that they have reviewed. This is because they consider that if a site was good enough to merit a review, its content is likely to be more relevant to search engine users.

Many web designers mistakenly believe that including hundreds of meta-tags will put their site at the top of the search engine results. However, this is unwise if it is the only method used to improve ranking. Many sites never use meta-tags and still get a very high ranking in search engines. Whether or not meta-tags work depends on which search engine potential customers use: Lycos ignores meta-tags, whereas Infoseek uses them. Meta-tags will never make up for a badly designed site that does not contain keywords within the title or first paragraph.
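Where an engine does read meta-tags, they are simply elements its spider parses out of the page's HTML head. The snippet below is a small, assumed illustration of that parsing step, using Python's standard HTML parser and an invented page.

# Pull the "keywords" and "description" meta-tags out of a page's HTML.
# Whether an engine then uses them is up to the engine: Lycos ignores them, Infoseek uses them.
from html.parser import HTMLParser

class MetaTagReader(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attributes = dict(attrs)
            name = (attributes.get("name") or "").lower()
            if name in ("keywords", "description"):
                self.meta[name] = attributes.get("content", "")

sample_page = """<html><head>
<title>Company X's Oil Filters</title>
<meta name="keywords" content="oil filters, carburettors, car spares">
<meta name="description" content="Oil filters and carburettors supplied and fitted.">
</head><body>...</body></html>"""

reader = MetaTagReader()
reader.feed(sample_page)
print(reader.meta)    # {'keywords': 'oil filters, ...', 'description': 'Oil filters and ...'}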

The other reason not to rely on meta-tags is that some search engines have anti-spam devices built into them. Repeating a keyword hundreds of times, either in meta-tags or within the body copy, in order to boost a rating is known as stuffing or stacking. If your site repeats the word "encryption" hundreds of times, most search engines will simply ignore it. Search engines also follow up complaints from users and will de-list sites that use this practice.
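A crude way to picture such an anti-spam check is a repetition test: flag any page in which a single word is repeated an implausible number of times. The threshold below is an invented figure, used only to illustrate the idea.

# Rough sketch of a keyword-stuffing check; the threshold of 50 repeats is invented.
from collections import Counter

def looks_stuffed(body, max_repeats=50):
    words = [w.strip('.,:;!?"').lower() for w in body.split()]
    counts = Counter(w for w in words if w)
    if not counts:
        return False
    word, occurrences = counts.most_common(1)[0]
    return occurrences > max_repeats

print(looks_stuffed("encryption " * 200 + "is mentioned rather a lot on this page"))   # True
print(looks_stuffed("A balanced page that mentions encryption just once."))            # False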

Effective ways of improving your ranking

No one way will ever work for every single search engine, which is why it's vital to take search engines into account when planning websites.

The first task is to select keywords that will describe your site to the search engines. The words you think potential searchers will look for are the words you should pick. For example, if you sell plastic widgets, your keywords would be "plastic widgets". Each page can have different keywords: if another page advertises rubber widgets, its keywords should reflect this. It's unwise to use just one word as your keyword; even a very specific word has many interpretations, and your site is liable to be swamped amongst the many others that do use very specific strings of words.
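As a small sketch of the one-set-of-keywords-per-page idea, the snippet below generates a keywords meta-tag for each page of a hypothetical widget site; the page names and phrases are invented for illustration.

# Hypothetical example: each page gets its own multi-word keyword phrases.
page_keywords = {
    "plastic-widgets.html": ["plastic widgets", "injection-moulded plastic widgets"],
    "rubber-widgets.html":  ["rubber widgets", "flexible rubber widgets"],
}

for page, phrases in page_keywords.items():
    tag = '<meta name="keywords" content="{}">'.format(", ".join(phrases))
    print(page, "->", tag)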

Search engines pay more heed to keywords high up in the page. Bear in mind that tables can appear less relevant to search engines because they break apart when search engines read them. JavaScript can have a similar effect on search engine robots.

The content of your pages, and changes to it, will only improve your chances if those changes are relevant to the keywords. Adding meta-tags will not help if the page has nothing to do with the topic its keywords claim to cover.

Don't use very small font sizes or text in the same colour as the background to spam the search engines: if it's not visible, most search engines won't index it.

Finally, think about expanding text references where appropriate. For example, if a site sells organic food, expanding references to "organic food" and "organic vegetables" may help reinforce your keywords in a natural manner, which is, of course, how people search.

The good news

Even if a site doesn't get a high ranking, it doesn't mean it will have low traffic. Top keywords are used by many of the top sites (try searching under "books" for a demonstration of this). Alternative forms of traffic generation, whether by traditional marketing or e-recommendations, have a significant role to play in keeping the traffic flowing.

Rachel Hodgkins



This was first published in November 1999

 
