Spamdexing = spam + index

Google’s Terms of Service

Don’t abuse, harm, interfere with or disrupt the services – for example, by accessing or using them in fraudulent or deceptive ways, introducing malware or spamming, hacking or bypassing our systems or protective measures.

Spamdexing is as old as search engines themselves; the two have developed side by side.

As search engines are the main source of visitors, webmasters constantly try to attract more of them.

Where is the border between search engine optimisation and spamdexing?

The main criterion here is “made for people”. If a website contains an element made to cheat a search engine rather than to be useful to people, the site will be penalised, perhaps even banned.

Meta keywords

The first trick was to include keywords that did not correspond to the content of a web page in the “keywords” meta tag.


<meta name="keywords" content="...">

Search engines in the 1990s indexed the “keywords” meta tag and then tried to correlate it with the content of the web page. The content was taken into consideration, of course, but the keywords were treated as a priority.
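The correlation check described above can be sketched in a few lines. This is an illustrative reconstruction, not any engine’s actual algorithm: it simply measures what fraction of the declared meta keywords actually appear in the page body.

```python
import re

def keyword_overlap(meta_keywords: str, body_text: str) -> float:
    """Fraction of the declared meta keywords that occur in the page body."""
    keywords = {k.strip().lower() for k in meta_keywords.split(",") if k.strip()}
    words = set(re.findall(r"[a-z]+", body_text.lower()))
    if not keywords:
        return 0.0
    return len(keywords & words) / len(keywords)

# A shampoo page declaring "job, vacancy" as its keywords scores zero.
print(keyword_overlap("job, vacancy", "We sell shampoo and creams"))    # → 0.0
print(keyword_overlap("shampoo, cream", "We sell shampoo and cream"))   # → 1.0
```

A low score on popular keywords is exactly the mismatch spammers exploited: the tag promised vacancies, the body sold shampoo.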

Spammers used popular keywords like “job”, “vacancy” etc.

The content of meta tags is not visible in the browser. Users searched for vacancies and got links to such pages from search engines, but the pages turned out to be about web design, shampoos, creams, etc.

Spamdexing spread at breakneck speed. Pretty quickly the meta tags of most sites were stuffed with more or less the same set of high-volume keywords, which made search results absolutely irrelevant to search queries.

Search engines stopped taking the “keywords” meta tag into consideration.

It was the first clash of interests between search engines and webmasters.

Pumping texts with keywords

The updated algorithms focused on the content of web pages. Search engines indexed the content, then computed a weight for each word: the ratio of the number of times the keyword occurred to the total number of words in the text. Search results showed web pages sorted by these word weights.
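The weight described above is simple term frequency. A minimal sketch, assuming whitespace-delimited words and a single keyword:

```python
import re
from collections import Counter

def keyword_weight(text: str, keyword: str) -> float:
    """Occurrences of the keyword divided by the total number of words."""
    words = re.findall(r"\w+", text.lower())
    if not words:
        return 0.0
    return Counter(words)[keyword.lower()] / len(words)

text = "cheap flights cheap hotels cheap cars book now"
print(keyword_weight(text, "cheap"))  # 3 of 8 words → 0.375
```

A natural text about flights might score a few percent on “cheap”; a pumped text scores an order of magnitude higher, which is what later spam filters looked for.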

Then search engines took the title, headlines, and bold text into consideration, and started trying to distinguish natural texts from those consisting mainly of keywords.

When search engines started analysing texts, webmasters started experimenting with how to insert keywords into pages to achieve better rankings.

Hidden text

If a text is white and printed on a white background, users cannot read it, but search engine robots can. They will index it and count it as part of the document, so any keywords can be published in it. This spamdexing trick is called “hidden text”. Alternatively, the text may simply be too small for a person to read.

As soon as this practice became common, search engines started banning such websites.


Doorways

If a text is stuffed with keywords, users will not read it; it looks like garbage. But the spammer is eager to show them advertising, a plain-text page where something is actually offered.

Therefore, spammers started creating two pages: one pumped with keywords to rank high, and a second one showing the commercial offer.

The first page is called a “doorway”. The user is redirected automatically to the second page, or the page may contain just one button, “Enter” or the like: the user has to either press the button or leave the page.

This is a spectacular example of pages made not for people.
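The automatic redirect mentioned above was often a zero-delay meta refresh, and that pattern is easy to flag. A hedged sketch (real detectors also handle JavaScript redirects and other attribute orders, which this regex does not):

```python
import re

def is_instant_redirect(html: str) -> bool:
    """Detect a zero-delay meta refresh, a classic doorway pattern."""
    m = re.search(
        r'<meta[^>]*http-equiv=["\']refresh["\'][^>]*content=["\'](\d+)',
        html,
        re.IGNORECASE,
    )
    return bool(m) and int(m.group(1)) == 0

page = '<meta http-equiv="refresh" content="0; url=https://example.com/offer">'
print(is_instant_redirect(page))                # True
print(is_instant_redirect("<p>welcome</p>"))    # False
```

A delay of 0 means the visitor never sees the keyword page at all, which is precisely what made doorways “not for people”.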

Doorways are searched for and banned.

Doorways were a true nightmare for search engines: there could be hundreds or thousands of doorways per one page made for people.


Cloaking

Search engines’ robots did not visit websites often in the old days; they scanned sites perhaps once a month. Once a robot had scanned a site, the webmaster published something completely different on it. That worked until the robot’s next visit.


When a search engine’s robot visits a website, it introduces itself to the server via the User-Agent header. The server can therefore show one page to the robot but something else to people, i.e. swap the page. This trick is called “cloaking”.
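Server-side, the swap is a simple branch on the User-Agent string. A minimal sketch with hypothetical page names (and, of course, a practice search engines ban):

```python
def select_page(user_agent: str) -> str:
    """Cloaking in miniature: serve the keyword page to crawlers,
    the commercial offer to everyone else."""
    crawlers = ("googlebot", "yandexbot", "bingbot")
    if any(bot in user_agent.lower() for bot in crawlers):
        return "keyword-stuffed page"
    return "commercial offer"

print(select_page("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # keyword-stuffed page
print(select_page("Mozilla/5.0 (Windows NT 10.0)"))            # commercial offer
```

This is also why search engines countered by crawling from undeclared IP ranges with browser-like user agents: a server that trusts the self-introduction is easy to catch out.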

Webmasters generated a huge number of doorways; the doorways were indexed, but when people visited the site, the page was swapped for a commercial offer.

Doubling the pages

The simplest way is to publish the same content on two domains. Domains containing the same content are called “mirrors”. The sites may coincide in full or only partially.

In any case, search engines will punish duplicated content.
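One simple way to spot mirrors is to normalise each page and hash the result: identical fingerprints across domains suggest duplicated content. This is a sketch of the idea only; production systems use near-duplicate methods such as shingling, since mirrors often differ in small details.

```python
import hashlib
import re

def fingerprint(html: str) -> str:
    """Strip markup, collapse whitespace, lowercase, then hash."""
    text = re.sub(r"<[^>]+>", " ", html)            # drop tags
    text = re.sub(r"\s+", " ", text).strip().lower()
    return hashlib.sha256(text.encode()).hexdigest()

a = "<html><body>Cheap   flights here</body></html>"
b = "<div>cheap flights here</div>"
print(fingerprint(a) == fingerprint(b))  # True: same content, different markup
```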

As soon as search engines started to consider inbound links as a ranking factor, webmasters started selling links, and special services appeared to organise the link trade.

There is no clear criterion for distinguishing spam links, and search engines cannot simply ban a site for its inbound links. Since anybody can publish any link, there is no way to lay the blame for inbound links at the webmaster’s door: it might even have been a competitor.

Some search engines (e.g. Yandex) stopped taking links into consideration for commercial search queries. For informational queries, links seem to work so far.

