Spamdexing
From Wikipedia, the free encyclopedia
Spamdexing (also known as search spam or search engine spam)[1] involves a number of methods, such as repeating unrelated phrases, to manipulate the relevancy or prominence of resources indexed by a search engine, in a manner inconsistent with the purpose of the indexing system.[2][3] Some consider it a part of search engine optimization, though many search engine optimization methods improve the quality and appearance of web site content and serve content that is useful to users.[4] Search engines use a variety of algorithms to determine relevancy ranking, such as whether a search term appears in the META keywords tag, in the body text, or in the URL of a web page. Many search engines check for instances of spamdexing and remove suspect pages from their indexes. Search-engine staff can also block the results listings of entire websites that use spamdexing, often after being alerted by user complaints of false matches. The rise of spamdexing in the mid-1990s made the leading search engines of the time less useful.
The success of Google at both producing better search results and combating keyword spamming, through its reputation-based PageRank link analysis system, helped it become the dominant search site in the late 1990s. Although Google has not been rendered useless by spamdexing, it has not been immune to more sophisticated methods. Google bombing is another form of search engine result manipulation, which involves placing hyperlinks that directly affect the rank of other sites.[5] Google first algorithmically combated Google bombing on January 25, 2007.[6]
The earliest known reference[2] to the term spamdexing is by Eric Convey in his article "Porn sneaks way back on Web," The Boston Herald, May 22, 1996, where he said:
The problem arises when site operators load their Web pages with hundreds of extraneous terms so search engines will list them among legitimate addresses. The process is called "spamdexing," a combination of spamming — the Internet term for sending users unsolicited information — and "indexing." [2]
Common spamdexing techniques can be classified into two broad classes: content spam[4] (or term spam) and link spam.[3]
Content spam
These techniques involve altering the logical view that a search engine has of a page's contents. They all target variants of the vector space model for information retrieval on text collections.
- Keyword stuffing
- This involves the calculated placement of keywords within a page to raise the keyword count, variety, and density of the page, making the page appear more relevant to a web crawler and therefore more likely to be found. Example: a promoter of a Ponzi scheme wants to attract web surfers to a site where he advertises his scam. He places hidden text appropriate for a fan page of a popular music group on his page, hoping that the page will be listed as a fan site and receive many visits from music lovers. Older indexing programs simply counted how often a keyword appeared and used that to determine relevance. Most modern search engines can analyze a page for keyword stuffing and determine whether the frequency is consistent with other sites created specifically to attract search engine traffic (a minimal sketch of such a frequency check follows this list). Also, large web pages are truncated, so that massive dictionary lists cannot be indexed on a single page.
- Hidden or invisible unrelated text
- Disguising keywords and phrases by making them the same color as the background, using a tiny font size, or hiding them within HTML code such as "no frame" sections, ALT attributes, zero-width/height DIVs, and "no script" sections. However, hidden text is not always spamdexing: it can also be used to enhance accessibility. People screening websites for a search-engine company might temporarily or permanently block an entire website for having invisible text on some webpages.
- Meta tag stuffing
- Repeating keywords in the Meta tags, and using meta keywords that are unrelated to the site's content. This tactic has been ineffective since 2005.
- "Gateway" or doorway pages
- Creating low-quality web pages that contain very little content but are instead stuffed with very similar keywords and phrases. They are designed to rank highly within the search results, but serve no purpose to visitors looking for information. A doorway page will generally have "click here to enter" on the page.
- Scraper sites
- Scraper sites, also known as Made for AdSense sites, are created using various programs designed to 'scrape' search-engine results pages or other sources of content and create 'content' for a website.[7] The specific presentation of content on these sites is unique, but is merely an amalgamation of content taken from other sources, often without permission. These types of websites are generally full of advertising (such as pay-per-click ads[7]), or redirect the user to other sites. It is even feasible for scraper sites to outrank original websites for their own information and organization names.
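As a rough illustration of the frequency check mentioned under keyword stuffing, the sketch below computes the density of the single most repeated term on a page and flags it above an arbitrary threshold. Real ranking systems use many more signals; the 10% cutoff, the crude tokenization, and the example pages are assumptions made only for illustration.

    import re
    from collections import Counter

    def keyword_density_flag(text, threshold=0.10):
        # Density of the single most repeated term; flag when it exceeds the
        # threshold. The 10% cutoff and the tokenization are assumptions.
        words = re.findall(r"[a-z']+", text.lower())
        if not words:
            return False, 0.0
        term, count = Counter(words).most_common(1)[0]
        density = count / len(words)
        return density > threshold, density

    stuffed = "cheap tickets " * 50 + "our site about cheap tickets"
    normal = ("This page compares fares from several airlines, reviews their "
              "baggage policies, and links to the booking sites we found most reliable.")
    print(keyword_density_flag(stuffed))   # (True, ~0.49)
    print(keyword_density_flag(normal))    # (False, low density)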
Link spam
Davison defines link spam (which he calls "nepotistic links") as "... links between pages that are present for reasons other than merit."[8] Link spam takes advantage of link-based ranking algorithms, such as Google's PageRank algorithm, which gives a higher ranking to a website the more other highly ranked websites link to it. These techniques also aim at influencing other link-based ranking techniques such as the HITS algorithm.
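To make the mechanism concrete, the following toy computation (a minimal power-iteration sketch, not Google's actual implementation; the damping factor, iteration count, and page names are assumptions) shows how a handful of pages created only to link to a target can lift its score above an otherwise identical competitor.

    # Toy PageRank by power iteration (illustrative only, not Google's system).
    # "graph" maps each page to the list of pages it links to.
    def pagerank(graph, damping=0.85, iterations=50):
        pages = list(graph)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in graph.items():
                if outlinks:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share
                else:  # dangling page: spread its rank evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
            rank = new_rank
        return rank

    # "hub" links to two competing pages; both start out ranked equally.
    web = {"hub": ["target", "competitor"], "target": ["hub"], "competitor": ["hub"]}
    print(pagerank(web)["target"], pagerank(web)["competitor"])   # equal

    # Add ten pages whose only purpose is to link to "target".
    web.update({f"farm{i}": ["target"] for i in range(10)})
    scores = pagerank(web)
    print(scores["target"], scores["competitor"])                 # target now ranks higher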
- Link farms
- Involves creating tightly knit communities of pages referencing each other, also known humorously as mutual admiration societies[9] (a toy reciprocal-link check appears after this list).
- Hidden links
- Putting links where visitors will not see them in order to increase link popularity. Hyperlinked anchor text can help rank a web page higher for queries matching that phrase.
- "Sybil attack"
- This is the forging of multiple identities for malicious intent, named after the famous multiple personality disorder patient "Sybil" (Shirley Ardell Mason). A spammer may create multiple web sites at different domain names that all link to each other, such as fake blogs known as spam blogs.
- Spam blogs
- Spam blogs, also known as splogs, are fake blogs created solely for spamming. They are similar in nature to link farms.
- Page hijacking
- This is achieved by creating a rogue copy of a popular website which shows contents similar to the original to a web crawler but redirects web surfers to unrelated or malicious websites.
- Buying expired domains
- Some link spammers monitor DNS records for domains that will expire soon, then buy them when they expire and replace the pages with links to their own pages. See Domaining. However, Google resets the link data on expired domains.
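The "mutual admiration society" pattern is visible in the link graph itself. The sketch below is only a toy heuristic, not anything a search engine is known to run: it measures what fraction of links in a small crawl are reciprocated. The domain names and the idea of relying on this one number are assumptions for illustration.

    def mutual_link_ratio(graph):
        # Fraction of links that are reciprocated. A very high ratio across a
        # group of otherwise obscure domains is one crude link-farm signal;
        # real systems analyze the link graph far more thoroughly.
        links = {(src, dst) for src, targets in graph.items() for dst in targets}
        if not links:
            return 0.0
        reciprocated = sum(1 for (src, dst) in links if (dst, src) in links)
        return reciprocated / len(links)

    farm = {f"site{i}.example": [f"site{j}.example" for j in range(5) if j != i]
            for i in range(5)}
    organic = {"blog.example": ["news.example"],
               "news.example": ["wiki.example"],
               "wiki.example": []}
    print(mutual_link_ratio(farm))     # 1.0: every link is reciprocated
    print(mutual_link_ratio(organic))  # 0.0: no link is reciprocated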
Some of these techniques may be applied to create a Google bomb, that is, to cooperate with other users to boost the ranking of a particular page for a particular query.
- Cookie stuffing
- This involves placing an affiliate tracking cookie on a website visitor's computer without their knowledge, which will then generate revenue for the person doing the cookie stuffing. This not only generates fraudulent affiliate sales, but also has the potential to overwrite other affiliates' cookies, essentially stealing their legitimately earned commissions (an illustrative detection sketch follows).
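One commonly described implementation hides the affiliate URL in an invisible iframe so the visitor's browser makes the request, and receives the cookie, without displaying anything. The sketch below scans fetched HTML for that pattern; the zero-size check and the affiliate parameter names (affid, aff_id, tag, ref) are assumptions for illustration, not any network's real format.

    import re

    # Toy scan for the hidden-iframe variant of cookie stuffing. The size
    # check and parameter names are illustrative assumptions only.
    HIDDEN_IFRAME = re.compile(
        r'<iframe[^>]*(?:width="?0"?|height="?0"?|display:\s*none)[^>]*'
        r'src="([^"]+)"',
        re.IGNORECASE)
    AFFILIATE_HINT = re.compile(r"(affid=|aff_id=|tag=|ref=)", re.IGNORECASE)

    def suspicious_iframes(html):
        return [url for url in HIDDEN_IFRAME.findall(html)
                if AFFILIATE_HINT.search(url)]

    page = ('<p>Welcome!</p>'
            '<iframe width="0" height="0" '
            'src="https://shop.example/?tag=spammer-21"></iframe>')
    print(suspicious_iframes(page))   # ['https://shop.example/?tag=spammer-21']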
Using world-writable pages
Web sites that can be edited by users, such as wikis and blogs that allow comments, can be used to insert links to spam sites if appropriate anti-spam measures are not taken.
- Spam in blogs
- This is the random placement or solicitation of links on other sites, with a desired keyword placed in the hyperlinked text of the inbound link. Guest books, forums, blogs, and any site that accepts visitors' comments are particular targets and are often victims of drive-by spamming, where automated software creates nonsense posts with links that are usually irrelevant and unwanted.
- Comment spam
- Comment spam is a form of link spam that has arisen in web pages that allow dynamic user editing, such as wikis, blogs, and guestbooks. It can be problematic because agents can be written that automatically select a user-edited web page at random, such as a Wikipedia article, and add spamming links.[10]
- Wiki spam
- Using the open editability of wiki systems to place links from the wiki site to the spam site. The subject of the spam site is often unrelated to the wiki page where the link is added. In early 2005, Wikipedia implemented a default 'nofollow' value for the 'rel' HTML attribute. Links with this attribute are ignored by Google's PageRank algorithm. Forum and wiki administrators can use it to discourage wiki spam (a minimal rewriting sketch follows this list).
- Referrer log spamming
- When a browser follows a link from one web page (the referrer) to another (the referee), it passes the referrer's address to the site being visited. Some websites keep a referrer log showing which pages link to them. By having a robot access many sites repeatedly with a chosen message or internet address given as the referrer, that message or address appears in the referrer logs of every site that publishes one. Since some search engines base the importance of a site on the number of different sites linking to it, referrer-log spam can increase the search engine rankings of the spammer's sites by getting the publicly visible referrer logs of many sites to link to them.
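A minimal sketch of the nofollow countermeasure mentioned above, assuming comments arrive as small HTML fragments: it tags every anchor in user-submitted markup with rel="nofollow" so that link-based ranking ignores it. Real blog and wiki engines do this inside a proper HTML sanitizer; the regex here is only for illustration.

    import re

    def add_nofollow(html):
        # Add rel="nofollow" to each anchor tag in a user-submitted fragment.
        # A regex stands in for the real HTML sanitizer a blog or wiki uses.
        def rewrite(match):
            tag = match.group(0)
            if re.search(r'\brel\s*=', tag, re.IGNORECASE):
                return tag          # leave existing rel attributes alone
            return tag[:-1] + ' rel="nofollow">'
        return re.sub(r"<a\b[^>]*>", rewrite, html, flags=re.IGNORECASE)

    comment = 'Great article! <a href="http://pills.example/">cheap pills</a>'
    print(add_nofollow(comment))
    # Great article! <a href="http://pills.example/" rel="nofollow">cheap pills</a>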
Other types of spamdexing
- Mirror websites
- Hosting of multiple websites all with conceptually similar content but using different URLs. Some search engines give a higher rank to results where the keyword searched for appears in the URL.
- URL redirection
- Taking the user to another page without his or her intervention, e.g., using META refresh tags, Flash, JavaScript, Java, or server-side redirects.
- Cloaking
- Cloaking refers to any of several means to serve a page to the search-engine spider that is different from that seen by human users. It can be an attempt to mislead search engines regarding the content on a particular web site. Cloaking, however, can also be used to ethically increase accessibility of a site to users with disabilities or provide human users with content that search engines aren't able to process or parse. It is also used to deliver content based on a user's location; Google itself uses IP delivery, a form of cloaking, to deliver results. Another form of cloaking is code swapping, i.e., optimizing a page for top ranking and then swapping another page in its place once a top ranking is achieved.
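A crude way to probe for cloaking is to request the same URL with a browser-like User-Agent and a crawler-like one and compare the responses, as in the sketch below. The user-agent strings, the 0.9 similarity cutoff, and the example URL are assumptions, and legitimate differences (geotargeting, A/B tests, rotating ads) mean a low score is only a hint, not proof.

    import urllib.request
    from difflib import SequenceMatcher

    BROWSER_UA = "Mozilla/5.0 (compatible; ExampleBrowser/1.0)"     # assumed strings
    CRAWLER_UA = "Googlebot/2.1 (+http://www.google.com/bot.html)"

    def fetch(url, user_agent):
        req = urllib.request.Request(url, headers={"User-Agent": user_agent})
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def looks_cloaked(url, cutoff=0.9):
        # Compare the page as served to a "browser" and to a "crawler".
        as_browser = fetch(url, BROWSER_UA)
        as_crawler = fetch(url, CRAWLER_UA)
        similarity = SequenceMatcher(None, as_browser, as_crawler).ratio()
        return similarity < cutoff, similarity

    # Example: print(looks_cloaked("http://www.example.com/"))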
See also
- Adversarial information retrieval
- TrustRank
- Index (search engine) — overview of search engine indexing technology
References
- ^ SearchEngineLand, Danny Sullivan's video explanation of Search Engine Spam, October 2008 (accessed 2008-11-13)
- ^ a b c "Word Spy - spamdexing" (definition), March 2003, webpage:WordSpy-spamdexing.
- ^ a b Gyöngyi, Zoltán; Garcia-Molina, Hector (2005), "Web spam taxonomy", Proceedings of the First International Workshop on Adversarial Information Retrieval on the Web (AIRWeb), 2005 in The 14th International World Wide Web Conference (WWW 2005) May 10, (Tue)-14 (Sat), 2005, Nippon Convention Center (Makuhari Messe), Chiba, Japan., New York, NY: ACM Press, ISBN 1-59593-046-9
- ^ a b Ntoulas, Alexandros; Manasse, Mark; Najork, Marc; Fetterly, Dennis (2006), "Detecting Spam Web Pages through Content Analysis", The 15th International World Wide Web Conference (WWW 2006) May 23 - 26, 2006, Edinburgh, Scotland., New York, NY: ACM Press, ISBN 1-59593-323-9
- ^ Deconstructing Google bombs
- ^ A quick word about Googlebombs, January 2007
- ^ a b "Scraper sites, spam and Google" (tactics/motives), Googlerankings.com diagnostics, 2007, webpage: GR-SS.
- ^ Davison, Brian (2000), "Recognizing Nepotistic Links on the Web", AAAI-2000 workshop on Artificial Intelligence for Web Search, Boston: AAAI Press, pp. 23–28
- ^ [1] (PDF, 1.55 MiB)
- ^ Mishne, Gilad; David Carmel and Ronny Lempel (2005). "Blocking Blog Spam with Language Model Disagreement" (PDF). Proceedings of the First International Workshop on Adversarial Information Retrieval on the Web. Retrieved on 2007-10-24.
External links
To report spamdexed pages
- Found on Google search engine results
- Found on Yahoo! search engine results
Search engine help pages for webmasters
- Google's Webmaster Guidelines page
- Yahoo!'s Search Engine Indexing page
Other tools and information for webmasters
- AIRWeb series of workshops on Adversarial Information Retrieval on the Web
- Online tool that detects spam techniques on web pages
- A list of open proxy and bot IPs. Ban IPs on this list to prevent comment spam. Updated weekly.
- Protecting Your Wiki From Spam