Sunday, August 19, 2012

Dot "info" domains as a honeypot for search engine spammers?


The dot-info (.info) TLD seems to be the new domain of choice for search
engine sp*mmers, since there is an apparent lack of a Google aging delay
before listing and ranking them. They are indexed relatively quickly
after the first crawl by search engines and rank well for some
competitive terms. The sleaze monsters of search engine sp*m use
software to automate four separate areas: content gathering, article
creation, article distribution, and blog posting. Some may be using all
four techniques in concert in an attempt to blanket hundreds of sites
with article content, in order to pump up Google AdSense or Yahoo
Publisher Network ads.

Various types of thievery go on in this seamy underbelly of automated
search engine sp*m. Recently, "pre-loaded" content sites have been sold
by a software developer with built-in articles covering 150 subject
areas for $100, or at $10 for individual subjects. This amounts to the
creation of "AdSense-ready" article sites containing keyword-focused
content categories gathered from "free-to-use" article sites, against
the authors' clearly posted terms of use.

Those terms of use posted by authors and article distribution sites
generally prohibit the use of the "free-to-use" articles in compilations
sold for a fee, in membership sites, or in any other "for-profit"
collections. Some authors are expanding their terms of use to exclude
use on specific networks. Previous slime merchants have avoided
copyright legal action by giving those articles away with purchases of
for-fee software. I'd be surprised if the authors haven't found a way to
band together and prosecute those who abuse their usage terms in this way.

Authors have worried about "duplicate content penalties" when their
articles are distributed for use by other websites. It is extremely
unlikely that this type of use will lead to penalties for the author's
own site, which is linked from the resource boxes of those
original-content articles. The likely application of duplicate content
penalties gets interesting, though, when purchasers of "pre-loaded"
sites use them in exactly the same way: a precisely duplicated site
design with precisely the same articles and RSS feeds that never vary.
Those who run these mirror sites are the ones who will suffer duplicate
content penalties, because they are mirror sites, which have been
filtered for years. Lazy purchasers of "pre-loaded" article sites will
be the only ones receiving search engine penalties.

In another corner of this strange, slimy underworld of search engine
sp*m, article collection sites use cloaked crawlers that mimic the IP
addresses of search engine spiders in order to hide within the routine
crawl traffic on the sites they visit, trolling the web for articles to
steal and use in splogs and pre-loaded web site kits. These crawlers
slowly hit the sitemaps or article index pages of author sites, grab
URLs to return to later, then come back under different IPs and pound
away at 10 pages per second or more, seizing articles from those sites
against their posted terms of use. The crawlers usually belong to
hosting services, which then sell the stolen content to subscribers of
automated article site content after running it through
article-regurgitating software.
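Site owners of the era could at least unmask crawlers posing as search
engine spiders. A minimal sketch of the reverse-DNS plus forward-DNS
check that Google documents for verifying genuine Googlebot traffic
(the function and constant names here are my own, not part of any
library):

```python
import socket

# Genuine Googlebot reverse-DNS names end in one of these domains.
GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def hostname_is_google(host):
    # Pure string check, split out so it can be tested without a network.
    return host.endswith(GOOGLE_SUFFIXES)

def is_real_googlebot(ip):
    """A cloaked scraper can fake Googlebot's User-Agent, but its IP will
    not reverse-resolve into Google's domains and then forward-resolve
    back to the same address."""
    try:
        host = socket.gethostbyaddr(ip)[0]            # reverse DNS
    except OSError:
        return False
    if not hostname_is_google(host):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]  # forward confirm
    except OSError:
        return False
```

Logging the claimed-Googlebot hits that fail this check would expose
exactly the sort of cloaked article-stealing crawlers described above.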

That sleazy software product then takes the stolen articles, which are
copyrighted articles already written by other authors, reorders their
bullet points, swaps out verbs for synonyms, rearranges sentences, and
spits out a quite readable and sometimes passable article that may not
even be recognizable to the original authors. These stolen, regurgitated
articles are then submitted to article banks and distribution sites by
the splog creators, sometimes using automated submission software or
hosted services, so that the stolen, regurgitated articles are spread
across the web to create inbound links leading to search engine sp*m
sites.

Many of these .info domain owners use sleazy blog sp*m software to
create what have become known as "splogs", which use multiple blogging
platforms to automatically generate blogs updated with posts made
regularly in randomly timed sequences. They do this to appear to be
active bloggers, using the automation built into their software to
create keyword-targeted posts from RSS feeds of keyword-phrase-centered
news searches, and then to "ping" the blog search engines with automated
notices of new posts. Depending on the sophistication of the splog
owner, you will often see footer links leading to other splogs they
operate on separate topics.
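The "ping" in question is not exotic: it is the single XML-RPC call
`weblogUpdates.ping(blog_name, blog_url)` that services like weblogs.com
accepted from legitimate bloggers and sploggers alike. A sketch that
just builds the request body (the helper name is mine; sending it is one
`xmlrpc.client.ServerProxy` call against the ping service's URL):

```python
import xmlrpc.client

def build_ping_payload(blog_name, blog_url):
    # weblogUpdates.ping takes exactly two parameters: the blog's
    # display name and its URL. dumps() serializes them into the
    # XML-RPC methodCall body a ping service expects.
    return xmlrpc.client.dumps((blog_name, blog_url),
                               methodname="weblogUpdates.ping")
```

Because the call is this trivial, splog software could fire automated
pings for every machine-generated post with almost no effort.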

Virtually all of the .info domains I have seen in top-ranking results
for competitive phrases are entirely AdSense or YPN sites, including
splogs, full of automatically generated RSS news feeds and with
on-the-fly title tags and H1 headings generated from the search phrase
used to find the site. Even the copyright notice in the footer of some
of these sites is generated on-the-fly to match the search query. While
this technique is also used by some search engine sp*mming .com sites
(more than a year old, so past the aging delay), it can now be seen on
many more .info domains.
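The on-the-fly trick is easy to sketch: the search phrase rides along in
the HTTP Referer header of the visitor's click, so the sp*m page can
parse it out and echo it into the title, the H1, and even the footer
copyright line. A minimal sketch (the function name and default title
are illustrative, not from any real sp*m kit):

```python
from urllib.parse import urlparse, parse_qs

def title_from_referrer(referrer, default="Article Library"):
    """Recover the search phrase from the Referer URL's query string."""
    query = parse_qs(urlparse(referrer).query)
    # Google passed the query in the "q" parameter; Yahoo used "p".
    for param in ("q", "p"):
        if param in query:
            return query[param][0].title()
    return default
```

A page built this way reflects the searcher's exact words back at them
on every visit, which is precisely why it converts clicks so well.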

If Google is truly ranking sites based on clickstream data, imagine the
abuse these dynamic sp*m sites, full of nothing but RSS feeds or stolen,
regurgitated content, could spawn! They would soon rule the results
pages, because they reflect EXACTLY the search terms used by searchers,
which leads to top click-throughs, which generates still higher
rankings. I see a loophole for serious abuse here, and I hope the PhDs
at Google develop a filter for the technique, fast.

This exact-match landing page idea is widely used in the pay-per-click
game, as the most highly trained SEM specialists recommend landing pages
that exactly reflect what the user clicked on, since that leads to
higher conversion ratios. Perhaps a programmer who spends his days
creating PPC landing page scripts is spending his nights creating .info
domains with dynamic titles and meta data for competitive search
phrases, to rule organic SEO?

Of course, whois ownership information is masked by many recent .info
domain owners, since these domains were purchased specifically as
se-sp*mming sites. When looking up the whois records of highly ranked
.info domains to check the creation (purchase) dates, you will see a
preponderance of October through December 2005 creation dates, with a
smattering of sites created in January 2006 among those ranking well as
splogs. This would be about the time that sp*mmer forums began to notice
and discuss the lack of an aging delay for .info domains.

Whois records for the dot-com (.com) sites ranking for those same
competitive searches show that nearly all are more than a year old, and
most are 3 to 5 years past their creation dates.
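Checking those creation dates is a matter of grepping raw whois output.
A small sketch assuming the field labels and ISO-style dates the .info
registry output of the era used ("Created On: 2005-10-14T06:21:05Z");
the helper name is mine, and other registries label the field
differently:

```python
import re
from datetime import datetime

def creation_date(whois_text):
    """Find the registration date in raw whois output. Matches either
    of the two common field labels of the period."""
    m = re.search(r"(?:Created On|Creation Date):\s*(\d{4}-\d{2}-\d{2})",
                  whois_text, re.IGNORECASE)
    return datetime.strptime(m.group(1), "%Y-%m-%d").date() if m else None
```

Run over the whois output of a list of top-ranking domains, this makes
the October-to-December 2005 cluster described above easy to tabulate.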

This suggests that clear algorithmic aging filters are applied to all
TLDs *except* .info, and that the apparent lack of .info filtering
allows a bypass of the so-called "sandbox effect" that delays the
indexing and ranking of other TLDs. My belief is that Google is using
this lack of aging delay and lack of filtering on .info domains as a
honeypot for search engine sp*m: collect the bad guys all in one
otherwise rarely used TLD, then run wide sweeps, tracing their tactics
in order to further filter (forgive me for using the term) black hat SEO
techniques.

January 20, 2006 by Mike Banks Valentine
