Archive for the 'Traffic Building' Category

Contest Linkbaiting - Win Yourself A Link and a Coconut

I just want to take a second and pass along one of the coolest linkbait ideas I’ve seen in a long time. Erik Vossman of BlogtownPress contacted me today to inform me of a contest they were running to promote the launch of the BlogtownPress blogging network.

Erik pointed me to the ‘Link to a Coconut’ contest. Here’s the deal: Erik has picked 3 random posts from the blogs in the BlogtownPress network. To enter, you post links to any posts within the network, up to a maximum of 3 links per post of your own. When the contest finishes, Erik will pick a random trackback from each of the three BlogtownPress network posts as a winner.

The prizes are what make this contest unique - each winner receives a coconut straight from Hawaii, where the BlogtownPress headquarters are located. The coconut will be shipped to the winners free of charge. Additionally, the winners will each receive a link on Erik’s blog.

This is a unique piece of linkbait - each entry in the contest nets 3 links to the network, which will build up a large number of links to the new blogs in relatively short order. Of course, the only way to make something like this really succeed is if the prizes are unique and somewhat valuable. The link is not the motivation here - it’s all about the coconut. Who wouldn’t want their own custom coconut shipped to them from Hawaii?

Well, I guess I will play along on this one - I want that coconut. So, my three picks are:

  1. Announcing Blogtown Press from Erik Vossman’s personal blog
  2. 3 Kinds of Successful Bloggers from Blogging on Empty
  3. An Inconvenient Truth from Fueling the World

So there you go - check out the contest yourself!

Typo Squatter Loses Thousands of Dollars Due to Missed Details

Update: the mystery is finally solved

Setting

Yesterday morning, I mistyped the URL while visiting Google; I accidentally typed http://www.google.cm. This redirected me to a page on the domain http://www.agoga.com, which actually looked like a somewhat convincing, spartan page, very similar in style to what you would often see if your browser failed to find a site - except that it also contained a search bar and a few unobtrusive links to subjects like ‘Travel’, ‘Cars’, etc., the kind of subjects you would see on a typical parked domain page.

I thought that was kind of interesting - a way of monetizing typos that looked, to me at least, like a reasonably effective way of squatting on a typo. At the time, though, it didn’t seem noteworthy enough to give it any further thought.

A little later, I was trying to get to PayPal, and again I made the same slip: I typed http://www.paypal.cm. Once again, I landed on the same page. Now intrigued, I began experimenting by checking a variety of other domains with the .cm extension. Many big names in the industry had their .cm equivalents pointed at the same page I had viewed earlier.

That, too, is not especially notable. A squatter could easily have registered a whole variety of company names in that TLD - it’s done all the time, and is considered a valid tactic for making some money off of parked domains.

What finally made it notable was when I started entering random domains and sequences of characters in the .cm TLD, such as http://sdfjhksd.cm and http://www.oiyt.cm. These also point to a landing page on agoga.com, albeit a different landing page from the one used on misspellings of major domains.

Agoga.com has every unregistered .cm TLD pointed to their landing pages!

While there are a bunch of legitimately registered .cm sites which resolve elsewhere, any other .cm domain - whether nonsense characters or a misspelling of a ‘real’ domain name - resolves to the same IP address, a cluster at agoga.com. The only way this could be accomplished is by changing the default settings of the master DNS servers for the .cm TLD so that a wildcard catches every unregistered name. Agoga must have either hacked the .cm registrar in Cameroon or paid the registrar off. Either way, I suspect something illegal has occurred here; I doubt this type of redirecting is approved by IANA.
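If you want to reproduce the check, here is a minimal sketch of the kind of probe I ran, using nothing but Python’s standard library (the domain list simply mixes the misspellings and nonsense strings mentioned above):

```python
import socket

# Misspellings of big-name domains plus nonsense strings; at the time
# of writing, every unregistered .cm name resolved to the same
# agoga.com cluster.
domains = [
    "www.google.cm",
    "www.paypal.cm",
    "sdfjhksd.cm",
    "www.oiyt.cm",
]

for domain in domains:
    try:
        print(f"{domain} -> {socket.gethostbyname(domain)}")
    except socket.gaierror:
        print(f"{domain} -> did not resolve")
```

If the wildcard is in place, every unregistered name in the list prints the same IP address.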

[Figure: Agoga.com Alexa traffic graph]

Opportunity

How much type-in traffic would you think is generated by people misspelling .com as .cm? Agoga.com has an Alexa rank of 6,915, which by some estimates indicates thousands or tens of thousands of visitors per day. Keep in mind that this site has not even been running for three months yet; today’s Alexa rank alone was 2,913.

Since Alexa rankings are biased towards a technical crowd, I think it is safe to assume that the true numbers are fairly large. Now, a proper landing page optimized for pay-per-click advertisements can easily attain a 30%-40% click-through rate - especially if one were to put some effort into targeting the advertisements around the domain name or the keywords on the corresponding .com page.

It is obvious that with this type of traffic, Agoga.com could be pulling in some huge advertising revenue - as much as $1,000-$2,000 per day. They should have it made in the shade, for all intents and purposes. But they have screwed up royally.
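To put rough numbers behind that figure, here is a back-of-envelope calculation; the visitor count and earnings-per-click are my own assumptions for illustration, not measured data:

```python
# Back-of-envelope revenue estimate for the typo landing pages.
# All three inputs are assumptions, not measurements.
visitors_per_day = 20_000    # rough guess based on the Alexa rank
click_through_rate = 0.35    # the 30%-40% CTR discussed above
earnings_per_click = 0.20    # dollars per ad click; a modest PPC payout

daily_revenue = visitors_per_day * click_through_rate * earnings_per_click
print(f"Estimated revenue: ${daily_revenue:,.2f} per day")
# -> Estimated revenue: $1,400.00 per day, squarely in that range
```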

How did they screw up?

Agoga will return one of two landing pages, depending on what type of domain you enter. One version is used when squatting the domain of a large company or popular website - their highest-traffic domains, such as http://www.google.cm, point here. The other, which they seem to use for the domains of smaller websites and for nonsense or misspelled domains, can be seen at http://www.oeiurt.cm (note the random domain name…), at http://www.caydel.cm (a typo of this domain), or at the Agoga main page at http://www.agoga.com.

The first type of landing page is partially broken - it is otherwise relatively well done: minimal, and it could easily get the user to click through to their main site. The problem is that no ads are served if the user enters certain search queries. While an advertising page is shown for a query such as ‘digital cameras’, ‘dvd’, or ‘knitting’, queries such as ‘infohatter’, ‘caydel’, or whatever else return nothing. Sure, probably nobody is bidding on those terms; but wouldn’t it be a better plan to grab the first result from a Google query for the term, scrape it for keywords, and return ads based on those? Potentially millions of long-tail opportunities are being missed here, thrown away for no good reason.
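For what it’s worth, the fallback would not be hard to build. Here is a sketch of the idea - rather than scraping a Google results page, this variant scrapes the title of the matching .com site directly, a simpler version of the same trick; the function name and approach are my own illustration, not Agoga’s actual code:

```python
import re
import urllib.request
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for", "on"}

def fallback_keywords(typo_domain: str, top_n: int = 5) -> list[str]:
    """Guess ad keywords for an otherwise-unmonetized query by
    scraping the title of the real .com site the visitor wanted."""
    real_domain = typo_domain.replace(".cm", ".com")
    raw = urllib.request.urlopen(f"http://{real_domain}", timeout=5).read()
    text = raw.decode("utf-8", errors="ignore")

    # Pull the page title and keep the most frequent non-trivial words.
    match = re.search(r"<title>(.*?)</title>", text, re.I | re.S)
    title = match.group(1).lower() if match else ""
    words = re.findall(r"[a-z]{3,}", title)
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

# e.g. fallback_keywords("caydel.cm") would return words from
# caydel.com's title to target ads with, instead of showing nothing.
```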

The second type of landing page is broken outright - the script Agoga uses to generate this style of landing page simply does not work. Any search query or link click redirects you to the same page you just left, with a nice photo of a mountain range or other scenery visible in place of the advertisements that should be shown. They are making nothing from this type of landing page; in fact, they are losing money on it due to bandwidth costs.

Opportunity Missed

I would be willing to bet that the majority of the traffic Agoga.com receives ends up at the second landing page - the broken one. While they probably have their highest-traffic domains, such as http://www.google.cm, pointing to their ‘working’ script, they are missing out on the whole long tail of domain misspellings. Think about it this way: anyone, anywhere, who types .cm instead of .com is sent to the broken script. That could be anyone typing in one of a billion domains.

Additionally, a fair number of people who misspell the domains of large sites such as Google will make multiple mistakes - they may not just misspell google.com as google.cm, but type something like gogle.cm or googel.cm and be sent to the broken page. The sketch below shows just how many of these one-slip variants a single domain has.
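Here is that sketch - a few lines that enumerate the common single-character slips (dropped letters, doubled letters, swapped neighbours) for one domain label; every variant, typed with a .cm ending, lands on the broken page:

```python
def single_typos(name: str) -> set[str]:
    """Generate one-slip misspellings of a domain label: dropped
    letters, doubled letters, and transposed neighbours."""
    typos = set()
    for i in range(len(name)):
        typos.add(name[:i] + name[i + 1:])        # deletion: 'gogle'
        typos.add(name[:i] + name[i] + name[i:])  # doubling: 'gooogle'
        if i < len(name) - 1:                     # transposition: 'googel'
            typos.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    typos.discard(name)  # swapping identical letters is no typo at all
    return typos

print(sorted(single_typos("google")))
# 'gogle' and 'googel' from above both appear, along with a dozen more.
```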

What Are You Trying to Tell Us Here?

The point should have become clear by now, but I will write it out nice and neat anyways: neglecting details can lose you a lot of money. I do not know whether this second landing page has ever actually worked for Agoga. Perhaps it has, and it only stopped working 15 minutes before I stumbled upon it the first time. Perhaps it has never worked. The fact of the matter is, the person or persons who own Agoga.com (Whois data indicates Nameview, Inc., BTW) are losing thousands of dollars per day. It is probably safe to assume that they don’t even realize this; if they did, they would fix it in relatively short order.
The people responsible for this had an amazing idea, and they ran with it 90% of the way to the perfect money-making opportunity. But they have missed a few small details which are costing them perhaps thousands of dollars per day. If they were to fix these small problems, they could probably nearly double their income.

I appreciate your comments and feedback!

Unexpected Results of Technorati Inclusion

As I wrote earlier, I have been re-included in Technorati, which is great. I am getting lots of traffic from them, and a few comments and links I otherwise wouldn’t have. I’ve also noticed another major surprise - autoblogs are now grabbing my posts from Technorati tag RSS feeds, which may lead to duplicate content and link devaluation problems.

The Good

Well, I may as well start on a positive note. By picking up my posts from Technorati, these autoblogs (linked examples) are giving me a bunch of backlinks I otherwise wouldn’t have had. Additionally, I have been getting the odd bit of traffic from these blogs, although people coming from them don’t always seem to stick around. So there are some upsides to the fact that I am getting syndicated all over creation.

The Bad

Of course, it is a downside that many of the autoblogs are syndicating my content without any attribution of authorship, or anything to note that the posts are not original. This annoys me - I don’t care if people quote me to high heaven in their posts, or even quote a post wholesale, but most real people have the courtesy to attribute what they borrowed from me. These autoblogs don’t even do that.

The Ugly

And wait, it gets worse. I am wondering how all this will interact with Google’s duplicate content filter. From what I know of the dupe filter, Google assumes that the first place it crawls a given chunk of textual content is the proper owner. In these days of RSS feeds and tag searching, I have found copies of my posts on these autoblogs within 30 seconds of publishing them on my own blog. What would happen if a post gets crawled on one of these autoblogs first, before my blog is crawled? Would Google attribute authorship to them and leave me out in the cold?

Conclusion
I am sure Google is smart enough to recognize spam blogs quite effectively, but I wouldn’t doubt that there is still some level of risk inherent in the process. Additionally, if we think about the situation in terms of link building, overall incoming link quality plays a large role in how much Google trusts your site*. Obviously, if your site were referenced by 10 .edu sites out of 12 incoming links total, you would probably be trusted more by Google than if you had 10 .edu links out of 2,000 links total. In the second case, the value of the .edu backlinks is diluted by the sheer mass of ordinary links, and the average quality of your incoming links is lower.
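To make the dilution concrete, here is a toy calculation; the trust weights are made up purely to illustrate the ratio, and are not anything Google has published:

```python
# Illustrative only: made-up trust weights per link type, used to
# compare the average incoming-link quality of the two profiles.
EDU_WEIGHT, OTHER_WEIGHT = 1.0, 0.1

def average_link_quality(edu_links: int, total_links: int) -> float:
    other_links = total_links - edu_links
    total_weight = edu_links * EDU_WEIGHT + other_links * OTHER_WEIGHT
    return total_weight / total_links

print(average_link_quality(10, 12))    # ~0.85: the .edu links dominate
print(average_link_quality(10, 2000))  # ~0.10: diluted by 1,990 others
```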

So, I am not sure what to think about this autoblog copying issue. I would assume that every blog indexed by Technorati has the same problem, whether its author recognizes it or not. Thoughts, anyone?

* Yes, I know - it’s debatable whether incoming link quality plays a role in how much Google trusts you. I personally think it does, so I am sticking with this viewpoint. Hate mail into the comment form, please!

Text Link Ads Launches A New Link Baiting Service - What is it Worth?

Text Link Ads, one of the premier companies in the link-sales industry, has now launched a new link baiting service. In short, they offer two plans: one at the $5,000 level, and one which will cost you $10,000. With both plans, they create a link bait item, submit it to the major social media sites, and email it to appropriate bloggers. The $10,000 plan also gives you additional creative ideas beyond the link bait idea itself; they submit to a wider range of social media sites, email twice as many bloggers, and, if possible, submit your site to CSS galleries.

Patrick Gavin and Andy Hagens are two of the top names in the link-building field, and the price of this service reflects that. Would a service like this be worth the high cost? It is cheap in comparison to some of the other link baiting services available, but I can hardly believe it is worth $10,000. Let’s be honest: the major time investment is in coming up with the idea. If you are somewhat skilled at crafting your own headlines and summaries, you could do the submissions yourself, in addition to emailing relevant bloggers.

So, is the idea worth $10,000 to you? I would appreciate hearing from anyone who has purchased a link baiting service in the past. Tell me how well it worked! I want to know what type of ROI you think you received from the service!

Technorati Took Me Back Again…

After nearly 3 months of constant rejection from Technorati, I have been taken back. An hour or so ago, I noticed the Technorati spider travelling my site, so I checked the Technorati page for this blog, and sure enough, I was indexed again! I still haven’t heard back from Technorati support about any of the tickets I’ve submitted, but hey - I’ve never been one to look a gift horse in the mouth. This is really nice, since I’ve already seen traffic coming in from Technorati as well.

I guess I now have to take back some of the mean things I’ve said about Technorati with respect to this issue. I’m back in black…

Popular Blogging Platforms May Suffer Search Engine Penalties

Do blogs suffer from duplicate content penalties in major search engines? A few days ago, this thought struck me as I was beginning to rework the look and feel of this blog - with posts showing up in as many as four different locations on the blog, was there any reason to think Google, Yahoo!, MSN, and other search engines might actually be penalizing it? The thought was brought further to the front when Barry Schwartz made a post on Search Engine Roundtable pointing to a forum thread on the subject.

Let’s consider the WordPress blogging platform for now, although this should hold true for any other blogging platform and, indeed, a variety of content management systems. Looking at one of my previous posts, a list of SEO resources, you will find it at a variety of locations:

  1. Its permalink page: http://www.infohatter.com/blog/creating-a-list-of-seo-resources/
  2. Second page of the ‘front page’: http://www.infohatter.com/blog/page/2/
  3. Category for Advertising: http://www.infohatter.com/blog/category/advertising/
  4. Category for Link Building: http://www.infohatter.com/blog/category/link-building/
  5. Category for SEO: http://www.infohatter.com/blog/category/seo/
  6. More categories…
  7. Archives for August 2006: http://www.infohatter.com/blog/2006/08/

So as you can see, this text is replicated in full in as many as 10 different locations on my blog. What we are looking at here is a conflict between user-friendliness and search-friendliness. This is an ideal setup by accessibility standards - the more ways you provide to access a piece of information, the more user-friendly it is. But does this affect how this post is indexed and ranked in the search engines?
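You can see the duplication for yourself by pulling each candidate URL and checking whether a distinctive phrase from the post appears on it. A rough sketch (the URLs are the ones listed above; the phrase is just a stand-in for any distinctive sentence from the post):

```python
import urllib.request

# A distinctive phrase from the post, and the pages that might carry it.
snippet = "list of seo resources"
urls = [
    "http://www.infohatter.com/blog/creating-a-list-of-seo-resources/",
    "http://www.infohatter.com/blog/page/2/",
    "http://www.infohatter.com/blog/category/seo/",
    "http://www.infohatter.com/blog/2006/08/",
]

for url in urls:
    try:
        page = urllib.request.urlopen(url, timeout=10).read()
        text = page.decode("utf-8", errors="ignore").lower()
        print(f"{url} -> {'copy found' if snippet in text else 'not found'}")
    except OSError as err:  # covers URLError, timeouts, bad responses
        print(f"{url} -> error: {err}")
```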

When I perform a Google search restricted to my site (a site: query) for this post, I see that the single-post page is the first result shown. So this is good - it means that, when limited only to my site, Google ranks the single-post page at the top, which is what I want to see. But are the rankings affected when somebody searches the entire Google index? Do the other copies of this post on this blog perhaps cause it to rank lower than it would if it were only available in one place?

I think they do. In the past, Google has typically penalized duplicate content hard, often sending sites to the supplemental index for such an offense. So, are all bloggers subject to the same type of penalization?

What can be done about this? One of the first solutions to jump out at me would be to include a robots meta tag such as <meta name="robots" content="noindex,follow"> on the category and archive pages; this would ensure that only the front page and the single-post pages are indexed.

I would appreciate any thoughts or comments on this matter - it is something that should concern all bloggers. Perhaps it is having a very noticeable impact on readership levels? Either way, it is something that bears some serious thought.


Trusted Wikipedia and AboutUs.org Links!

I just read an interesting post by Andy Hagens called ‘Four Trusted Links You Can Build Today’. I have a few comments on the article that I thought I would share (lucky you!).

In his post, Andy writes,

“A lesser-known Wikipedia page: Do you have an investment-related site? Do not try to add your homepage link to en.wikipedia.org/wiki/Investment or en.wikipedia.org/wiki/Stock. Instead, add the deep link to your “The Forward P/E Ratio Explained” page from http://en.wikipedia.org/wiki/PE_ratio… it’ll have a much better chance of still being there tomorrow.”

Now, I have some good and some bad things to say about this.

The Bad: First of all, anyone who spams their link into Wikipedia merely for the sake of the link should be stoned. And I don’t mean in the nice, familiar Western way. I mean with rocks. Really, really large rocks.

The Good: That said, this can be a really effective link building method, if your sites contain quality content. Again, if you are merely spamming links, see above.

When I originally started looking at this method some time ago, I came across an interesting realization - many ‘deeper’ subjects are simply not covered. Just this evening, for instance, I was trying to find ways to build Wikipedia links to a site of mine when I realized that a number of subjects and topics I covered on the site DID NOT HAVE EXISTING WIKIPEDIA PAGES.

So, I did what any web designer would do in that situation - I created the pages. I wrote some good, high-quality content for the Wikipedia articles. Obviously, they were subjects I was already interested in, since I had built web pages and complete sites around some of them.

So, in essence, I now have a bunch of Wikipedia articles which link to my page among very few others.

Regardless of how you get your links into Wikipedia, there are a few ways to ‘pimp out’ your Wikipedia links so that they pass on the most link juice possible:

  1. Interlink the pages - In short, search Wikipedia for all mentions of the subject of the article that contains the link to your page, and link those mentions to the article in question. This, to some extent, raises the profile of the article within the Wikipedia domain; Google is widely believed to consider internal links as well as external links when determining how important a page is within a site.
  2. Maintain the pages - As with any web page, the more regular the updates, the more often the page gets spidered. Staleness of a page may be a factor in Google’s algorithm, although there is some debate on that. At any rate, by making regular contributions to the article and steadily improving its quality, you will build a reputation on the site, and your changes will be less likely to be immediately reverted.

Another site I noticed that is an easy mark for a good, albeit nofollowed, link is AboutUs.org. This new site has been gaining popularity recently, and I have begun noticing it linked from the Domaintools.com tools. It is actually really interesting - it immediately grabs a site thumbnail and an excerpt, isolates contact information and maps it with Google Maps, and performs a bunch of other interesting feats, all in a great MediaWiki format. For a good example, check out the AboutUs.org page for Oilman’s blog.

Let me know what you think!
