Archive for the 'SEO' Category

This guy will ask spammers to stop spamming you in exchange for links!

This beauty just landed in my blog’s spam filter this morning:

Name: Anikrichard | E-mail: | URI: *spamlink* | IP: 72.9.235.218 | Date: September 5, 2007

hello , my name is Richard and I know you get a lot of spammy comments ,
I can help you with this problem . I know a lot of spammers and I will ask them not to post on your site. It will reduce the volume of spam by 30-50% .In return Id like to ask you to put a link to my site on the index page of your site. The link will be small and your visitors will hardly notice it , its just done for higher rankings in search engines. Contact me icq or write me , i will give you my site url and you will give me yours if you are interested. thank you

This is one of the more appealing spam comments I’ve seen - in return for doing you a favour and asking the nasty spammers to stop, all he wants is a link to his site.

I don’t have the time myself, but someone should talk to this guy and get his URL to ‘add a link’, then send him off to the Google Webspam team. It would be fun to see what happens. If anyone wants to take it on, let me know - I would love to see how it turns out / how quickly the guy is banned.

Politics and Web 2.0

I just want to throw this out there quickly.

I just came across the ‘Leadership Matters’ campaign at Leadershipmatters.ca, which Ontario PC Candidate John Tory has launched in preparation for the 2007 Ontario General Election. Although there has been a lot of focus on the use of social media by the 2008 Presidential Candidates, I think they could learn a bit from what Tory is doing here!

Check out the site, I am curious what your impressions of it are!

Also, keep your eyes on the blog for a mega-post on Ask.com and its new features in the next couple of days!

In the era of Web 2.0, we are seeing the advent of AJAX technologies to drive web pages and applications. This is good, for the most part - AJAX is a great tool. AJAX-powered pages and applications are often quicker and more responsive than their predecessors, since they don’t require a full page load for every operation, opting instead to reload and refresh only the parts of a page that need to change.

On the flip side, AJAX can cause problems for search engines. Since the browser does not necessarily move to a new URL to display new data, much of the data and text content on an AJAX-powered site may not be accessible to search engines. Search engines will not submit forms or otherwise interact with the AJAX-powered sections of a site to cause page updates to be triggered, causing much of the content of an AJAX-powered site to be missed.

This is an issue for SEOs everywhere - as AJAX becomes more prevalent, client demand for this technology will continue to increase. So how do we deal with AJAX from an SEO perspective?

Rich McIver of SoftwareDeveloper.com emailed me yesterday about their latest feature article, which goes into some detail on techniques and best practices to help make your AJAX application search engine friendly. Some of the information is pretty standard (‘submit a sitemap containing static copies of the fragments returned through AJAX’), while some of it deals with the problem at a more design / technical level, such as a presentation on the ‘Hijax’ model of AJAX web design.

The article also goes into some common ways of handling AJAX for SEO purposes, and shows the shortcomings of these approaches.
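One standard technique mentioned above - submitting a sitemap containing static copies of the fragments normally returned through AJAX - is easy to automate. Here is a minimal sketch; the URLs are hypothetical, and in practice each one would point at a server-rendered copy of an AJAX fragment:

```python
from xml.etree import ElementTree as ET

# Hypothetical URLs of static, server-rendered copies of AJAX fragments.
fragment_urls = [
    "http://example.com/static/products.html",
    "http://example.com/static/reviews.html",
]

def build_sitemap(urls):
    """Build a minimal sitemap.xml listing static copies of AJAX fragments."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        # <loc> is the only required child of <url> in the sitemap protocol.
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(urlset, encoding="unicode")

sitemap = build_sitemap(fragment_urls)
```

The resulting file would be submitted to the engines as usual; the static copies give crawlers a path to content they would otherwise never trigger.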

How have you handled AJAX in your web pages / applications? Anything you would add beyond what the linked articles suggest?

Increasing the Profile of the SEO Industry

A recent post at SEOmoz once again brought up the plight of the Search Engine Optimization industry; as effective and needed as SEO consultants are, blackhats and webspammers have given the industry a black eye.

People see the search community in two camps: Search Engines (good) and Spammers/SEOs (bad). The way they see it, the search engines work diligently to reduce spam, and show the most relevant results for their queries. On the other side of town you have the SEOs and spammers, who try to make sites rank for their own ends, therefore throwing off the good, pristine search engine results.

What the public needs to realize is that real SEO isn’t about making pill spam or other useless garbage rank - SEO is about ensuring that relevant content ranks in the SERPs for related queries. For most site owners, there is little or no value in ranking for irrelevant queries. Often, the pursuit of rankings forces SEOs to review a site’s content and make it more relevant and of higher quality so that it attracts links - all of which creates better websites and a better overall user experience on the Internet.

I think the SEO community needs to reach out to the public in some way to raise our profile in the public eye, differentiating ourselves from the communities of webspammers and other devious characters. We need to present ourselves as a legitimate, valuable industry. I think that we are on the right track as far as it goes, but more has to be done.

Does anyone have any suggestions how to better clean up the SEO image? I would appreciate your thoughts.

.cm Registry Redirection Mystery Solved

A while back, I first commented on the fact that the entire .cm (Cameroon) TLD appears to be redirected to the Agoga.com parking page in my post ‘Typo Squatter loses Thousands of Dollars Due to Missed Details‘. This post generated tons of interest and comments, and a large amount of search traffic.

The redirection has become an ongoing mystery for many domainers - few could fail to see the benefits of controlling the .cm registry. Think of the profits - any time you mistyped ‘.com’ as ‘.cm’, you went to an ad-driven parking page. It is estimated that millions of people make this sort of error every day, leading to hundreds of thousands of dollars of ad revenue daily. In essence, whoever redirected the .cm registry pulled off the perfect coup of the domaining world.
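The revenue claim above is easy to sanity-check with a back-of-envelope calculation. Every number here is an assumption chosen for illustration, not a reported statistic:

```python
# Back-of-envelope check of the '.cm' typo-traffic revenue claim.
# All three inputs are assumptions, not reported figures.
typo_visits_per_day = 3_000_000  # assumed daily '.cm' typo visits
click_through_rate = 0.05        # assumed share of visitors who click an ad
revenue_per_click = 0.75         # assumed average payout per click, USD

daily_revenue = typo_visits_per_day * click_through_rate * revenue_per_click
print(f"Estimated revenue: ${daily_revenue:,.0f} per day")
```

Even with conservative inputs, the estimate lands comfortably in the ‘hundreds of thousands of dollars daily’ range the domaining community has speculated about.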

Well, the mystery is finally solved. According to a recent article on CNN, the mastermind behind the .cm switch is Kevin Ham, domainer extraordinaire with an estimated net worth of $300 million. According to the article, Ham flew a group of his employees to Cameroon to convince the government-run registry of the value of landing-page redirection. Ham splits the profits from this scheme with the government of Cameroon.

According to sources, he is also eyeing Colombia (.co), Oman (.om), Niger (.ne), and Ethiopia (.et) for the same type of agreement he has with Cameroon.

Publishing via RSS waives your copyright

Or so it was claimed to me the other day. Not necessarily your entire copyright, but certain protections within it.

I was speaking with some other webmasters the other day when the issue of scraping other sites’ RSS feeds for content came up. We got to talking about the legality of this. I was personally surprised at some of the arguments put forward justifying, if only in a legal sense, the scraping of RSS feeds.

The argument runs like this: since you are publishing your content via a public syndication channel - your RSS feed - you waive your right to disallow republication. In effect, you are giving away your content for people to do what they want with, almost as if you had published it under a Creative Commons license.

Obviously, that is an undesirable interpretation of the purpose of RSS feeds - if it were really so simple, I would cut my feed off entirely. Unfortunately, 90% of my regular readers read via RSS; if I stopped publishing my feed, I would likely lose a large percentage of my audience. And this interpretation must be wrong on another level - I am quite sure I would get sued if I took the content from the feed of a big-name blog and republished it in a book. I doubt I would win.

So certain rights must be retained still when publishing an RSS feed containing your content.

Let’s take a look at this from a legal perspective.

When you access an RSS feed on a website, you would naturally assume that there is a certain implied license to the data. It would be fair to assume (and legally defensible) that the owner who made the feed available intended for you to be able to add it to Bloglines, Google Reader, or another similar aggregator.

This is where the implied license idea breaks down. A scraper could easily claim that he is ‘another similar aggregator’. He is doing a fairly similar job to Bloglines and Google Reader; he is taking your data and aggregating it in a single place for readers. His layout may not be usable, and the purpose of the site may only be to raise Adsense money, but in effect, when using your data, he is considering himself just another aggregator.

This means that the implied license idea does virtually nothing to protect your data from aggregators, even if you do not yourself want them to use your data.

So how can we make undesirable resyndication illegal?

The best recourse currently available is to specify on your blog what rights or license are granted to the content within your feed. As to where this license must be placed, that is another question. Should it be on a separate page? Under the ‘Syndicate’ link? Within the feed itself? This in itself could have legal implications - the most legally robust method would be to make someone explicitly agree to your terms via a double-confirmation form before they access your feed, or even learn its location.

Any license should contain an explicit, enumerated list describing who is allowed to access your feed, for what purpose, and under exactly what conditions they may use your content. Do not leave any grey areas.
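One place such a statement can live is in the feed itself: the RSS 2.0 specification defines an optional channel-level <copyright> element. Here is a minimal sketch of a feed carrying an explicit license statement; the feed details and license wording are hypothetical, and none of this is legal advice:

```python
from xml.etree import ElementTree as ET

def build_feed(title, link, license_text):
    """Build a minimal RSS 2.0 channel carrying an explicit license statement.

    RSS 2.0 defines an optional <copyright> element at the channel level,
    which is one natural home for a resyndication policy.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    # Spell out exactly who may reuse the content, and how.
    ET.SubElement(channel, "copyright").text = license_text
    return ET.tostring(rss, encoding="unicode")

feed = build_feed(
    "Example Blog",  # hypothetical feed details
    "http://example.com/",
    "Copyright 2007. Personal aggregators may display this feed; "
    "republication on other websites is not permitted.",
)
```

A <copyright> element alone will not stop a determined scraper, of course, but it removes the ‘you never said I couldn’t’ defence.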

So what? People are still going to scrape my content!

Undoubtedly. However, depending on how important this is to you, it will go a long way towards firming up your legal footing should you decide to take action against unauthorized resyndication. I would hope that people take this matter seriously - every time someone scrapes your content, you run the risk of losing rankings for the terms contained within it. While Google does a certain amount to filter out these ‘webspam’ sites, they aren’t perfect.

Related Link: ghostwriter ought to be careful, as RSS feeds can waive your copyright.

Increases in Referral Spam

Has anyone else noticed an increase in referral spam over the last two weeks? I am seeing almost 500 referral spam hits per day, with more on some days. This is particularly annoying because I use Dax’s referral RSS feed to keep an eye on my referrals… this recent trend is making that more or less useless!
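One workaround is to filter referrals against a blocklist before they reach a report or feed like the one above. A minimal sketch, assuming a hand-maintained blocklist of spam referrer domains (the domains below are made up):

```python
from urllib.parse import urlparse

# Hypothetical blocklist of referrer domains seen only in referral spam.
SPAM_DOMAINS = {"cheap-pills.example", "casino-bonus.example"}

def is_spam_referral(referrer_url):
    """Return True if the referrer's hostname, or any parent domain of it,
    appears on the blocklist."""
    host = urlparse(referrer_url).hostname or ""
    parts = host.split(".")
    # Check the full hostname and each parent domain against the blocklist,
    # so 'www.cheap-pills.example' matches the 'cheap-pills.example' entry.
    return any(".".join(parts[i:]) in SPAM_DOMAINS for i in range(len(parts)))

referrals = [
    "http://www.cheap-pills.example/buy",
    "http://legitblog.example/post/123",
]
clean = [r for r in referrals if not is_spam_referral(r)]
```

A blocklist is a losing battle against spammers who rotate domains, but it can at least make a daily referral report readable again.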

I wouldn’t mind some input from the spammers themselves - how effective is referral spamming? What kind of a return do you get on it?

At any rate, back to your regularly scheduled programming…
