Win £10,000 in a Free to Enter and Play Stockmarket Trading Game.

I have started playing “the Big Deal” stockmarket trading game. Come and join in for a chance to win some of the £30,000 prize fund!

* Free to join and play
* £10,000 for the winning individual
* £10,000 for the winning team
* £10,000 in other prizes along the way.


The game is run by a big online trading company – so you buy in “virtual” currency, but it is based on real-time share dealing!

As I’ll also get an award for signing a few of you up, come and join me!

[si-contact-form form=’1′]

Just fill in the form above and I’ll send over the invitation. Please use the email address you plan to sign up with. If you don’t join, the email address will be discarded.

Has Twitter peaked for SEOs?

I recall Rand Fishkin once saying at an SMX Advanced that “Twitter was not useful as a marketing tool”. It was a long time ago and I hadn’t heard of Twitter at the time, but by the next Pubcon, Vanessa Fox was tweeting from the stage, and Twitter has been a mainstay of communication in the SEO industry ever since. Certainly it was one of the most influential marketing channels for Majestic SEO over the years. But is that time over?

I can say from first hand experience that Twitter has been a foundation stone for  digital marketing for at least 5 years now, as I write in November 2013. But I think there may be change on the horizon and unless Twitter changes with it, they’ll get left behind.

I have always connected different parts of my media strategy together. I initially used Social Oomph – which I still recommend as a cost-effective way of streamlining one’s social media marketing – before upgrading for a while to Hootsuite, which was stronger but a bit too confusing for me… and I don’t like the lack of trend data in their reporting. I have currently settled on Sendible – a British social media monitoring tool which I commend to the house.

I don’t believe I over-use automation. If I do a blog post on Majestic, then systems make sure that I do not forget to mention the blog post on Twitter and LinkedIn. LinkedIn then automatically duplicates the post on Facebook, and before you know it, something like 50,000 SEOs are exposed to the message.

I have mostly avoided talking anything but shop on Twitter, which has helped get me onto all sorts of SEO lists; other SEOs choose to trust certain lists and, in the process, re-tweet much of the content.

But the ease of automation is becoming Twitter’s downfall. Here’s an example from the weekend:

None of the above handles are connected with me. As far as I know, they are real people with real opinions… but that screenshot does look somewhat suspicious. This link goes to a 404… and before I deleted the post it was literally a one-liner saying “newsletter planned”. You see, I am also a great believer in having a marketing “message calendar”. Having the company, the Ambassadors and the social network accounts all carrying a combined, linked message has definitely leveraged my efforts over the years. It’s a bit of an aside, but I have regularly talked about this in the past:

The problem in this instance was that I am always testing new ways to work, and we were looking at using our WordPress back end to manage our messaging. Whilst talking through the plans with Majestic’s Campaigns Manager, I was demonstrating the idea and suggested that the newsletter was due out around the 25th of the month, so we marked that in the calendar to see the dots connected.

When the day came, WordPress published our Placeholder – a one line “post” with no content at all. I guess if I had been thinking clearly on a Sunday afternoon I would have added the content of the newsletter to the post, but instead I took it down as soon as I saw it.

But automation had already kicked in, and the reduced value of Twitter compared to five years ago suddenly became clear. There were already dozens of re-tweets for the post, with almost none of the Tweeters taking the time to read the (lack of) content on the page before Tweeting. I am very grateful for the re-tweets – but we have to ask ourselves whether the value of a re-tweet is little more than the value of a banner ad these days, if all measures of “engagement” are in fact illusory.

Should we be measuring engagement by re-tweets? Especially re-tweets going through third-party link shorteners? I decided to go back to my log files and find out. Over the weekend, apart from this inopportune blog post/Tweet sequence, we also in fact DID send out the monthly newsletter, and of course the sequence resulted in comments on Facebook as well. Twitter didn’t fare well…

Now don’t get me wrong – when Majestic HAS got something to say, then the guys on Twitter are really helpful in promoting the message to the SEO world at the speed of light, but it takes a bit more joined-up thinking than a simple automated Tweet from a social media marketing tool to get real traction. The trick is not the number of re-tweets, but WHO re-tweets.

There Are Better Ways to Measure the Power of a Re-tweet

The person that re-tweets is MUCH more important than the number of re-tweets. It turns out, as well, that the number of followers is also a poor measure. Followers are cheap and sometimes fake. More importantly, even Lady Gaga’s followers may be real, but they are not focused on my world.

But at least we know that Lady Gaga IS influential on Twitter. There are two extremely useful ways to see this:

1: Don’t discard Klout

Klout is probably the best-known social media measurement tool out there. I hear a few negative comments from some SEOs about the score, but it is a big challenge to evaluate the influence of everyone on the internet and give each a score between 0 and 100 instantly – and from what I see, their scores are unerringly accurate considering the scale they need to operate at. They combine several social networks, and the scores are powerful and (for the most part) free.

2: Don’t forget Majestic SEO

Majestic can give you a score for any Twitter profile, LinkedIn profile or any other profile that has a publicly accessible link. Again it is a score between 0 and 100, and it is the “Trust Flow” score that I would urge you to look at. If a person is influential on a given profile, then people will ultimately link to the profile. That increases the Trust Flow, and it turns out to be a very useful way to assess the influence of an individual on any given social network, forum or blog. Now you can compare two profiles thanks to an enterprising developer called John Doyle, which I recommend trying, and if you simply want a list of the top 50,000 most influential Twitter profiles, Majestic recently gave the list away after some research for a Forbes article.

Has Twitter Peaked For SEOs?

So let’s get back to the subject at hand. I do think that SEOs in particular have become largely “link shy” on Twitter. They see something and may re-tweet it, but they are much more likely to re-tweet a link than look at it! I know I have occasionally been guilty of that, although not as much as many. My crime has been Tweeting by Rotation – getting a message out even when the message was created by mistake. At least as a result it has led to some insight into what is happening on Twitter. Hopefully I can take that knowledge and become a better marketer moving forward.

Random Domain Generator for Excel

Ever needed a list of random domains in an Excel spreadsheet? If so, feel free to use mine in this post… just say thanks when you use it in a blog post, please.

Recently I have been seeing innumerable SEO posts that claim to be scientific, then show you data either for a single site and suggest this is fact, or worse – for sites that the author won’t name – and pose it as fact.

If you plan to put up research:

  1. You need to let the research be open to peer scrutiny. The results need to be replicable.
  2. Your research is only good if your data is unbiased. Finding a random set of domains is remarkably hard to do.

I found a site that creates random words, but this is only part of the challenge. I then created a list of valid TLDs. Then I took a random word and attached a random TLD. My TLD list is just the common ones for me – but you can add more. The only thing I suggest is not to pair TLDs from non-English-speaking countries with a list of English random words.

The resulting Excel file shows you an example of creating 100 random domains from a list of 2,700 or so random words. You can always play on the theme if you want – but if you ever see someone using a biased list of sites for some research… please send them over here, to the comments, to get them to do the test again with something approaching objectivity, would you?

Now this is not TOTALLY random, but it should create an unbiased sample set for most SEO testing.
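For readers who would rather script it than use Excel, the same word-plus-TLD pairing can be sketched in a few lines of Python. The word list here is a tiny illustrative sample, not the ~2,700 words in the spreadsheet:

```python
import random

# Illustrative samples only -- swap in your own word and TLD lists.
WORDS = ["marble", "lantern", "quiver", "bramble", "socket",
         "paddle", "glimmer", "tundra", "fresco", "nimbus"]
TLDS = [".com", ".net", ".org", ".co.uk", ".info", ".biz"]

def random_domains(n, words=WORDS, tlds=TLDS, seed=None):
    """Pair a random word with a random TLD, n times."""
    rng = random.Random(seed)
    return [rng.choice(words) + rng.choice(tlds) for _ in range(n)]

print(random_domains(5, seed=42))
```

Passing a seed makes the sample reproducible, which matters if you want others to replicate your research.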

Download the Excel random domain generator file.


A Global Twitter Strategy?

I just noticed Twitter has a load of new (to me) subdomains:

But not (as yet):


So What’s Happening Here?

At the moment, the content on each of these looks identical to me, but Twitter must be planning a roll-out of something which – I can only assume – will mean Twitter streams differ by more factors than just timeliness. Facebook has EdgeRank and LinkedIn has its own algorithm (which SEOs would be wise to analyse). So Twitter may be wanting to shake things up a bit.

How Might That Impact your own International Strategy?

This may or may not put a hole in the strategy I was about to deploy for Majestic, and I would love feedback. As many will know, Majestic is now in three languages and it’s no secret that we have a few other Ambassadors fighting our corner abroad. The blog has now similarly started taking on an international feel, and the result has been @MajesticSEO Tweeting in German one minute and English the next. The Italians want their share and the Brazilians are about to rule the world… so surely I need to adapt the Tweets or alienate the English speakers.

I WAS thinking that I should set up handles on Twitter for each language and… instead of suddenly letting a Tweet slip out on @MajesticSEO in a foreign language, have a planned strategy that Tweets every (say) 19 hours saying: Follow us in YOUR language: URLA, URLB, URLC… etc. Thus we move users towards their language over a period of time.
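To see why a 19-hour interval helps, here is a quick Python sketch (the start time is arbitrary): because 19 does not divide 24, each reminder lands at a different time of day, so followers in every timezone eventually see one.

```python
from datetime import datetime, timedelta

def rotation_times(start, interval_hours, count):
    """Posting times spaced a fixed number of hours apart."""
    return [start + timedelta(hours=interval_hours * i) for i in range(count)]

# A 19-hour gap drifts through the day, since 19 does not divide 24.
for t in rotation_times(datetime(2013, 11, 1, 9, 0), 19, 5):
    print(t.strftime("%a %d %b %H:%M"))
```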

Anyone got any thoughts or success stories on Tweeting multilingually? I don’t think Americans will warm to non-English Tweets over time.



I was honoured last night. Literally!

UK Search Personality of the Year Award

I was given what I consider to be a great honour – UK Search Personality of the Year. Now as women will testify, I think my “personality” has limitations in many areas – but when it comes to search, I do seem to get around! I absolutely had no idea that I had even been nominated, so it was fortunate that I was still just about sober enough at the Black Tie event to collect the award! Thank you truly to the UK Search industry for the recognition, it really is a pinnacle of success for me to date. I know it is in some ways a closed industry – but anyone is welcome and there was a huge attendance at the awards.

I can also now hold my head up high when I am with Alex – the founder of Majestic SEO – who won the European Search Personality of the Year earlier in 2012. Majestic SEO has pretty much had a clean sweep and the agency I helped found in 1999 – Receptional – has also done well:

More Search Awards

The four awards above are:

  • UK Search Personality of the Year 2012: Dixon Jones (MajesticSEO / Receptional)
  • European Search Personality of the Year 2012: Alex Chunovsky (MajesticSEO)
  • European Best SEO Software 2012: Majestic SEO
  • UK Best SEO Software 2012: Majestic SEO / Receptional
Before all these, the best thing I ever won in search was a bath tub foot from Vintage Bath and Tub:


Thank you everyone… here are just a FEW of the many tweets…

Google, Education and Mediocrity

This evening, I searched for “how to take your GCSE exams early”.

My daughter is 12. I know that GCSEs are designed for 16 year olds, but she is ambitious and inquisitive. She wants to know her options. For those outside the UK… GCSEs are exams kids normally take at 16. (The rest of this post is country agnostic.)

I was surprised to find Google’s responses pandered to what can only be described as mediocrity – a trait which I felt Google has strived to avoid at all costs. The overwhelming message in the results was that taking GCSEs early is detrimental for children.

This is concerning, on so many levels. It would seem that all of the results stem from a single Ofsted report which starts by saying that bright children have been successfully taking GCSEs early for years, but recent changes in behaviour are creating slightly below average results. Looking at the underlying data, I think that this ignores the aspirations of the child and their maturity in seeing that, quite frankly, they are able to retake if they do just “OK” and why shouldn’t they? But educational aspirations aside, Google has failed on so many levels on this one:

Where is the answer?

I queried “how to take your GCSE exams early”. Telling me that doing so is bad for my (or my daughter’s) health is certainly not answering my query. It is indoctrinating me and answering a completely different question – and if I were a cynic (and I am) I would say the results on the right are deflecting my question. The results say:

1: Children lose out by doing what you have asked

2: Duplicate: Doing GCSEs early is bad for you

3: Why would you want to do what you asked?

4: No. We won’t let you do that (in French)

5: Why would you do what you asked? (Dupe?)

6: You are about to damage your child

7:  You are about to damage your child

8: Bright children lose out (actually… the report did not exactly say that).

9: Ofsted study referenced again

10: The actual study which the other results refer to.

Those are the results I see in the image on the right. They seem to try to affect my opinion rather than resolve my query. If I were Google, I would worry about this result, because it doesn’t answer the question. But worse – it tries to indoctrinate me into NOT trying to excel.

Would Google be happy with Mediocrity?

I doubt it. Google is anything BUT a mediocre company. I admire them more than any company on earth except the guys at Majestic (see… no link). For Google to be seen to subvert what is, after all, an entirely aspirational query should send alarm bells through the hallowed halls of Mountain View… or at least get the volleyball players off the pitch and back into the spam team room… because this result is the most insidious form of spam. The results are clearly wrong… but they are the Guardian, the BBC and the Independent. Are we to be dragged into the Fourth Estate by the very organisation that was looking to change the paradigm? And worse… are we being asked by Google to tell our kids to just give up?

Why didn’t Google get it right?

This is where it gets murkier. Earlier this year, I suspect that Google would have pretty much got this result right. Surely it is not beyond Google’s ability to answer this question. “How to” is a pretty clear signal in the search query that the user wants an answer, not an opinion. Over the last year or two, Google has fought back against companies that have tried to manipulate its search rankings. On the whole they have done well, but they have – at scale – devalued websites that do not come up to their mathematical grade. In the process they have clearly damaged their objective, which must remain to harness the world’s information: instead of results being influenced by spammers, results are getting influenced by authority. In this case, it is not for the best.

And in the Meantime

Whilst I was writing this, my 12-year-old downloaded an app which gave sample GCSE questions. She is aghast that these are the questions that 16-year-olds are being asked. She looked at me when she reached this question:

2(x+2) =………

The answer was multiple choice by the way! (If you think that’s hard, the answer was 2x+4).

So… anyone want to offer my 12-year-old a bursary? Because I don’t believe I’ll be doing my child any favours by listening to the results on Google…

One last question…

My child is in the last year of a comprehensive middle school, aged 12. This week she was berated by her English teacher when her interpretation of “clear bullet points” was to number them. She wants to achieve. She wants to do well. Does anyone know the answer to this question:

“how to take your GCSE exams early”

Answers by anything but Gmail please.


Flow Metrics vs Moz Metrics vs Page Rank

MajesticSEO have launched a new set of link metrics, based on iterative flow of data through links. I know I am not fooling anyone into thinking I am impartial – I am Majestic SEO’s marketing director – but I’m also a MozPro user as well.

So I asked one of our team to go to a random Wikipedia article, take every third word, put the word into Google and select the third result 50 times. I then asked him to record the following data for every URL in his list, so we could all start comparing the new metrics with Moz metrics and with Page Rank. Here’s the resulting list with the following:

Correlation Tables with Page Rank summary:


Domain Authority: 0.787
MozTrust: 0.119
MozRank: 0.014
Citation Flow: 0.814
Trust Flow: 0.746


The strongest correlation with Page Rank in our study is Citation Flow, even though Citation Flow does not try to emulate Page Rank. Domain Authority comes in a close second, but at a URL level there really is no competition. Trust Flow is not aiming to correlate with Page Rank, but it is interesting to look for yourself at sites with a high Domain Authority or Citation Flow but a low Trust Flow. Trust Flow is something new – and very enlightening.
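For anyone who wants to re-run the comparison on their own URL list, a Pearson correlation is simple to compute by hand in Python. The numbers below are made-up illustrative scores, not the study’s actual data:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative scores only -- substitute the real columns from the spreadsheet.
pagerank      = [6, 5, 7, 4, 3, 6, 5, 2]
citation_flow = [55, 48, 62, 40, 31, 58, 47, 25]

print(round(pearson(pagerank, citation_flow), 3))
```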

(Download the full URL list in this Word document)

In actual fact – Citation flow is in several regards a stronger metric than Page Rank because:

  1. It updates daily – not once in a blue moon
  2. It is “pure” in that it is not affected by manual penalties in Google
  3. It can be calculated at the URL, subdomain and root levels – whereas Page Rank is only per page.
  4. Links are not created “equal”: page strength flows over multiple iterations, so stronger pages pass more.


Here it is in Excel, with Page Authority added: page rank comparison.

Google Search Quality team being transparent

I must say that I have been hugely encouraged by Google’s drive towards a more open communication with the Webmaster community recently. Their monthly search quality briefings and their decision to start encouraging users in Webmaster Central to set up email alerts are really helpful. In fact – so is the whole “Inside Search Quality” blog.

Today I saw that they had a video of a search quality meeting. It was looking at autocorrection on 10-word phrases (I guess that would be called a decagram). It shows the immense level of detail that goes into algorithm changes.

This move towards proactive transparency is great. It really starts to show that there is SO much “white hat” stuff to get stuck into when optimizing a site that you probably shouldn’t start thinking about the less legitimate stuff for quite some time yet. I am hopeful that this goes some way towards putting clear water between professionals in the industry and dabblers… whereas before, I would say there was at best a murky puddle between the two camps. There is SO much we can learn from these briefings that you just don’t have time to do this in your “free time”.

Right… where’s that Rel=author button…


Ten Link building tests you can try in a single post.

Last month, Google said they changed “something” to do with links. To be exact, they “switched something off”. Now – I’m pretty confident that the changes just around the corner will be hugely more significant, but in the meantime I thought I would do a post that shows you several ways to test theories about links in Google for yourself… or just see what happens to my tests.

Test 1:

Have a link in your post with a highly irregular anchor combination pointing to a page that you have no interest in that has absolutely no relevance to one or more of the words, and no earthly reason for that page to rank for the anchor text term and see if… after a few weeks… the page ranks in the SERPS.

Test 2:

Have a link in your post with a highly irregular anchor combination pointing to a page that you have no interest in that might have SOME relevance to one or more of the words, but no earthly reason for that page to rank for the anchor text term and see if… after a few weeks… the page ranks in the SERPS.

Do you spot the difference between test one and test two?

Test 3:

Have a look for a page you have no interest in that lingers on the second page of the SERPS for some of the words in your page title (like “link building post tests” at 20 without the quotes) and use a nondescript anchor text to see if – after a few weeks – that page moves up or down. It helps if you choose a search phrase which does not invoke QDF, News, Places, Images or any other results. This test will need replicating several times before you can be confident, because many other factors can change the position of a site that already ranks for a page.

Test 4:

Can’t tell you about test 4…

Test 5:

Actually Tezt 4 iz here

OK – that image should say “improve” not “discover”… I can’t find a page without Google knowing I found it without way more paranoia than I currently can lay my hands on. The one in the link was 10 for  without quotes when I looked. Oh… yes… that text right there in the line above?… that’s in an image for a reason.

Test 6:

tezt funfen (excuse French). This one has the attribute:

Test 7:

Hey guys – can you press this link and mention this post on Google+? Let’s see if we can’t get a few “ripples”? Links are not all about rankings. They are about connections and relationships. If this post is giving you some ideas on how to test theories for yourself, then please pass the post on. Then – in the comments in a few weeks – I can tell you what traffic came to this page from Google+ and also see if anyone’s picture appears in the serps under this post. If it does, then we will be able to say that +1s do indeed affect SERPs – at least for friends of people doing the +1 ing.

Test 8:

Because this post is going to get Tweeted (at least a bit) I can’t really do too many tests on Twitter. However, by using in the Twitter link, then even though Twitter wraps the link in a link, you will be able to see the stats from people linking to this post as the direct effect of my Twitter links here.

Test 9:

You can replicate test 1 with a nofollow link, using a similarly awkward and unreliable phrase pointing to a similarly obscure URL, and see if you can make a difference. If you can, then – quite possibly – Google has stopped taking any notice of nofollows. That would be a surprise, given that they pushed for the attribute in the first place, but personally I feel that nofollows never had the effect they were intended to have.

Test 10:

Of course – this post is full of tests. But the purists amongst you will recognize that the strongest tests are not carried out in such an exposed environment, and also follow this pattern:

Hypothesis: “I think that the First Tuesday in the month always ends up on the same date”

Then try to disprove the hypothesis. This is a much better way of approaching testing, because it is MUCH easier to DISPROVE something than to PROVE it. Proving that the first Tuesday will always be on the same date is pretty hard. But disproving it is much easier. (See what I did there? Changed the paradigm.)
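The first-Tuesday hypothesis makes a tidy worked example of the disprove-don’t-prove approach. In Python: collect the first Tuesday’s date for every month of a year and see whether more than one distinct date appears – a single counterexample is enough.

```python
import calendar
from datetime import date

def first_tuesday(year, month):
    """Day-of-month of the first Tuesday in the given month."""
    for day in range(1, 8):
        if date(year, month, day).weekday() == calendar.TUESDAY:
            return day

# One year's worth of first Tuesdays; more than one distinct
# date is enough to disprove the hypothesis.
dates = {first_tuesday(2013, m) for m in range(1, 13)}
print(sorted(dates))
```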

I’ll leave the comment links in – but only if they don’t have any commercial intent whatsoever. Save that comment spam for another post please.

Some stuff about Google’s Crawler

Pierre Far is a Googler. I expect he’d appreciate that I pointed him out on G+. He spoke a bit at ThinkVisibility about the crawler and some of the issues that face the whole information gathering and retrieval process. His pictures weren’t as pretty as the “How Majestic Works” infographic, but there was some useful substance in there.

For example: did you know that Google only checks robots.txt about once per day, to help keep the load off your server? And that having a +1 button on your site can override robots.txt? These are some of the things that he brought up in his very interesting presentation. I made some notes as I went along. I hope they are legible…

Google sets a conservative crawl rate per server. So too many domains or URLs will reduce crawl rate per URL

If you use shared hosting, then this could easily be problematic for you. If you do not know how many other websites are on the same IP number as you, then you may be surprised. You can easily check this by putting your domain or IP number into Majestic’s neighbourhood checker to see how many other websites we saw on the same IP number. This site is currently on a server with 10 sites. But there could be hundreds. More importantly, if one site has a massive number of URLs… and it is not yours… then you could be losing crawl opportunities, just because there’s a big site that isn’t connected to you in any way on the same IP number. You can’t really go complaining to Google about this. You bought the cheap hosting, and this is one of the sacrifices you made.

If a CMS has huge duplication, Google knows, and this is how it notifies you of duplicates in WMT.

This is interesting, because it is more efficient to realize a site has duplicate URLs at this point than after Google has had to analyze all the data and deduplicate on your behalf.

Google then picks URLs in a chosen order

I asked Pierre what factors affected which URLs were selected. In truth, I asked whether deep links to URLs were likely to give those URLs a higher crawl rate than other pages. Of course I believe deep links will change this priority, but I had to ask. I was just given:

Change Rate of page content will change this.

Which is not quite what I asked – but nice to know.

Google checks Robots.txt about once per day. Not every visit.

This was interesting to me. Majestic checks more often, and you would be surprised at how much simply checking robots.txt annoys some people. Maybe less is more.

Google then crawls the URLs and sends feedback to scheduler.

If a server spikes with 500 errors, Googlebot backs off. Also (as with Majestic), firewalls etc. can block the bot. This can – after a few days – create a state in Google that says the site is dead. The jQuery blog had this issue.

If there is a 503 error on robots.txt, they stop crawling.

OK. Don’t do that then 🙂

Biggest and smallest ISPs can block Googlebot at the ISP level.

It was good to see that other crawlers face this issue. Because ISPs need to protect their bandwidth, the fact that you want Google to visit your site does not necessarily mean it will be so. Firewalls at the ISP may block bots even before they see your home page. They may (more likely) start throttling bots. So if your pages are taking a long time to get indexed, this may be a factor.

Strong recommendation – set up email notifications in Webmaster Tools.

Pierre did not understand why we were not all doing this. If Google has crawling errors – or other things that they would like to warn us about – then an email notification trumps waiting for us to log back in to Webmaster Tools. I’ll be setting mine up right after this post.

Google is getting better and better at seeing .js files.

At least – I think that’s what he said.

Soft error pages create an issue and so Google tries hard to detect those.

If they can’t, the soft error ends up using a crawl slot (at the expense of another URL crawl, maybe). If you don’t know what a soft error is: it is when an error page returns a 200 response instead of a 400-range (usually 404) response. You can “ping” a random non-existent URL on your site to check this, using Receptional’s free HTTP header checker if you want.
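A minimal soft-404 probe can also be sketched with Python’s standard library. Here example.com stands in for whatever site you want to test, and the live request is left commented out:

```python
import random
import string
import urllib.error
import urllib.request

def random_missing_url(site):
    """A URL on the site that almost certainly does not exist."""
    slug = "".join(random.choices(string.ascii_lowercase, k=24))
    return site.rstrip("/") + "/" + slug

def status_of(url):
    """HTTP status code returned for a GET request to url."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

def is_soft_error(status):
    """A page that should not exist but answers 200 is a soft error."""
    return status == 200

# Live check (uncomment to run against a real site):
# url = random_missing_url("https://example.com")
# print(url, status_of(url), is_soft_error(status_of(url)))
```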

Google then analyses the content. If it is no index, then that’s it.

There was a question from the audience: “Is Google keeping up with the growth of the web?” Pierre likes to think they are, but admitted it was hard to tell.

Serving the data back to you:

Google receives your incoming query and searches the Index.

Err – yes. Google does not try to scan the whole web in real time. Non-techies don’t realize this it seems.

Magic produces ordered links.

No questions allowed on the magic!

On displaying result, Google needs to:

  • Pick a URL
  • Pick a title: usually the title tag, sometimes changed based on the user query. This is win-win for everyone.
  • Generate a snippet: it will be created from stuff on the page, but he strongly recommends using rich snippets.
  • Generate site-links: whether these appear depends on the query and the result. If you see a bad site-link issue (wrong link), check for a canonicalisation issue.

A +1 button can override Robots.txt, on the basis that it is a stronger signal than Robots.txt.

Question from the audience: “Why are rich snippets so volatile in how they display?” Google has noticed people spamming rich snippets recently, so he said maybe that was a reason for increased testing.

Pierre was completely unable to talk about using +1 as a ranking signal. (whether by policy or because it was not his part of the ship)

Q: “How can we prioritize the crawl to get new content spidered?” A: Pierre threw it back. Do some simple maths: 1 URL/second is 86,400 per day. Google is unlikely to hit your site continually for 24 hours, so large amounts of new content can take time to crawl.

Q: “What error message should you use if your site comes offline for a while?” A: 503, but be careful if only some of your site is offline not to serve a 503 on robots.txt.

OK – that was about it. Thanks Pierre for the help.

Oh – nearly forgot – Pierre would like to point out that all this is in the Google Webmaster Documentation.