Basic search engine optimization (SEO) is fundamental. And essential. SEO will help you position your website properly to be found at the most critical points in the buying process or when people need your site.
What are search engines looking for? How can you build your website in a way that will please both your visitors/customers, as well as Google, Bing, and other search engines? Most importantly, how can SEO help your web presence become more profitable?
During the Introduction to SEO session at SES New York, Carolyn Shelby (@CShel), Director of SEO, Chicago Tribune/435 Digital, fully explained the extreme value SEO can deliver to a site, and stressed the importance of basic SEO using the following analogy:
"Skipping the basics and spending all your time and money on social and 'fancy stuff' is the same as skipping brushing your teeth and showering, but buying white strips and wearing expensive cologne," Shelby said.
Although the Introduction to SEO session was intended for industry newcomers, Shelby's tips offer important reminders for even experienced SEO professionals who have been optimizing sites for years.
SEO, the art and science of ranking well in search engines, is one of those things that is easy to learn but hard to master, so let’s focus on the easy part. You’ve got a website and it’s not ranking so well in Google for whatever search term you covet. So what do you do?
The goal of foundational SEO isn't to cheat or "game" the search engines. The purpose of SEO is to create a great, seamless user experience, and to communicate your intentions to the search engines so they can recommend your website for relevant searches.
Main Process
The process can be successfully practiced in a bedroom or a workplace, but it has traditionally involved mastering many skills as they arose, including but not limited to diverse marketing technologies.
Like the title element and unlike the meta keywords tag, the meta description is important, both from a human and a search engine perspective.
<meta name="Description" content="Get your site on the first page of Google, Yahoo and Bing. Call us on 0845 094 0839. A company based in Scotland." />
Forget whether or not to put your keyword in it; make it relevant to a searcher and write it for humans, not search engines. If you want Google to display this 20-word snippet, which accurately describes the page you have optimised, when people search for your one or two keyword phrases, make sure the keyword is in there.
I must say, I normally do include the keyword in the description, as this usually gets it into your SERP snippet. It would be a fair guess that more trusted sites benefit more from any boost a keyword in the meta description tag might give than an untrusted site would.
Google looks at the description but there is debate whether it actually uses the description tag to rank sites. I think they might at some level, but again, a very weak signal. I certainly don’t know of an example that clearly shows a meta description helping a page rank.
I can’t find any definitive proof online that says you need to use Heading Tags (H1, H2, H3, H4, H5, H6) or that they improve rankings in Google, and I have seen pages do well in Google without them – but I do use them, especially the H1 tag on the page.
For me it’s another piece of a ‘perfect’ page, in the traditional sense, and I try to build a site for Google and humans.
<h1>This is a page title</h1>
I still generally only use one <h1> heading tag in my keyword targeted pages – I believe this is the way the W3C intended it to be used in HTML4 – and I ensure they appear at the top of a page above relevant page text, written with my main keywords or keyword phrases incorporated.
I have never experienced any problems using CSS to control the appearance of the heading tags making them larger or smaller.
You can use multiple H1s in HTML5, but most sites I work on still use HTML4.
I use as many H2 – H6 as is necessary depending on the size of the page, but generally I use H1, H2 & H3. You can see here how to use header tags properly (basically just be consistent, whatever you do, to give your users the best user experience).
How many words in the H1 Tag? As many as I think is sensible – as short and snappy as possible usually.
I also discovered Google will use your Header tags as page titles at some level if your title element is malformed.
As always be sure to make your heading tags highly relevant to the content on that page and not too spammy, either.
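As a sketch, here is a page marked up the way described above – one H1 at the top, with H2s and H3s structuring the rest of the copy (the headings themselves are hypothetical):

<h1>SEO Tutorial for Beginners</h1>
<h2>Keyword Research</h2>
<h3>Finding Long-Tail Phrases</h3>
<h2>On-Page Optimisation</h2>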
Instead of thinking about the quantity of the text, you should think more about the quality of the content on the page. Optimise this with searcher intent in mind. Well, that’s how I do it.
I don’t find that you need a minimum amount of words or text to rank in Google. I have seen pages with 50 words outrank pages with 100, 250, 500 or 1,000 words. Then again, I have seen pages with no text rank on nothing but inbound links or another ‘strategy’. In 2015, Google is a lot better at hiding away those pages, though.
At the moment, I prefer long-form pages with a lot of text, although I still rely heavily on keyword analysis to make my pages. The benefit of longer pages is that they are great for long-tail key phrases. Creating deep, information-rich pages really focuses the mind when it comes to producing authoritative, useful content.
Every site is different. Some pages, for example, can get away with 50 words because of a good link profile and the domain it is hosted on. For me, the important thing is to make a page relevant to a user’s search query.
I don’t care how many words I achieve this with and often I need to experiment on a site I am unfamiliar with. After a while, you get an idea how much text you need to use to get a page on a certain domain into Google.
One thing to note – the more unique, keyword-rich and relevant text you add to a page, the more that page will be rewarded with visitors from Google.
There is no optimal number of words on a page for placement in Google. Every website – every page – is different from what I can see. Don’t worry too much about word count if your content is original and informative. Google will probably reward you on some level – at some point – if there is lots of unique text on all your pages.
There is no one-size-fits-all keyword density, no optimal percentage guaranteed to rank any page at number 1. However, I do know you can keyword stuff a page and trip a spam filter.
Most web optimisation professionals agree there is no ideal percentage of keywords in text to get a page to number 1 in Google. Search engines are not that easy to fool, although the key to success in many fields is doing simple things well (or at least better than the competition).
I write natural page copy where possible, always focused on the key terms – I never calculate density to identify the best percentage – there are way too many other things to work on. I have looked into this: if it looks natural, it’s OK with me.
ALT tags are very important and I think a very rewarding area to get right. I always put the main keyword in an ALT once when addressing a page.
Don’t optimise your ALT tags (or rather, attributes) JUST for Google!
Use ALT tags (or rather, ALT Attributes) for descriptive text that helps visitors – and keep them unique where possible, like you do with your titles and meta descriptions.
Don’t obsess. Don’t optimise your ALT tags just for Google – do it for humans, for accessibility and usability. If you are interested, I ran a simple test using ALT attributes to determine how many words I could use in IMAGE ALT text that Google would pick up.
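For example, a descriptive ALT attribute written for visitors first, with the page’s main keyword used once (the filename and wording here are hypothetical):

<img src="seo-audit-checklist.png" alt="Printable SEO audit checklist for small business websites">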
Clean URLs (or search engine friendly urls) are just that – clean, easy to read, simple.
You do not need clean urls in a site architecture for Google to spider a site successfully (confirmed by Google in 2008), although I do use clean urls as a default these days, and have done so for years.
It’s often more usable.
Is there a massive difference in Google when you use clean urls?
No, in my experience it’s very much a second- or third-order effect, perhaps even less, if used on its own. However, there is a demonstrable benefit to having keywords in URLs.
The thinking is that you might get a boost in Google SERPS if your URLs are clean – because you are using keywords in the actual page name instead of a parameter or session ID number (which Google often struggles with).
I think Google might reward the page some sort of relevance because of the actual file / page name. I optimise as if they do.
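To illustrate the difference, here is a hypothetical dynamic URL and its clean equivalent:

http://www.example.com/page.php?id=324&sessionid=XYZ123
http://www.example.com/seo-tutorial/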
It is virtually impossible to isolate any ranking factor with a degree of certainty.
Where any benefit is slightly detectable is when people (say in forums) link to your site with the url as the link.
Then it is fair to say you do get a boost because keywords are in the actual anchor text link to your site, and I believe this is the case – but again, that depends on the quality of the page linking to your site, i.e. whether Google trusts it and it passes PageRank (!) and anchor text relevance.
And of course, you’ll need citable content on that site of yours.
Sometimes I will remove the stop-words from a url and leave the important keywords as the page title because a lot of forums garble a url to shorten it. Sometimes I will not – and prefer to see the exact phrase I am targeting as the name of the url I am asking Google to rank.
I configure URLs the following way: short, readable and focused on the important keywords.
Although Googlebot can crawl sites with dynamic URLs, many webmasters assume there is a greater risk it will give up if the URLs are deemed unimportant and contain multiple variables and session IDs (this remains a theory).
As standard, I use clean URLS where possible on new sites these days, and try to keep the URLs as simple as possible and do not obsess about it.
That’s my aim at all times when I optimise a website to work better in Google – simplicity.
Google does look at keywords in the URL, even at a granular level.
Having a keyword in your URL might be the difference between your site ranking and not – potentially useful to take advantage of long tail search queries – for more see Does Google Count A Keyword In The URI (Filename) When Ranking A Page?
As I mentioned in my ALT Tags section, some webmasters claim putting your keywords in bold or putting your keywords in italics is a beneficial ranking factor in terms of search engine optimizing a page.
It is essentially impossible to test this, and I think these days, Google could well be using this (and other easy to identify on page optimisation efforts) to identify what to punish a site for, not promote it in SERPS.
Any item you can ‘optimise’ on your page – Google can use this against you to filter you out of results.
I use bold or italics these days specifically for users.
I only use emphasis if it’s natural or this is really what I want to emphasise!
Do not tell Google what to filter you for that easily.
I think Google treats websites it trusts far differently from untrusted sites in some respects.
Keep it simple, natural, useful and random.
My advice would be to keep it consistent whatever you decide to use.
I prefer absolute urls. That’s just a preference. Google will crawl either if the local setup is correctly developed.
Relative just means relative to the document the link is on.
Move that page to another site and it won’t work.
With an absolute URL, it would work.
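A quick illustration with a hypothetical page:

<!-- Relative: resolves against the location of the current document -->
<a href="../services/seo.html">Our SEO services</a>
<!-- Absolute: still points to the right place if the page is moved or copied -->
<a href="http://www.example.com/services/seo.html">Our SEO services</a>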
Sometimes I use subfolders and sometimes I use files. I have not been able to decide if there is any real benefit (in terms of ranking boost) to using either. A lot of CMS these days use subfolders in their file path, so I am pretty confident Google can deal with either.
I used to prefer files like .html when I was building a new site from scratch, as they were the ’end of the line’ for search engines, as I imagined it, and a subfolder (or directory) was a collection of pages.
I used to think it could take more to get a subfolder trusted than, say, an individual file, and I guess this swayed me to use files on most websites I created (back in the day). Once subfolders are trusted, it’s six of one, half a dozen of the other – rankings in Google are usually determined more by how RELEVANT or REPUTABLE a page is to a query.
In the past, subfolders could be treated differently than files (in my experience).
Subfolders can be trusted less than other subfolders or pages in your site, or ignored entirely. Subfolders *used to seem to me* to take a little longer to get indexed by Google than, for instance, .html pages.
People talk about trusted domains, but they don’t mention (or don’t consider) that some parts of a domain can be trusted less. Google treats some subfolders differently. Well, they used to – and remembering how Google used to handle things has some benefits – even in 2015.
Some say don’t go beyond 4 levels of folders in your file path. I haven’t experienced too many issues, but you never know.
UPDATED – I think in 2015 it’s even less of something to worry about. There are far more important elements to check.
Google doesn’t care. As long as it renders as a browser compatible document, it appears Google can read it these days.
I prefer PHP these days, even with flat documents, as it is easier to add server-side code to the document if I want to add some sort of function to the site.
Does Google rank a page higher because of valid code? The short answer is no, although a small-scale test I ran produced mixed results.
Google doesn’t care if your page is valid HTML and valid CSS. This is clear – check any top ten results in Google and you will probably see that most contain invalid HTML or CSS. I love creating accessible websites, but they are a bit of a pain to manage when you have multiple authors or developers on a site.
If your site is so badly designed with a lot of invalid code even Google and browsers cannot read it, then you have a problem.
Where possible, if commissioning a new website, demand at least minimum web accessibility compliance (there are three priority levels to meet) and aim for valid HTML and CSS. This is actually the law in some countries, although you would not know it, so be prepared to put in a bit of work to keep your site compliant.
Valid HTML and CSS are a pillar of best practice website optimisation, not strictly a part of professional search engine optimisation. It is one form of optimisation Google will not penalise you for.
Addition – I usually still aim to follow W3C recommendations that actually help deliver a better user experience;
“Hypertext links. Use text that makes sense when read out of context.” – W3C Top Ten Accessibility Tips
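For example (URL hypothetical), link text that fails and passes that test:

<!-- Means nothing when read out of context -->
<a href="/seo-guide.html">Click here</a>
<!-- Makes sense on its own -->
<a href="/seo-guide.html">Read the beginner's guide to SEO</a>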
Rather than tell Google via a 404 or some other command that this page isn’t here any more, consider permanently redirecting a page to a relatively similar page to pool any link equity that page might have.
My general rule of thumb is to make sure the information (and keywords) are contained in the new page – stay on the safe side.
Most already know the power of a 301 redirect and how you can use it to power even totally unrelated pages to the top of Google for a time – sometimes a very long time.
Google seems to think server side redirects are OK – so I use them.
You can change the focus of a redirect but that’s a bit black hat for me and can be abused – I don’t really talk about that sort of thing on this blog. But it’s worth knowing – you need to keep these redirects in place in your htaccess file.
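For example, a single permanent redirect in .htaccess on Apache is one line – a minimal sketch, assuming mod_alias is available, with hypothetical paths:

Redirect 301 /old-page.html http://www.example.com/new-page.html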
Redirecting multiple old pages to one new page – works for me, if the information is there on the new page that ranked the old page.
NOTE – This tactic is being heavily spammed in 2015. Be careful with redirects. I think I have seen penalties transferred via 301s. I also WOULDN’T REDIRECT 301s blindly to your home page. I’d also be careful of redirecting lots of low-quality links to one URL. If you need a page to redirect old URLs to, consider your sitemap or contact page. Audit any page’s backlinks BEFORE you redirect it to an important page.
I’m seeing CANONICALS work just the same as 301s in 2015 – though they seem to take a little longer to have an impact.
Hint – a good tactic at the moment is to CONSOLIDATE old, thin under performing articles Google ignores, into bigger, better quality articles.
I usually then 301 all the pages to a single source to consolidate link equity and content equity. As long as the intention is to serve users and create something more up-to-date – Google is fine with this.
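A sketch of that consolidation in the same .htaccess terms, with several thin posts pointing at the one rewritten article (all URLs hypothetical):

Redirect 301 /2009/thin-article-1.html http://www.example.com/definitive-guide.html
Redirect 301 /2010/thin-article-2.html http://www.example.com/definitive-guide.html
Redirect 301 /2011/thin-article-3.html http://www.example.com/definitive-guide.html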
Webmasters are often confused about getting penalised for duplicate content, which is a natural part of the web landscape, especially at a time when Google claims there is NO duplicate content penalty.
The reality in 2015 is that if Google classifies your duplicate content as THIN content, then you DO have a very serious problem that violates Google’s website performance recommendations, and this ‘violation’ will need to be cleaned up.
Duplicate content generally refers to substantive blocks of content within or across domains that either completely match other content or are appreciably similar. Mostly, this is not deceptive in origin…
It’s very important to understand that if, in 2015, as a webmaster you republish posts, press releases, news stories or product descriptions found on other sites, then your pages are very definitely going to struggle to gain traction in Google’s SERPs (search engine results pages).
Google doesn’t like using the word ‘penalty’, but if your entire site is made entirely of republished content – Google does not want to rank it. If you have a multiple-site strategy selling the same products – you are probably going to cannibalise your own traffic in the long run, rather than dominate a niche as you used to be able to do.
This is all down to how the search engine deals with duplicate content found on other sites – and the experience Google aims to deliver for its users – and its competitors.
Mess up with duplicate content on a website, and it might look like a penalty, as the end result is the same – important pages that once ranked might not rank again – and new content might not get crawled as fast as a result.
Your website might even get a ‘manual action’ for thin content. Worst-case scenario, your website is hit by the GOOGLE PANDA algorithm.
A good rule of thumb is do NOT expect to rank high in Google with content found on other, more trusted sites, and don’t expect to rank at all if all you are using is automatically generated pages with no ‘value add’.
See my latest post on Google Advice on Duplicate Content.
The simplest piece of advice I ever read about creating a website / optimising a website was years ago and it is still useful today:
make sure all your pages link to at least one other in your site
This advice is still sound today and, in my opinion, the most important piece of advice out there. Yes, it’s so simple it’s stupid.
Check your pages for broken links. Seriously, broken links are a waste of link power and could hurt your site, drastically in some cases. Google is a link based search engine – if your links are broken and your site is chock full of 404s you might not be at the races.
Here’s the second best piece of advice in my opinion seeing as we are just about talking about website architecture;
link to your important pages often internally, with varying anchor text in the navigation and in page text content
…. especially if you do not have a lot of Pagerank to begin with!
What is an XML sitemap and do I need one to SEO my site for Google?
The XML Sitemap protocol has wide adoption, including support from Google, Yahoo!, and Microsoft.
No. You do NOT, technically, need an XML Sitemap to optimise a site for Google if you have a sensible navigation system that Google can crawl and index easily. HOWEVER – in 2015 – you should have a Content Management System that produces one as a best practice – and you should submit that sitemap to Google in Google Webmaster Tools. Again – best practice. Google has said very recently that XML and RSS are still very useful discovery methods for them to pick out recently updated content on your site.
An XML Sitemap is a file on your server with which you can help Google easily crawl and index all the pages on your site. This is evidently useful for very large sites that publish lots of new content or update content regularly.
Your web pages will still get into the search results without an XML sitemap if Google can find them by crawling your website.
Remember – Google needs links to find all the pages on your site, and links spread PageRank, which helps pages rank – so an XML sitemap is not quite a substitute for a great website architecture.
Sitemaps are an easy way for webmasters to inform search engines about pages on their sites that are available for crawling. In its simplest form, a Sitemap is an XML file that lists URLs for a site along with additional metadata about each URL (when it was last updated, how often it usually changes, and how important it is, relative to other URLs in the site) so that search engines can more intelligently crawl the site.
Most modern CMS auto-generate XML sitemaps, Google does ask you to submit a sitemap in Webmaster Tools, and I do these days.
I prefer to manually define my important pages by links and depth of content, but an XML sitemap is a best practice in 2015 for most sites.
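For reference, a minimal sitemap file following the sitemaps.org protocol might look like this (the URLs and dates are hypothetical; only <loc> is required per URL):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2015-06-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>http://www.example.com/seo-tutorial/</loc>
    <lastmod>2015-06-15</lastmod>
  </url>
</urlset>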
Does the second anchor text link on a page count?
One of the more interesting discussions in the webmaster community of late has been trying to determine which links Google counts as links on pages on your site. Some say the link Google finds higher in the code is the link Google will ‘count’, if there are two links on a page going to the same page.
I tested this (a while ago now) with the post Google Counts The First Internal Link.
For example (and I am talking about internal links here), if you took a page and I placed two links on it, both going to the same page, what would happen? (OK, hardly scientific, but you should get the idea.)
Will Google only ‘count’ the first link? Or will it read the anchor text of both links and give my page the benefit of the text in both, especially if the anchor text is different in each? Will Google ignore the second link?
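To make the test concrete, imagine a page carrying these two internal links to the same URL (URL and anchor text hypothetical):

<a href="/seo-tutorial/">SEO tutorial</a>
<!-- further down the same page -->
<a href="/seo-tutorial/">beginner's guide to search engine optimisation</a>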
What is interesting to me is that knowing this leaves you with a question: if your navigation array has your main pages linked in it, perhaps your in-content links are being ignored, or at least not valued.
I think links in body text are invaluable. Does that mean placing the navigation below the copy to get a wide and varied internal anchor text to a page?
Perhaps.
As I said, I think this is one of the more interesting discussions in the community at the moment, and perhaps Google works differently with internal links as opposed to external links to other websites.
I think quite possibly this could change day to day if Google pressed a button, but I optimise a site on the assumption that only the first link on a page will count – based on what I monitor, although I am still testing this – and in practice I usually only link once from page to page on client sites, unless it’s useful for visitors.
When it comes to Google SEO, the rel=canonical link element has become *VERY* IMPORTANT over the years – and never more so than now.
This element is employed by Google, Bing and other search engines to help them specify the page you want to rank out of duplicate and near duplicate pages found on your site, or on other pages on the web.
Matt Cutts from Google has shared tips on the rel=”canonical” tag (more accurately, the canonical link element) that the three top search engines now support.
Google, Yahoo!, and Microsoft have all agreed to work together in a “joint effort to help reduce duplicate content for larger, more complex sites, and the result is the new Canonical Tag”.
Example Canonical Tag From Google Webmaster Central blog:
<link rel="canonical" href="http://www.example.com/product.php?item=swedish-fish" />
The process is simple. You can put this link tag in the head section of the duplicate content urls, if you think you need it.
I add a self-referring canonical link element as standard these days – to ANY web page.
Is rel=”canonical” a hint or a directive?

It’s a hint that we honor strongly. We’ll take your preference into account, in conjunction with other signals, when calculating the most relevant page to display in search results.

Can I use a relative path to specify the canonical, such as <link rel=”canonical” href=”product.php?item=swedish-fish” />?

Yes, relative paths are recognized as expected with the <link> tag. Also, if you include a <base> link in your document, relative paths will resolve according to the base URL.

Is it okay if the canonical is not an exact duplicate of the content?

We allow slight differences, e.g., in the sort order of a table of products. We also recognize that we may crawl the canonical and the duplicate pages at different points in time, so we may occasionally see different versions of your content. All of that is okay with us.

What if the rel=”canonical” returns a 404?

We’ll continue to index your content and use a heuristic to find a canonical, but we recommend that you specify existent URLs as canonicals.

What if the rel=”canonical” hasn’t yet been indexed?

Like all public content on the web, we strive to discover and crawl a designated canonical URL quickly. As soon as we index it, we’ll immediately reconsider the rel=”canonical” hint.

Can rel=”canonical” be a redirect?

Yes, you can specify a URL that redirects as a canonical URL. Google will then process the redirect as usual and try to index it.

What if I have contradictory rel=”canonical” designations?

Our algorithm is lenient: we can follow canonical chains, but we strongly recommend that you update links to point to a single canonical page to ensure optimal canonicalization results.

Can this link tag be used to suggest a canonical URL on a completely different domain?

Update on 12/17/2009: The answer is yes! We now support a cross-domain rel=”canonical” link element.
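So, for example, a duplicate page on a second domain you own could nominate the original as canonical – here, hypothetically, placed in the head of a page at http://www.example.net/swedish-fish.html:

<link rel="canonical" href="http://www.example.com/product.php?item=swedish-fish" />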
Now that your site complies with the Act – you’ll want to ensure your website never looks obviously out of date.
While you are editing your footer – ensure your copyright notice is dynamic and will change year to year – automatically.
It’s simple to display a dynamic date in your footer in WordPress, for instance, so you never need to change your copyright notice on your blog when the year changes.
This little bit of code will display the current year. Just add it in your theme’s footer.php and you can forget about making sure you don’t look stupid, or give the impression your site is out of date and unused, at the beginning of every year.
© Copyright 2004 - <?php echo date("Y") ?>
A simple and elegant php copyright notice for WordPress blogs.
You can take the information from above and transform it with Schema.org markup to give even more accurate information to search engines.
From this…
<div>
  <p>
    © Copyright 2006-2015 Softron.in, Company No. SC299002 | VAT No. 880 5135 26 <br>
    The Stables, 24 Patrick Street, Greenock, PA16 8NB, Scotland, UK | TEL: 0845 094 0839 | FAX: 0845 868 8946<br>
    Business hours are 09.00 a.m. to 17.00 p.m. Monday to Friday - Local Time is <span id="time">9:44:36</span> (GMT)
  </p>
</div>
to this:
<div>
  <div itemscope="" itemtype="http://schema.org/LocalBusiness">
    © Copyright 2006-2015 <span itemprop="name">Softron.in</span>
    <div itemprop="address" itemscope="" itemtype="http://schema.org/PostalAddress">
      ADDRESS: <span itemprop="streetAddress">24 Patrick Street</span>, <span itemprop="addressLocality">Greenock</span>, <span itemprop="addressRegion">Scotland</span>, <span itemprop="postalCode">PA16 8NB</span>, <span itemprop="addressCountry">GB</span> | TEL: <span itemprop="telephone">0845 094 0839</span> | FAX: <span itemprop="faxNumber">0845 868 8946</span> | EMAIL: <a href="mailto:info@Softron.in" itemprop="email">info@Softron.in</a>.
    </div>
    <span itemprop="geo" itemscope="" itemtype="http://schema.org/GeoCoordinates">
      <meta itemprop="latitude" content="55.9520367">
      <meta itemprop="longitude" content="-4.7667952">
    </span>
    <span>Company No. SC299002</span> | VAT No. <span itemprop="vatID">880 5135 26</span> | Business hours are <time itemprop="openingHours" datetime="Mo,Tu,We,Th,Fr 09:00-17:00">09.00 a.m. to 17.00 p.m. Monday to Friday</time> Local Time is <span id="time">9:46:20</span> (GMT)
  </div>
  <span class="rating-desc" itemscope="" itemtype="http://schema.org/Product">
    <span itemprop="name">Softron.in Web SEO Services</span>
    <span itemprop="aggregateRating" itemscope="" itemtype="http://schema.org/AggregateRating">
      Rated <span itemprop="ratingValue">4.8</span> / 5 based on <span itemprop="reviewCount">6</span> reviews. | <a class="ratings" href="https://plus.google.com/b/113802450121722957804/113802450121722957804/about/p/pub?review=1">Review Us</a>
    </span>
  </span>
</div>
Tip: Note the code near the end of the above example, if you are wondering how to get yellow star ratings in Google results pages.
I got yellow stars in Google within a few days of adding the code to my website template – directly linking my site to information Google already has about my business.
Also – you can modify that link to plus.google.com to link directly to your REVIEWS page on Google Plus to encourage people to review your business.
Now you can have a website footer that helps your business comply with UK Law, is more usable, automatically updates the copyright notice year – and helps your website stick out in Google SERPS.
Search engines want to do their jobs as best as possible by referring users to websites and content that is the most relevant to what the user is looking for. So how is relevancy determined?
Search engine spiders only have a certain amount of data storage, so if you're performing shady tactics or trying to trick them, chances are you're going to hurt yourself in the long run. Items the search engines don't want include keyword stuffing, purchased links, and a poor user experience.
While this is pretty obvious, so many people never sit down and focus on what their main goals are. Some questions you need to ask yourself: What defines a conversion for you? What are your goals? Do you know your assets and liabilities?
Keyword strategy is not only important to implement on-site, but should extend to other off-site platforms, which is why you should also be thinking about multi-channel optimization. These multi-channel platforms include Facebook, Twitter, LinkedIn, email, and even offline channels such as radio and TV ads.
Being consistent with keyword phrases within these platforms will not only help your branding efforts, but also train users to use specific phrases you're optimizing for.
Domain naming is so important to your overall foundation, so as a best practice you're better off using sub-directory root domains (example.com/awesome) versus sub-domains (awesome.example.com). Another best practice is consistency: if your site resolves at both example.com and www.example.com, permanently redirect one to the other, as sketched below.
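A minimal .htaccess sketch of that consistency redirect, assuming Apache with mod_rewrite enabled (example.com is a placeholder):

RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]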
In addition to optimizing for the desktop experience, make sure to focus on mobile and tablet optimization as well as other media.
Every page on your site should have a title tag and a meta description.
Title tags should also be unique! Think of your title as a 4-8 word ad, and do your best to entice the reader so they want to click and read more.
Here are some (hopefully) simple things you can do, or even better tell someone else to do, to get your SEO strategy in gear:
1. Figure Out Your Target Audience
Until you know who you are targeting there is not much point in doing SEO. What words are your potential customers searching with when you want to be found? What different modes are they in when they are searching? Are they ready to buy? Are they just doing research? Are they big spenders or are they cheapskates?

In general, pick terms that match up with your service, that you think will convert well (conversion is another five-minute discussion altogether, btw) and that have good search volume. To get an idea of search volume, use Google's AdWords Keyword Tool, which can be found here:

https://adwords.google.com/select/KeywordToolExternal

Once you come up with your target keyword list…
2. Update Your Page Titles
The page title or "title tag" is perhaps the most important element of SEO. These are the words that appear at the top of your web browser when you are on a page. They are also the words that show up in the blue links in Google.
Put the search terms you are targeting in your page titles. In general keep the titles as brief as possible while at the same time making them appealing to searchers. No easy trick. Put the most important keywords at the beginning of the title. Don’t worry about getting this perfect the first time as these are very easy to change and Google usually reacts to these changes quickly. And if your website developer tells you these are really hard or expensive to change, get a new website developer.
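For instance, a hypothetical title element for a local pizza shop, with the most important keywords at the front:

<title>Best Pizza in Pleasanton | Joe's Pizza</title>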
3. Make Each Page Title Unique
It is also important that all of the pages on your site have unique page titles. A quick way to see if you have more than one page with the same title is to do the following search in Google:

site:yoursite.com intitle:"the words in the title"

The results of this search will show all of the pages in Google that have these words in the title. Once you identify these problem pages you can update the titles to make them unique.
And make sure you add your city name to the titles as a lot of people search for your service in your city.
It also couldn’t hurt if you added some text to the actual page that uses the keywords you are targeting as well, in both the body of the text and the h1 tag, which is typically the headline of the page. If you don’t have a page that targets the keywords you are using, add a new page that does.
You should also check the meta description tags of each page to make sure those are unique as well.
4. Add a Few Internal Links
The number of links a page gets from its own site, and which pages link to it, matters. The home page is the most important page on the site, and so the pages that are linked to from the home page are also important. Figure out which pages you want to rank the most (and don’t say all of them) and add links from other pages to these pages. Make sure you use relevant keywords in the text of those links. For example, if you want to rank the page for “pizza”, use the word “pizza” in the text of the links that go to that page. Try not to use the exact same phrase in each link, to make it look more “natural”. For example, in some of the links use “best pizza” or “man, that’s a helluva pizza”.
5. Add Your Address to Every Page
Ideally every page should have your address and phone number. This is helpful for users but it also reinforces your location to the search engines. If your business has multiple locations then you may want to create a separate page for each location, or at least a single page that lists all locations. Make sure you link to these pages from as many pages as possible on the site. It would probably be a good idea to list as many location names as possible on the home page too.
6. Claim Your Profile on Merchant Circle, Google Local Business Center, Yahoo Local, etc.
There are a huge number of yellow pages-like sites that allow you to update your business information for free. These sites get a lot of traffic and tend to rank well. At the least you should go to each one, claim your profile and make sure they are linking to your site. You may be surprised at how much business you can get from these free listings. Merchant Circle, Google Local Business Center and Yahoo Local are obvious places to start, and many other directories offer a free yellow pages listing.
7. Make a Video
And I am not talking about a multimillion dollar production. Ask your kid to point the camera at you and start talking. Explain your service and try to be charming. Mention your website a lot. Then upload it to YouTube and every other free video site and title the video with your top keywords (e.g. “Best Pizza in Pleasanton”). Make sure your website is linked to from your profile. Then link to these video pages from your site with the keywords in the link text. You will be amazed at how easy it is for these pages to rank for your search terms.
If you want to do something more professional, there are a number of services that can help you, including www.spotzer.com, www.mixpo.com, www.spotmixer.com, and www.turnhere.com.
8. Add a Blog To Your Site
A blog is just a simple way to add pages to your website. A good, or even bad, web developer should be able to set up a simple blog for you in a few minutes. If you don’t want it super customized it shouldn’t cost that much. Once it’s up, start writing. I am not talking novels or even journalism. I am talking keywords. If you want to rank for “Pizza in Pleasanton”, write a blog post called “Pizza in Pleasanton: What’s Cooking Tonight At Joe’s Pizza”. Go to http://blogsearch.google.com/ping and add your blog’s URL to Google’s blog search engine. Now every time you write something on the blog it will instantly be added to Google, and each of those posts has a chance of ranking for the term you are targeting.
9. Make Sure You Don’t Have Any Technical Issues
There are a number of technical issues that could be preventing your site from ranking. An easy way to identify them is to sign up your site to Google Webmaster Tools at www.google.com/webmasters/start. By copying a short line of code to your site you can get an idea of some of the common problems that Google is having with it. Google provides you with some detail about each problem. There is not much you yourself can likely do about these problems, but you can at least show them to your website developer or an SEO and ask him/her to figure it out.
10. Get Links
Now none of this stuff will work very well if you don’t have any links to your site. The big search engines look at links from other sites as a sign of quality and trust. So you should spend the remainder of your five minutes thinking about what other sites you think you can get links from. Here are some of the obvious ones:
- Chambers of commerce/local business groups
- Local business directories/Local newspaper site
- Friends who have sites (including your kid’s blog)
- Partners/Vendors
There are hundreds of other ways to get links like writing articles for other sites, sending out press releases, adding your business info to social media sites, making a fool of yourself in public, etc.
It’s important to understand that SEO is not a one-time thing just like running a TV ad campaign is not a one-time thing. It’s a marketing tactic like any other. And as more people use the Web to find local services, SEO could become one of the more important components of your marketing plan. So get familiar with it today so you can master it tomorrow.