Wednesday, July 18, 2007

5 Tips to Effective SEO Keyword Research Analysis

Keyword research and analysis, when done correctly, can be a daunting task, and expert keyword research is the foundation of a successful SEO campaign. Many new website owners think the keyword research analysis process is easy. They think free tools, such as the Overture Search Term Suggestion Tool, are the profit pill that will bring them instant results.
Unfortunately, the free tools will only give you a rough guide and a quick indication of whether a hunch is worth further research. These free keyword research tools are limited to basic information. When performed correctly, expert keyword research exposes so much more - all the gems that are tucked away deep.
Real keyword research requires research AND analysis. There are so many aspects to the process that cannot be left to chance. Attempting to do the keyword research on your own is like going to a veterinarian to fix your car. My advice to all clients I provide SEO consulting services for is to simply leave this task to the experts who have the correct keyword research tools and expertise.
Following are 5 tips for effective keyword research analysis:
1. Latent Semantic Indexing (LSI) - Use multi-word phrases
Latent Semantic Indexing (LSI) is a vital element in Search Engine Optimization (SEO) for better keyword rankings in search results. LSI is based on the relationship, the "clustering" or positioning, the variations of terms and the iterations of your keyword phrases.
Knowing LSI well - how it can be most useful for your SEO, and the importance it carries in the algorithm updates of search engines like Google, MSN and Yahoo - will benefit your keyword research for best practice SEO.
LSI is NOT new. Those doing keyword research over the years have always known to use synonyms and "long tail" keyword terms, which is a simpler explanation of LSI. More often than not, these long tail, less generic terms bring more traffic to your site than the main keyword phrases. The real bottom line is that Latent Semantic Indexing is currently a MUST in keyword research and SEO.
2. Page Specific Keyword Research - Target your niche keyword phrases for each site page
Probably the most common mistake in keyword research is using a plethora of keywords and pasting the same meta keyword tag on every web site page. This is SO not effective! Your keyword research needs to be page specific, focusing on only 2 to 5 keywords per page. It's more work, but combined with best practice SEO, it gives each site page a chance for higher ranking on its own.
3. Country Specific Keyword Research and Search Engine Reference
Keep in mind that keyword search terms can be country specific. Even though a country is English speaking, there are different keyword terms you must research - and then reference that country's search engine when doing your initial keyword research. For instance, the UK and Australia may have different expressions, terminology and spellings (i.e. colour, personalised). Referencing the terms in the corresponding search engine is an important element of keyword research that is often forgotten. So, for example, be sure to check the search terms on google.co.uk or au.yahoo.com. And, of course, if you have 3 to 4 really comprehensive research tools in your arsenal, you will be able to search for historical, global and country specific search terms easily and effectively.
4. Keyword Analysis - Cross referencing in the search engines
Once the majority of the keyword research has been done for a site page, it's time to plug those terms into the search engines to determine:
Whether it is really the desired niche keyword for that page
The competitiveness of your keywords, along with the strength of the competition
Whether the other sites listed for your keywords are truly your competitors
Whether the sites listed for your keyword are even related to your industry, products or services
These critical analyses of keyword phrases are often forgotten. Since the keyword research and analysis is the foundation of a successful SEO campaign, you certainly don't want to build your on-page optimization on the wrong niche keywords!
5. Ongoing Keyword Research - Repeat your keyword research on a consistent basis
While you may think that you have completed your keyword research analysis and laid a solid foundation for your SEO, you need to keep monitoring your keywords and tweak as necessary. Keywords can change from month to month as keyword search terms change, genres change and/or if your niche is within social portal networking sites - to name just a few. Maintaining ongoing keyword research is essential for best practice SEO.
Most Successful Strategy to Streamline Your Keyword Research Efforts:
Yes, many website owners will opt to do the keyword research and analysis themselves, but doing so usually has only a marginal effect on an SEO campaign. It's not the most successful strategy to use for the most effective results.
To be certain of your keyword data, accurate keyword analysis should be performed - and cross referenced - across multiple expert keyword tools.
Effective keyword research lays the ground work for effective SEO results and can help you kick-start the ranking process - perhaps even giving you a step up on your competitors.
The most successful strategy to streamline your keyword research efforts is to hire an expert. Focus your business efforts on your strengths and expertise and allow the SEO experts to effectively perform the keyword research analysis correctly.

Friday, June 29, 2007

21 Essential SEO Tips & Techniques


Small businesses are growing more aware of the need to understand and implement at least the basics of search engine optimization. But if you read a variety of small businesses blogs and Web sites, you'll quickly see that there's a lot of uncertainty over what makes up "the basics." Without access to high-level consulting and without a lot of experience knowing what SEO resources can be trusted, there's also a lot of misinformation about SEO strategies and tactics.
This article is the second in a two-part SEO checklist specifically for small business owners and webmasters. Last week, I shared 20 "don'ts." Naturally, this week addresses the "Do's"—things to make sure you include whether you're hiring an SEO company or doing it yourself.
Small Business SEO Checklist: The Do's
1. Commit yourself to the process. SEO isn't a one-time event. Search engine algorithms change regularly, so the tactics that worked last year may not work this year. SEO requires a long-term outlook and commitment.
2. Be patient. SEO isn't about instant gratification. Results often take months to see, and this is especially true the smaller you are, and the newer you are to doing business online.
3. Ask a lot of questions when hiring an SEO company. It's your job to know what kind of tactics the company uses. Ask for specifics. Ask if there are any risks involved. Then get online yourself and do your own research—about the company, about the tactics they discussed, and so forth.
4. Become a student of SEO. If you're taking the do-it-yourself route, you'll have to become a student of SEO and learn as much as you can. Luckily for you, there are plenty of great Web resources (like Search Engine Land) and several terrific books you can read. Aaron Wall's SEO Book, Jennifer Laycock's Small Business Guide to Search Engine Marketing, and Search Engine Optimization: An Hour a Day by Jennifer Grappone and Gradiva Couzin are three I've read and recommend.
5. Have web analytics in place at the start. You should have clearly defined goals for your SEO efforts, and you'll need web analytics software in place so you can track what's working and what's not.
6. Build a great web site. I'm sure you want to show up on the first page of results. Ask yourself, "Is my site really one of the 10 best sites in the world on this topic?" Be honest. If it's not, make it better.
7. Include a site map page. Spiders can't index pages that can't be crawled. A site map will help spiders find all the important pages on your site, and help the spider understand your site's hierarchy. This is especially helpful if your site has a hard-to-crawl navigation menu. If your site is large, make several site map pages. Keep each one to less than 100 links. I tell clients 75 is the max to be safe.
8. Make SEO-friendly URLs. Use keywords in your URLs and file names, such as yourdomain.com/red-widgets.html. Don't overdo it, though. A file with 3+ hyphens tends to look spammy and users may be hesitant to click on it. Related bonus tip: Use hyphens in URLs and file names, not underscores. Hyphens are treated as a "space," while underscores are not.
9. Do keyword research at the start of the project. If you're on a tight budget, use the free versions of Keyword Discovery or WordTracker, both of which also have more powerful paid versions. Ignore the numbers these tools show; what's important is the relative volume of one keyword to another. Another good free tool is Google's AdWords Keyword Tool, which doesn't show exact numbers.
10. Open up a PPC account. Whether it's Google's AdWords or Yahoo's Search Marketing or something else, this is a great way to get actual search volume for your keywords. Yes, it costs money, but if you have the budget it's worth the investment. It's also the solution if you didn't like the "Be patient" suggestion above and are looking for instant visibility.
11. Use a unique and relevant title and meta description on every page. The page title is the single most important on-page SEO factor. It's rare to rank highly for a primary term (2-3 words) without that term being part of the page title. The meta description tag won't help you rank, but it will often appear as the text snippet below your listing, so it should include the relevant keyword(s) and be written so as to encourage searchers to click on your listing. Related bonus tip: You can ignore the Keywords meta altogether if you'd like; it's close to inconsequential. If you use it, put misspellings in there, and any related keywords that don't appear on the page.
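For instance, a page's head section might carry a unique title and description like this (a minimal sketch; the domain, product and wording are hypothetical):
<head>
<title>Red Widgets - Acme Widget Co.</title>
<meta name="description" content="Hand-made red widgets in six sizes, shipped free. Browse the full red widget catalog or read our buying guide.">
</head>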
12. Write for users first. Google, Yahoo, etc., have pretty powerful bots crawling the web, but to my knowledge these bots have never bought anything online, signed up for a newsletter, or picked up the phone to call about your services. Humans do those things, so write your page copy with humans in mind. Yes, you need keywords in the text, but don't stuff each page like a Thanksgiving turkey. Keep it readable.
13. Create great, unique content. This is important for everyone, but it's a particular challenge for online retailers. If you're selling the same widget that 50 other retailers are selling, and everyone is using the boilerplate descriptions from the manufacturer, this is a great opportunity. Write your own product descriptions, using the keyword research you did earlier (see #9 above) to target actual words searchers use, and make product pages that blow the competition away. Plus, retailer or not, great content is a great way to get inbound links.
14. Use your keywords as anchor text when linking internally. Anchor text helps tell spiders what the linked-to page is about. Links that say "click here" do nothing for your search engine visibility.
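As a minimal sketch (the page and keyword are hypothetical), compare:
<a href="/red-widgets.html">red widgets</a>
<a href="/red-widgets.html">click here</a>
The first link hands the spider a keyword; the second hands it nothing.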
15. Build links intelligently. Submit your site to quality, trusted directories such as Yahoo, DMOZ, Business.com, Aviva, and Best of the Web. Seek links from authority sites in your industry. If local search matters to you (more on that coming up), seek links from trusted sites in your geographic area—the Chamber of Commerce, etc. Analyze the inbound links to your competitors to find links you can acquire, too.
16. Use press releases wisely. Developing a relationship with media covering your industry or your local region can be a great source of exposure, including getting links from trusted media web sites. Distributing releases online can be an effective link building tactic, and opens the door for exposure in news search sites. Related bonus tip: Only issue a release when you have something newsworthy to report. Don't waste journalists' time.
17. Start a blog and participate with other related blogs. Search engines, Google especially, love blogs for the fresh content and highly-structured data. Beyond that, there's no better way to join the conversations that are already taking place about your industry and/or company. Reading and commenting on other blogs can also increase your exposure and help you acquire new links. Related bonus tip: Put your blog at yourdomain.com/blog so your main domain gets the benefit of any links to your blog posts. If that's not possible, use blog.yourdomain.com.
18. Use social media marketing wisely. If your small business has a visual element, join the appropriate communities on Flickr and post high-quality photos there. If you're a service-oriented business, use Yahoo Answers to position yourself as an expert in your industry. With any social media site you use, the first rule is don't spam! Be an active, contributing member of the site. The idea is to interact with potential customers, not annoy them.
19. Take advantage of local search opportunities. Online research for offline buying is a growing trend. Optimize your site to catch local traffic by showing your address and local phone number prominently. Write a detailed Directions/Location page using neighborhoods and landmarks in the page text. Submit your site to the free local listings services that the major search engines offer. Make sure your site is listed in local/social directories such as CitySearch, Yelp, Local.com, etc., and encourage customers to leave reviews of your business on these sites, too.
20. Take advantage of the tools the search engines give you. Sign up for Google's webmaster Central and Yahoo's Site Explorer to learn more about how the search engines see your site, including how many inbound links they're aware of.
21. Diversify your traffic sources. Google may bring you 70% of your traffic today, but what if the next big algorithm update hits you hard? What if your Google visibility goes away tomorrow? Newsletters and other subscriber-based content can help you hold on to traffic/customers no matter what the search engines do. In fact, many of the DOs on this list—creating great content, starting a blog, using social media and local search, etc.—will help you grow an audience of loyal prospects and customers that may help you survive the whims of search engines.
Just like last week, this list could continue well beyond these 21 "DOs." Your additions are welcome in the comments.
With this checklist and last week's list of "Don'ts," you should be able to develop a good plan of attack for your SEO efforts for your small business.
Matt McGee is the SEO Manager for Marchex, Inc., a search and media company offering search marketing services through its TrafficLeader subsidiary. The Small Is Beautiful column appears on Thursdays at Search Engine Land.

Sunday, June 17, 2007

Determining the Size of a Class Object

There are many factors that decide the size of an object of a class in C++. These factors are:
Size of all non-static data members
Order of data members
Byte alignment or byte padding
Size of its immediate base class
The existence of virtual function(s) (dynamic polymorphism)
Compiler being used
Mode of inheritance (virtual inheritance)
Size of all non-static data members
Only non-static data members will be counted when calculating the sizeof a class/object.
class A {
private:
float iMem1;
const int iMem2;
static int iMem3;
char iMem4;
};
For an object of class A, the size will be the size of float iMem1 + the size of const int iMem2 + the size of char iMem4 (plus any padding; see below). Static members are really not part of the class object. They won't be included in the object's layout.
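A quick way to see this is to print sizeof(A) directly; here is a minimal sketch (the exact number depends on your compiler's padding rules, discussed below, but the static member never contributes):
#include <iostream>

class A {
private:
    float iMem1;       // counted: 4 bytes
    const int iMem2;   // counted: 4 bytes
    static int iMem3;  // not counted: stored outside the object
    char iMem4;        // counted: 1 byte, plus padding
};

int main() {
    // Typically prints 12 with 4-byte alignment: 4 + 4 + 1, rounded up.
    std::cout << "sizeof(A) = " << sizeof(A) << std::endl;
    return 0;
}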
Order of data members
The order in which one specifies data members also alters the size of the class.
class C {
char c;
int int1;
int int2;
int i;
long l;
short s;
};
The size of this class is 24 bytes. Even though char c will consume only 1 byte, 4 bytes will be allocated for it, and the remaining 3 bytes will be wasted (holes). This is because the next member is an int, which takes 4 bytes. If the integer were not aligned to the next 4-byte boundary, accessing or modifying it would take 2 read cycles. So the compiler aligns it for us, unless we specify some byte padding/packing. If I re-write the above class in a different order, keeping all the data members as below:
class C {
int int1;
int int2;
int i;
long l;
short s;
char c;
};
Now the size of this class is 20 bytes. In this case, the compiler stores c, the char, in the padding slot left over after the short, so no extra four bytes are needed for it.
Byte alignment or byte padding
As mentioned above, if we specify 1-byte alignment, the size of the class above (class C) will be 19 in both cases.
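Most compilers let you request 1-byte packing with a pragma. Here is a minimal sketch (the #pragma pack form below is accepted by both MSVC and gcc; the 19-byte result assumes 4-byte int and long, as this article does):
#include <iostream>

#pragma pack(push, 1) // force 1-byte alignment: no padding holes
class C {
    char c;    // 1 byte
    int int1;  // 4 bytes
    int int2;  // 4 bytes
    int i;     // 4 bytes
    long l;    // 4 bytes (assuming a 4-byte long)
    short s;   // 2 bytes
};
#pragma pack(pop) // restore the default alignment

int main() {
    std::cout << "sizeof(C) = " << sizeof(C) << std::endl; // prints 19
    return 0;
}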
Size of its immediate base class
The size of a class also includes the size of its immediate base class. Let's take an example:
class B {
...
int iMem1;
int iMem2;
};
class D : public B {
...
int iMem;
};
In this case, sizeof(D) will also include the size of B, so it will be 12 bytes - that is, sizeof(B) + sizeof(int iMem).
The existence of virtual function(s)
The existence of virtual function(s) will add 4 bytes for a virtual table pointer (vptr) to the class, which will be added to the size of the class. Again, in this case, if the base class of the class already has virtual function(s), either directly or through its base class, then this additional virtual function won't add anything to the size of the class. The virtual table pointer will be common across the class hierarchy. That is:
class Base {
public:
...
virtual void SomeFunction(...);
private:
int iAMem;
};
class Derived : public Base {
...
virtual void SomeOtherFunction(...);
private:
int iBMem;
};
In the example above, sizeof(Base) will be 8 bytes - that is, sizeof(int iAMem) + sizeof(vptr). sizeof(Derived) will be 12 bytes - that is, sizeof(Base) + sizeof(int iBMem). Notice that the existence of a virtual function in class Derived won't add anything more; Derived simply sets the vptr to its own virtual function table.
Compiler being used
In some scenarios, the size of a class object can be compiler specific. Let's take one example:
class BaseClass {
int a;
char c;
};
class DerivedClass : public BaseClass {
char d;
int i;
};
If compiled with the Microsoft C++ compiler, the size of DerivedClass is 16 bytes. If compiled with gcc (either c++ or g++), size of DerivedClass is 12 bytes. The reason for sizeof(DerivedClass) being 16 bytes in MC++ is that it starts each class with a 4 byte aligned address so that accessing the member of that class will be easy (again, the memory read/write cycle).
Mode of inheritance (virtual inheritance)
In C++, we sometimes have to use virtual inheritance. (One classic example is the implementation of a final class in C++.) When we use virtual inheritance, there will be an overhead of 4 bytes for a virtual base class pointer in that class.
class ABase {
int iMem;
};
class BBase : public virtual ABase {
int iMem;
};
class CBase : public virtual ABase {
int iMem;
};
class ABCDerived : public BBase, public CBase {
int iMem;
};
And if you check the size of these classes, it will be:
Size of ABase : 4
Size of BBase : 12
Size of CBase : 12
Size of ABCDerived : 24
Because BBase and CBase are derived from ABase virtually, they will also have a virtual base pointer. So, 4 bytes will be added to the size of each class (BBase and CBase). That is, sizeof(ABase) + sizeof(int) + sizeof(virtual base pointer).
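You can verify these numbers on your own compiler with a short program; a minimal sketch follows (note that on a modern 64-bit compiler pointers are 8 bytes, so the absolute values will be larger than the 4-byte-pointer figures used above):
#include <iostream>

class ABase { int iMem; };
class BBase : public virtual ABase { int iMem; };
class CBase : public virtual ABase { int iMem; };
class ABCDerived : public BBase, public CBase { int iMem; };

int main() {
    // Each virtually-derived class carries its own int plus a virtual base pointer.
    std::cout << "ABase      : " << sizeof(ABase) << std::endl;
    std::cout << "BBase      : " << sizeof(BBase) << std::endl;
    std::cout << "CBase      : " << sizeof(CBase) << std::endl;
    std::cout << "ABCDerived : " << sizeof(ABCDerived) << std::endl;
    return 0;
}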

Monday, June 4, 2007

Class Design in C++ Understanding Interfaces

When you're designing a class in C++, the first thing you should decide is the public interface for the class. The public interface determines how your class will be used by other programmers (or you), and once designed and implemented it should generally stay pretty constant. You may decide to add to the interface, but once you've started using the class, it will be hard to remove functions from the public interface (unless they aren't used and weren't necessary in the first place).
That doesn't mean you should include more functionality in your class than necessary just so that you can later decide what to remove from the interface. If you do this, you'll just make the class harder to use. People will ask questions like, "Why are there four ways of doing this? Which one is better? How can I choose between them?" It's usually easier to keep things simple and provide one way of doing each thing, unless there's a compelling reason why your class should offer multiple methods with the same basic functionality.
At the same time, just because adding methods to the public interface (probably) won't break anything, that doesn't mean you should start off with a tiny interface. If anybody decides to inherit from your class and you then add a function with the same name as one of theirs, you're in for a boatload of confusion. First, if you don't declare the function virtual, then an object of the subclass will have the function chosen depending on the static type of the pointer. This can be messy. Moreover, if you do declare it virtual, then you have the issue that it might provide a different type of functionality than was intended by the original implementation of that function. Finally, you just can't add a pure virtual function to a class that's already in use, because nobody who has inherited from it will have implemented that function.
The public interface, then, should remain as constant as possible. In fact, a good approach to designing classes is to write the interface before the implementation, because the interface is what determines how your class interacts with the rest of the world (which is more important for the program as a whole than how the class is actually implemented). Moreover, if you write the interface first, you can get a feel for how the class will work with other classes before you actually dive into the implementation details.
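As a minimal sketch of the static-type pitfall mentioned above (the class and function names here are hypothetical):
#include <iostream>

class Logger {
public:
    // Non-virtual: calls are resolved by the static type of the expression.
    void log() { std::cout << "Logger::log" << std::endl; }
};

class FileLogger : public Logger {
public:
    void log() { std::cout << "FileLogger::log" << std::endl; } // hides Logger::log
};

int main() {
    FileLogger f;
    Logger* p = &f;
    p->log(); // prints "Logger::log" - the pointer's static type decides
    f.log();  // prints "FileLogger::log"
    return 0;
}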
Inheritance and Class Design
The second issue of your class design is what should be available to programmers who wish to create subclasses. This interface is primarily determined by virtual functions, but you can also include protected methods that are designed for use by the class or its subclasses (remember that protected methods are visible to subclasses, while private methods are not). A key consideration is whether it makes sense for a function to be virtual. A function should be virtual when the implementation is likely to differ from subclass to subclass. Conversely, whenever a function should not change, it should be made non-virtual. The key idea is to decide whether to make a function virtual by asking if the function should always be the same for every class. For example, if you have a class designed to allow users to monitor network traffic and you want to allow subclasses that implement different ways of analyzing the traffic, you might use the following interface:
class TrafficWatch
{
public:
// Packet is some class that implements information about network
// packets
void addPacket (const Packet& network_packet);
int getAveragePacketSize ();
int getMaxPacket ();
virtual bool isOverloaded ();
};
In this class, some methods will not change from implementation to implementation; adding a packet should always be handled the same way, and the average packet size isn't going to change either. On the other hand, someone might have a very different idea of what it means to have an overloaded network. This will change from situation to situation, and we don't want to prevent someone from changing how this is computed - for some, anything over 10 Mbits/sec of traffic might be an overloaded network, and for others, it would take 100 Mbits/sec on some specific network cables.
Finally, when publicly inheriting from any class or designing for inheritance, remember that you should strive for it to be clear that inheritance models is-a. At heart, the is-a relationship means that the subclass should be able to appear anywhere the parent class could appear. From the standpoint of the user of the class, it should not matter whether a class is the parent class or a subclass. To design an is-a relationship, make sure that every function in the class makes sense for every subclass, so that subclasses don't inherit functions they cannot actually support. One example of having an extra function is that of a Bird class that implements a fly function. The problem is that not all birds can fly - penguins and emus, for instance. This suggests that a more prudent design choice might be to have two subclasses of birds, one for birds that can fly and one for flightless birds. Of course, it might be overkill to have two subclasses of bird, depending on how complex your class hierarchy will be. If you know that nobody would ever expect to use your class for a flightless bird, then it's not so bad. Of course, you won't always know what someone will use your class for, and it's much easier to think carefully before you start to implement an entire class hierarchy than it will be to go back and change it once people are using it.
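A minimal sketch of that two-subclass design (the class and member names here are hypothetical):
class Bird {
public:
    virtual ~Bird() {}
    // Behavior every bird shares belongs here.
};

class FlyingBird : public Bird {
public:
    virtual void fly() { /* ... */ } // only birds that can actually fly expose fly()
};

class FlightlessBird : public Bird {
    // No fly(): a penguin handled through a Bird* can never be asked to fly.
};

int main() {
    FlyingBird sparrow;
    sparrow.fly();    // fine: sparrow is a FlyingBird
    FlightlessBird penguin;
    // penguin.fly(); // does not compile: the design prevents the nonsense call
    return 0;
}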

Wednesday, May 23, 2007

Introduction to SEO

A search engine is how most of your potential visitors will discover your site. But for that to happen, you first need the search engines in turn to discover your site.
Only when the search engines 'understand' that your site is one of the best online places to find the kind of quality content it provides will they 'recommend' your site to people interested in similar content.
If all that sounds a bit abstract, allow me to elaborate.
Search Engines like Google, Yahoo and MSN routinely search the web for content and try to relate content to some keywords which they consider the focus of your content. So for example this tutorial on SEO is related to the term SEO. A web surfer looking for articles on SEO will query these search engines for the term 'SEO' and will get results based upon what search engines think are good sites for content related to the term 'SEO'.
In fact, a search on Google for SEO yields about 17,600,000 results at the time of the writing of this tutorial. Our web surfer looking for SEO related content would definitely not look beyond 15 or at most 20 initial links. The question, then, is how the search engine decides which links to place before the others in such a search result.
Search engines rank the pages for the specific search term and then show the pages in the order of those rankings. Google associates what is called a PageRank with each site. PageRank runs from 1 to 10, with a PageRank of 10 indicating the most important sites, and importance decreasing as the PageRank goes down. Other search engines like Yahoo and MSN have similar ranking strategies. SERP (Search Engine Result Page) rankings in Google depend on the PageRank and also on how well the web site relates to your searched words. Given two web sites which relate equally well to your search term, the web site with the higher PageRank will be ranked higher in the SERPs.
Search Engine Optimization or SEO refers to how you can optimize your site such that your site is more aligned to the criteria that search engines use to rank web sites.
There is a lot of content online which will tell you a lot of stuff that you can do for SEO. Unfortunately most of these articles will tell you stuff that will waste a lot of your time and will not yield any desirable results. Worse, quite a few will tell you stuff that is outright wrong and is considered search engine spamming by major search engines. Using such techniques can result in your site being blacklisted from their SERPs. Fortunately, there are still a lot of right things that you can do so that search engines rank your site favorably on your targeted search term.
The best part is, right here on this tutorial we tell you everything you should be doing and everything you shouldn't be doing for SEO.

Wednesday, May 16, 2007

Is Google Killing SEO?


Paid search listings could become more relevant than organic listings because of the emphasis on inbound links in search algorithms.
It just occurred to me that Google is killing organic SEO. Google's paid search algorithm is allowing the user to be the ultimate SEO. Based on user search behavior -- the type of links clicked on and the amount of time spent on a landing page after leaving Google -- your paid search ads become more relevant, undermining traditional SEO efforts to bring client sites to the top of the SERPs.
Google's paid search algorithm acts almost like a rating system. Google will discover the most popular sites based on user preferences, allowing it to serve highly relevant results based on paid search landing pages. As a result, search engines will likely start serving more sponsored links, and the organic links will start to fade away.
GoogleBot likes info
The object of B2B and ecommerce commercial websites is to sell products and services online. These sites have an increasingly difficult time ranking well on Google because the GoogleBot eats up information and spits out products and services. Therefore, organic links are becoming less relevant and have low quality.
The antidote to low-quality organic links is pay-per-click advertising and strategic ad placements such as links on great information sites. These promotions are very effective, especially when displayed on vertical search engines (VSEs). In fact, this could become the ultimate way to do SEO in the future.
Saga of the broken algos
I'm not the only guy who sees a weakness in the Google search algorithm. In his article, "Are User-Generated Websites Breaking The Search Engines' Algorithms?" Tim Daly suggested that paid search listings could become more relevant than organic listings because of the emphasis on inbound links in search algorithms.
Google rewards sites with links coming from important, authoritative sites. The company's reasoning is that a site with numerous quality inbound links must be popular, ergo it is a quality site. Sites with higher PageRank scores are given higher rankings than those with a lower PageRank. Once Google gained popularity based on PageRank, all the other search engines followed suit, so this ranking system dominates. Perhaps at the time, it was an excellent ranking variable, but it's becoming outdated today.
As Daly shows with his example of Wikipedia's dominance in the SERPs, an abundance of quality links does not necessarily an authoritative site make. This is a subjective take, based on the weight given to inbound links by Google's PageRank. As demonstrated by the questionable accuracy of some Wikipedia content, it takes more than links alone to prove authority.
Bridge over troubled waters
Research shows that general search engines are losing ground to vertical search engines. Outsell reported a 31.9 percent search failure rate among business users on major search engines. This means that roughly one-third of user queries yield unsatisfactory results.
Convera went further by saying general search does not meet the needs of today's business and professional users. General search queries result in time inefficiencies and unmet needs as critical information becomes increasingly difficult to find quickly on the web.
In contrast to general search engines, vertical search engines have built-in preference mechanisms and are constantly rolling out improved features. In my opinion, bidding is the best qualifier. VSE users naturally weed out faulty search engine algorithms. Clients bidding high on irrelevant keywords for the sake of attracting traffic would have their budgets zapped, resulting in a dreadful ROI, and business users wouldn't stand for this.
VSEs also have built-in merchant rating systems similar to those of a power seller on eBay. This further refines the search relevance.

Tuesday, May 15, 2007

Some Things To Consider When Evaluating Your Website

You have either put a lot of effort into your website or you have paid someone else a lot of money to put the effort in for you. Either way, whatever the purpose of the website, you want to get the most out of it. The question now becomes: how can you tell if your website is likely to succeed?
Why?
The first thing to do is to ask yourself why you have set up the website. Are you trying to sell a product or provide information or something else? What do you want to happen when a visitor lands on your pages? "For a man without a destination no wind is favourable" (An old saying attributed to many). If you do not know exactly what you want to happen, how can you expect the visitors to your website to know and do it? You are the one who ought to have the site set up to direct people to their destination. If you don't know what that is, then all is lost.

Your visitors probably know why they visited your site. You too must know why they came and help them do what they came to do. If your website does not provide what they need they will move on to another one. Just because you are getting all the traffic you could hope for does not mean that your site will succeed.
Your Website's Conversion Rate
You will need to measure your success rate. There are a number of ways to do this. One is the conversion rate. Simply put, the conversion rate is the rate at which you convert visitors into buyers. Or, if you are not selling, it is the rate at which you convince people to do whatever it is you need them to do. It could be to sign up for your newsletter or subscribe to something else, etc. If you have one hundred visitors to your website per day and you convert two, your conversion rate is two percent.
It is a reasonably good measure of the quality of your website. If your site is not converting, you will know that you need to make changes to the site. However, it could also mean that your marketing or advertising campaign is sending untargeted traffic to your site. In other words, it is sending visitors who are not in the least bit interested in what you have to offer.
SEO & Traffic Generation
The whole point of Search Engine Optimisation (SEO) is traffic generation. The idea is that you optimise or fine-tune your website so that it gets to the top of the search results when people enter a search term that is contained in your website. You do this to get traffic. If your site is not properly optimised, people are unlikely to find it. Unless, of course, you have found some narrow niche that nobody else has heard of, which will not bring a lot of visitors. SEO involves using the correct keywords in the correct way, arranging the content and menus in the right way, and, most important of all, link building.
At the time of writing, the single most important thing to do to rank highly in the search engines is to increase the number of links to your site from quality websites which have content related to the subject of your site.
Content
It may sound obvious to most people, but the content of your website should be based on the subject of your product. For example, if your website is set up to sell computers, then it should contain articles about computers and computing, not gardening articles. If you have articles on unrelated subjects, they will only serve to confuse your visitors and undermine your website's and your own credibility as a source of products and information about computers or whatever your website is promoting.
The content should be keyword rich but not saturated or you may show up on spam radar. The content should be broken up into manageable paragraphs and properly laid out with headers for each section, making it easier to read and navigate.
Navigation
Getting the visitors to your website is only half of the battle. You then must give them what they want. What do they want? Well, the first thing they want is to find their way around your website without pulling their hair out in frustration. These days there is far too much competition on the internet for that to happen. They will move to another website at the first sign of difficulty.
Arrange all the links and buttons in a way that is easy to read and understand. Do not over fill each page. If there is too much choice people do not make a choice they just get confused and... you guessed it. They move to another website.
The first page they land on, usually the index page, should be interesting. It should be obvious to them that they have landed on a page with the content they came for, and the way to navigate to that content should be very clear. Do not try to give them everything on the front page.
How easy is it for your visitors to accomplish what they came to do? Do they need to fill out pages of information, or can they do their business in a few clicks? If people have to puzzle out your website, they will move on, unless you have something so attractive and necessary that they will stay at all costs. If you have a product like that, then you cannot charge enough for it.
It is always a good idea to have a professional web designer look over the site and point out any obvious flaws. I say obvious flaws because not all corrections are obvious and are often discovered through trial and error. You should keep tweaking the website in a continuous attempt to improve it. There is always room for improvement. Though it is also said that you should not fix something that works. I think that the best thing is to make gradual changes and if they are not a major improvement, at least they will not be a major disaster.
If you are not confident enough to do the coding and graphics etc. for your own site, there are many professional web designers out there who live for it. So don't let it stop you from getting you name, product or information out there.

About the Author: Steven Collins is a web designer at Desktop Web Design.

Friday, May 11, 2007

How to Make your URLs SEO Friendly



Without search engine optimization many websites stand the chance of not being fully indexed by search spiders therefore risking not being ranked high enough (if at all) in the search engine results pages (SERPs). The resulting poor conversion rate makes the website a dead weight, demoralizes your staff and could threaten your business.
URL Rewriting
This situation is quite easy to avoid by performing some cosmetic operations to the site. One of these operations, URL rewriting, is considered by some rather difficult and a bit time-consuming but can be extremely effective and rewarding in the long run.
Why It Is Nice to Have Clean URLs
There are two very strong reasons for you to rewrite your URLs, the first of which is related to Search Engine Optimization. Search engines are much more at ease with URLs that don't contain long query strings. A URL like http://www.example.com/4/basic.html can be indexed very easily, whereas its dynamic form, http://www.example.com/cgi-bin/gen.pl?id=4&view=basic, can potentially confuse search engines and cause them to miss important information contained in the URL - and you to miss those anticipated high rankings. With clean URLs, the search engines can distinguish folder names and can establish real links to keywords. Query string parameters continue to be an impediment in many search engines' attempts to fully index sites. Several SEO professionals agree that dynamic (or dirty) URLs are not very appealing to web spiders, while static URLs have greater visibility in their electronic eyes.
The second strong reason for URL rewriting would be the increased usability for web users and "maintainability" for webmasters. Clean URLs are much easier to remember. A regular web surfer will not remember a URL full of parameters, and would obviously be discouraged by the idea of typing the entire URL. This is less likely to happen with clean URLs. Easily remembered URLs help you create a more intuitive Web site and make it easier for your visitors to anticipate where they can find the information they need. Webmasters tend to find that maintaining static URLs is a much easier task than working with dynamic ones. Static URLs are more abstract, and thus more difficult to hack. Dynamic URLs are more transparent, allowing possible hackers to see the technology used to build them and thus facilitating attacks. Also, given the length of dynamic URLs, it is very possible for webmasters to make mistakes during maintenance sessions, resulting in broken links. Finally, when static URLs are used, the links to the site's pages will still remain valid should it be necessary to migrate a site from one programming language to another (e.g. from Perl to Java).
Dashes vs. Underscores
Websites that still use underscores in their URLs are becoming scarcer and scarcer. Some say that people who still use underscores are "old school", while dashes seem to be used far more often these days. A usability-related reason for using dashes rather than underscores is the elimination of the confusion created between a space and an underscore when the URL is viewed as a link, or when printing such a URL. More to the point, the chances that a combination of keywords contained in your Web site is included in the SERPs increase greatly when using dashes. For example: a URL that contains "seo_techniques" will be shown by the search engine only if the user searches for seo_techniques (this kind of search is rarely performed); whereas searches for "seo", "techniques", or "seo techniques" give your URL containing "seo-techniques" a better chance of being displayed on the SERPs. The dash will help you more than you can imagine, by greatly improving your visibility on the Web.
How to Rewrite URLs
The principle of URL rewriting is setting up a "system" on the host server that allows it (the server) to know how to interpret the new URL format. What actually happens when one decides to rewrite the URLs of a website is called masking the dynamic URLs with static ones. This means that URLs that previously contained query strings with elements such as "?", "+", "&", "$", "=", or "%" will instead use the more search engine friendly "/" (slash), presenting themselves in a simplified form. To help you with cleaning your URLs, here are some rewriting tools and engines, some free of charge, others fee-based.
Online / Open Source Tools
Free online URL rewriting
Open source URL Rewriter for .NET / IIS / ASP.NET
Open source rewrite module tuned for ASP.NET 2.0
mod_rewrite
This is the most common non-fee-based rewriting engine. It is a module for the Apache HTTP Server that allows the easy manipulation of URLs. The use of this module requires enabling the RewriteEngine on your Apache server. Then, rewrite rules must be defined (you can even set conditions for each rule), allowing requests to be rewritten as they come in. In terms of SEO, mod_rewrite can be helpful if you have complex URLs that contain more than 2 parameters. In other words, if one of your dynamic URLs is accessed, the mechanism behind mod_rewrite will "translate" it into a shorter, friendlier, static-looking URL.
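As a minimal sketch, here is what such a rule might look like in an Apache .htaccess file, using the example.com URLs from earlier in this article (the pattern is illustrative; adapt it to your own URL scheme):
RewriteEngine On
# Serve the clean URL /4/basic.html from the dynamic script behind the scenes
RewriteRule ^([0-9]+)/([a-z]+)\.html$ /cgi-bin/gen.pl?id=$1&view=$2 [L]
With this rule in place, a visitor (or spider) requesting http://www.example.com/4/basic.html is internally handed the output of /cgi-bin/gen.pl?id=4&view=basic, while the clean URL is all the outside world ever sees.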
Fee-Based Tools
ISAPI_Rewrite
The Internet Server Application Program Interface (ISAPI) is another URL manipulation engine that functions in a similar way to Apache's mod_rewrite, the difference being that it is designed specifically for Microsoft's IIS (Internet Information Server).
IISRewrite
IISRewrite is a stripped-down implementation of Apache's mod_rewrite module for IIS. It is a rule-based rewriting engine that allows a webmaster to manipulate URLs on the fly in IIS.
URL Examples
Here are some examples of how URLs can look before and after rewriting:
Example 1:
Dynamic URL: http://www.companyname.com/products/items.php?id=x&model=y&variety=z (before rewriting)
Static URL: http://www.companyname.com/x/y/z.html (after rewriting)
Example 2:
Dynamic URL: http://www.example.com/cgi-bin/gen.pl?id=4&view=basic (before rewriting)
Static URL: http://www.example.com/4/basic.html (after rewriting)
Conclusions
URL rewriting, combined with other SEO techniques, can put you on the right track in the race for high organic rankings. Be aware that rewritten (and, presumably, better looking and more effective in terms of search engine ranking) URLs cannot substitute or make up for a poorly designed Web site. Don't expect miracles. Nevertheless, when you decide that your site needs a makeover and start rewriting your URLs, make sure that:
You keep them as short as possible (to increase usability),
You use dashes rather than underscores (to give your site a better chance of ranking as high as possible in the SERPs),
You use lowercase letters rather than uppercase ones (to avoid those case sensitive situations),
The technology you have used cannot be detected in any of your URLs (to prevent possible hacker attacks).

Tuesday, May 8, 2007

What is a text editor?

A text editor is used to edit plain text files. Text editors differ from word processors, such as Microsoft Word or WordPerfect, in that they do not add additional formatting information to documents. You might write a paper in Word, because it contains tools to change fonts, margins, and layout, but Word by default puts that formatting and layout information directly into the file, which will confuse the compiler. If you open a .doc file in a text editor, you will notice that most of the file is formatting codes. Text editors, however, do not add formatting codes, which makes it easier to compile your code.
Why should I use a text editor?
Text editors have a feature set different from that of a traditional word processing program. For example, most won't let you include pictures or tables, or double-space your writing. The features of text editors vary from implementation to implementation, but there are several kinds of features that most editors have. Listed below are some of the most common and useful ones.

Syntax highlighting

Syntax highlighting is a very useful feature. It means that the editor will highlight certain words or types of syntax specific to a language. For example, if you have C++ highlighting turned on, the editor might make all C++ control flow keywords appear green. This makes it much easier to follow the flow of your program. As another example, the editor might have all quoted text show up as light blue. This way, if you forget to include an opening or closing quotation mark, you will quickly realize it because of the color of the text on your screen. A text editor might also indicate mismatched parentheses or brackets by turning them red; if you have a closing brace with no corresponding opening one, the color will tell you that you made a syntax error somewhere.
//Here is an example of what text might look like in your editor.
//This text is colored because it is a comment.
if (x > 5)
{
//The closing parenthesis is red because it is unmatched.
x = 5 - ((3 + y) * (8 + (z / 24))));
}
Versatility
How does the editor know which words to highlight? Good question. The editor knows what language you are programming in. It does this by either having you tell it, or, like Vim, detecting the suffix of the file. If you are working on a file named code.cc, it will see the .cc and know to use C++ rules, but if you are working on one called code.html, it will apply HTML rules instead. Some editors know hundreds of languages, ranging from the commonplace (C, Java, Perl) to the truly obscure (TADS, ABAQUS). This means that you can use the same editor to program in practically any language you can think of and still enjoy the same feature and command set that you've become accustomed to.

Automatic indenting

Automatic indenting is probably the most useful feature of a text editor. Would you rather deal with code that looks like this (taken from a fifteen-puzzle):
int get_col (int tile_id)
{
/*Cycle through...*/
int i = 0, j = 0;
while (i < Dim)
{
if (board[i][j] == tile_id)
{
return i;
}
/*If you've hit the end of the row, move to the beginning of the
* next.*/
if (i == Dim-1)
{
j++;
i = 0;
/*Otherwise move to the next space in the row.*/
}
else
{
i++;
}
}
/*This is only to get rid of the warning.*/
return i;
}
or code that looks like this?

int get_col ( int tile_id)
{
/*Cycle through...*/
int i = 0, j = 0;
while (i < Dim) {
if (board[i][j] == tile_id) {
return i;
}
/*If you've hit the end of the row, move to the beginning of the next*/
if (i == Dim-1) {
j++;
i = 0;
/*Otherwise move to the next space in the row.*/} else {
i++; } }
/*This is only to get rid of the warning.*/
return i; }

I thought so. A text editor will spare you the trouble of having to put in all the tabs yourself by adding them automatically. This has the benefit of letting you follow the control flow through indentation, so that you can make sure you are in the right block of code as you write.

Quick navigation features

If your program is anything above trivial, you'll want to be able to move through it quickly to find certain functions, instances of certain variables, or particular lines. Text editors typically have more sophisticated movement capability than word processors. For example, say you're compiling a program and find that you have a syntax error on line 312. In Vim, all you have to do is type 312G, and the cursor will move to line 312. (How does Vim know you don't want to type the characters 312G into the document? More on that in the links at the end of the article.)


Which text editor should I use? What's the difference between them? How do I get one? How much do they cost?

There are many, many different editors available, with Vim and Emacs being two of the most popular, portable, and powerful. Most editors (Vim and Emacs included) are free, but some are shareware. I use Vim, but each editor has its adherents. For a good listing of some of the best editors available for your platform, check out this FAQ on text editors. (It's aimed at STATA users, but all the editors listed are just fine for writing C++ code.)

Saturday, May 5, 2007

What is a compiler?

A compiler is necessary to turn your source code (.c, .cpp, or .cc files) into a running program. If you're just starting out, you'll need to make sure that you have one before you start programming. There are many compilers available on the internet and sold commercially in stores or online. If you have Mac OS X, Linux, or another *nix variant (such as Unix or FreeBSD), you likely have a compiler such as gcc or g++ installed already.
Compiler terminology
Compile Colloquially, to convert a source code file into an executable, but strictly speaking, compilation is an intermediate step that translates source files into object files
Link The act of taking compiled code and turning it into an executable
Build A build refers to the process of creating the end executable (what is often colloquially referred to as compilation). Tools exist to help reduce the complexity of the build process - makefiles, for instance. (See the sketch after this list.)
Compiler Generally, compiler refers to both a compiler and a "linker"
Linker The program that generates the executable by linking
IDE Integrated Development Environment, a combination of a text editor and a compiler, such that you can compile and run your programs directly within the IDE. IDEs usually have facilities to help you quickly jump to compiler errors.
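As a rough sketch of how these steps look with gcc, assuming a single source file named main.c (the file names here are just examples):
gcc -c main.c -o main.o (compile: translate the source into an object file)
gcc main.o -o myprogram (link: combine object files into an executable)
gcc main.c -o myprogram (build: both steps in one command)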
Understanding the Compilation Process
The compilation and linking process A brief description of exactly what happens when you compile a program, covering the above terms in more detail
Dealing with Compilation Errors Some suggestions for hunting down compiler and linker errors in your code
Understanding Compiler Warnings Learn what compiler warnings mean and the hows and whys of fixing them
What compilers are available?
Windows/DOS
Borland Find out how to download and set up Borland's free command-line compiler
DJGPP Read about DJGPP, a DOS-based compiler
Dev-C++ and Digital Mars Read about Dev-C++, a good windows based compiler, and Digital Mars
Windows Only
Microsoft Visual C++ Read about Visual C++
*nix
g++ is a C++ compiler that comes with most *nix distributions.
gcc is a C compiler that comes with most *nix distributions.
Macintosh
Apple's own Macintosh Programmer's Workshop is a compiler I've never used, but it is direct from Apple and free.
Codewarrior My experiences with Codewarrior are limited to Java programming, though it's gotten good reviews in the past. It's a full IDE rather than just a compiler, meaning that it has a text editor and debugger integrated with the compiler so you can do all your work from one place.

Wednesday, May 2, 2007

Affiliate Program Management

Affiliate marketing programs provide a powerful and popular way for many companies to generate significant revenue from the Internet, but not every company is properly positioned for a successful program. At 10x Marketing we believe that each company ought to be accurately evaluated before engaging in an affiliate program. Our assessment service is designed to help companies know whether or not an affiliate marketing program is right for them. At the end of this assessment you'll have a good understanding of your chances of success and the risks involved with this type of Internet marketing.
What is an Affiliate Program?
Affiliate programs are a type of eCommerce program which generates traffic to a company’s website by convincing affiliate sites to link to them. The parent company pays each affiliate company a percentage of the sale for every online customer it gets as a result of a direct link from the affiliate’s website to its own. The affiliate company is essentially paid a commission for generating traffic, leads, or sales to the parent company’s website.
For example, a company that sells fitness gear would provide a link to the site of a fitness club that offers related (but not competing) products and services. The fitness gear company would then receive a commission on the sales gained from any buyers they direct to the fitness club. Since the fitness club gets more revenue from the new customers, both companies benefit.
How Can 10x Marketing Help Me Run an Affiliate Program?
10x Marketing employs an experienced Affiliate Marketing Team that excels in designing and implementing successful affiliate programs, one of several traffic generation services. Our team provides the following benefits and advantages to your company:
Set up, manage, and monitor your entire affiliate program.
Design professional ads to attract potential customers on a regular basis.
Recruit new affiliates to your program that continually drive business to you.
Help you to design your site specifically to convert those visitors into buyers of your products and services.
Keep you up-to-date by monitoring, maintaining, and reporting the success of your program on a regular basis.
In short, 10x Marketing has the experience, tools, and expertise to help you with all of your affiliate marketing needs. We also welcome new affiliates to our current programs. Give us a call. We'll be happy to answer any questions you may have.
How Do I Start?
Contact us to see if Internet marketing is right for your company. There's no obligation and no better time to start generating more revenue from the Internet.

10x Marketing eBLAST Services


10x Marketing offers eBLAST (E-mail Business Leads and Sales Tool), which combines both e-mail marketing and newsletter services to provide small to large businesses with the following:
Wider client audiences
Better client relationships
Industry expert status
The opportunity to obtain and retain mind share
The chance to educate and inform clients and associates in regards to products/services
Why E-mail Marketing?
Comparing the ROI percentages obtained via various online advertising methods, opt-in e-mail, or e-mail marketing, comes second only to pay-per-click (PPC) marketing, which is also a valuable marketing option for any business. E-mail marketing is five times more effective than direct mailing and 25 times more effective than banner ads. E-mail marketing generates immediate results, which are easily tracked and measured. Best of all, e-mail marketing is generally preferred—by both senders and recipients—to direct mail, telemarketing, radio, or TV (Source: ConstantContact.com).
10x Marketing's eBLAST newsletter services include:
eBLAST Core Services
eBLAST Stats Services
eBLAST Plus Services
Additional Services
eBLAST Core Services
The initial eBLAST core services work like this: 10x Marketing receives your client list, removes all duplicate entries, and validates all e-mail addresses. This list is inserted into a database. Then, you choose the HTML template of your choice, on which 10x will also provide links that allow your clients and associates to subscribe to, forward, or unsubscribe from your newsletter. You then create a short e-mail text and 10x performs a series of test runs to make sure that every eBLAST will be sent according to plan. After the eBLAST trial runs, editorial adjustments will be made as needed and 10x will begin to send regularly scheduled eBLASTs. All eBLAST statistics will be monitored, and reports regarding these statistics will be sent to you for business-related analysis.
eBLAST Stats Services
The eBLAST stats services include all of the eBLAST core services, plus a statistics sub-account, which provides you with a login that allows you to view your eBLAST statistics at any time. The eBLAST stats services also include a private label opt-out footer complete with your company's logo, 5,000 credits (or e-mails) per month, and an e-mail capture form on your website. The option to buy additional discounted credits is also available.
eBLAST Plus Services
The eBLAST plus services include the eBLAST core services plus a full-access sub-account, which includes a login to view real-time statistics, a login to send your own blasts (this is optional), a private label opt-out footer with your logo, 5,000 credits per month, up to 2 hours of consulting and/or technical support, up to 2 hours of custom design work, up to 5 hours of writing services (or 1 feature article), and an e-mail capture form on your company's website. The option to buy additional discounted credits is also available with the eBLAST plus services package.
Additional Services
Additional eBLAST newsletter services include:
A custom designed e-mail template
E-mail personalization with up to 25 fields
Private label opt-out footer with client’s logo
E-mail capture form on client’s website
Writing or Editing E-mail Templates
Copy Writing
Initial Story Planning
Research
Interviewing
Writing
Archiving
PDF Version
HTML Version
PDF and HTML Version
Add archives to client’s website
Questions?
For more information, please contact a 10x Marketing representative.

Search Engine Marketing

While email remains the most popular activity on the Internet with about 95% of all Internet users engaging in email activities, it is important to understand that using search engines is the second most popular activity among all Internet users worldwide. About 88% of all Internet users use search engines to find what they're looking for on the Internet. There is little doubt that potential customers are currently using search engines to search for the products and services your company sells. The real question then is, "What are you doing to help them find you, instead of your competitors?"
"My media director would say that if you aren't putting money into search engines you are letting business walk out the door."
Bruce Carlisle, CEO, SF Interactive
Imagine if everybody who visited your website was actively searching for the exact things you sell. What would that be worth to you? This is precisely what search engine marketing does for your business.
There are basically two ways to use search engines to your advantage. The first is to obtain free traffic or natural search results by getting your website listed among the top 10 results of specific searches. The second way to use search engines to your advantage is to buy your way to the front page of specific searches. This strategy is called pay-per-click or sometimes cost-per-click.
If you know how to use search engines to your advantage, you can get your website listed on the first page of a given search and redirect pre-qualified visitors to your website on a regular basis.
The result of this effort, if done properly, will be more visitors, more buyers, and more revenue for your company from the Internet. Contact 10x Marketing today and begin taking advantage of this highly profitable way to improve your bottom line.
How Do I Start?
Contact us to see if Internet marketing is right for your company. There's no obligation and no better time to start generating more revenue from the Internet.

Tuesday, May 1, 2007

Solving Cool Problems with Genetic Algorithms

Genetic algorithms are a programming technique for solving problems whose candidate solutions are representable as strings (hence the name Genetic Algorithm - the programming model is based on DNA). In terms of practical value, genetic algorithms are useful for solving problems in which the solutions are difficult to find by following a specific algorithm designed to solve the problem (using genetic algorithms in place of predesigned algorithms such as Dijkstra's algorithm for path finding just wouldn't make sense). A genetic algorithm functions as a sort of systematized brute-force approach. Problems genetic algorithms are valuable for solving include scheduling problems, constraint satisfaction problems, and other problems that require searching a large number of possibilities. Genetic algorithms can be applied to protein folding or even tuning Linux kernel performance.
A simpler example, just to get the point across, is finding a five-digit number that acts as the best solution to an expression. For example, if you wish to find the number that makes the expression x^2 + 2x - 11 equal to 0, you could of course use brute force to solve the equation, but a genetic algorithm can also be used, and if you have a very complex expression, it may be of great value, especially when one considers the time saved over brute force. In a sense, all genetic algorithm problems boil down to solving complex expressions or sets of expressions, as all problems are representable in that fashion.

Genetic algorithms work from the same basis as evolutionary theory. A genetic algorithm has several components: a pool of solutions, a method of evaluating the effectiveness of each solution, a breeding function that combines the best solutions into new solutions, and a mutation function. The pool of solutions do not compete for resources; rather, each solution is tested by an evaluation function (called the "fitness" function), which gives it a ranking based on its effectiveness at solving the problem compared to the other solutions. The best solution strings are the ones that are ranked highest (that are the most "fit").

The breeding function takes two of the better-performing solutions and combines them into a new solution. It should repeat the process of randomly selecting two solutions and breeding them, with the better-performing solutions given a higher percentage chance of being selected. The breeding function generally works by taking slices of each solution and splicing them together into a new one. Solutions are often represented as strings, so generally a breeding function will take fragments of random lengths from each string and concatenate them to form a new string. Each fragment should be placed into the location in the new string that corresponds to its location in the old string. For example, if a string fragment comes from positions 5 through 8 in the first string being bred, it should be placed into positions 5 through 8 in the new child solution.

After the strings have been bred and the set of potential solutions has been refilled, the mutation function comes into play. The mutation function is important because it introduces an element of randomness that allows variation in the solution sets, which otherwise would stagnate and have no advantage over a hand-crafted solution. Mutations may diminish the strength of some solutions, but in general they increase the overall value of the solution set; by including a very small mutation rate, you introduce new traits that might never otherwise have existed within the pool. This allows you to explore a larger group of possibilities and avoid stagnation. In fact, many other AI techniques forgo the idea of breeding solutions and work simply by making small "mutations" or changes to a potential solution.

Genetic algorithms can do some amazing things and solve very complex problems. Nevertheless, the technique requires having a way of evaluating possible solutions -- this is one of the most difficult problems with genetic algorithms. The second challenge is finding a good way to represent solutions to the problem as strings. Once these are sorted out, a genetic algorithm may be a good approach to your problem.
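To make this concrete, here is a minimal sketch in C of the five-digit example above: it evolves digit strings to minimize |x^2 + 2x - 11|. The population size, mutation rate, generation count, and the pick-two-compare selection are illustrative choices of mine, not prescriptions from the article.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define POP    50     /* size of the pool of solutions */
#define DIGITS 5      /* each solution is a five-digit string */
#define GENS   200    /* number of breeding generations */
#define MUT    0.05   /* small mutation rate, as discussed above */

/* Decode a digit string into an integer. */
static long long decode(const char *s) {
    long long x = 0;
    for (int i = 0; i < DIGITS; i++) x = x * 10 + (s[i] - '0');
    return x;
}

/* Fitness function: a lower |x^2 + 2x - 11| ranks as more fit. */
static long long fitness(const char *s) {
    long long x = decode(s);
    long long v = x * x + 2 * x - 11;
    return v < 0 ? -v : v;
}

/* Selection: compare two random solutions and return the fitter one,
   so better-performing solutions get a higher chance of breeding. */
static const char *pick(char pop[POP][DIGITS]) {
    const char *a = pop[rand() % POP], *b = pop[rand() % POP];
    return fitness(a) < fitness(b) ? a : b;
}

int main(void) {
    char pop[POP][DIGITS], next[POP][DIGITS];
    srand((unsigned)time(NULL));

    /* Random initial pool of solutions. */
    for (int i = 0; i < POP; i++)
        for (int j = 0; j < DIGITS; j++)
            pop[i][j] = '0' + rand() % 10;

    for (int g = 0; g < GENS; g++) {
        for (int i = 0; i < POP; i++) {
            /* Breeding: splice fragments of two parents, each digit
               keeping its original position, as described above. */
            const char *a = pick(pop), *b = pick(pop);
            int cut = rand() % (DIGITS + 1);
            for (int j = 0; j < DIGITS; j++)
                next[i][j] = (j < cut) ? a[j] : b[j];
            /* Mutation: occasionally randomize one digit. */
            if ((double)rand() / RAND_MAX < MUT)
                next[i][rand() % DIGITS] = '0' + rand() % 10;
        }
        memcpy(pop, next, sizeof pop);
    }

    /* Report the fittest solution found. */
    int best = 0;
    for (int i = 1; i < POP; i++)
        if (fitness(pop[i]) < fitness(pop[best])) best = i;
    printf("best x = %lld, |x^2+2x-11| = %lld\n",
           decode(pop[best]), fitness(pop[best]));
    return 0;
}

Since the roots of x^2 + 2x - 11 are not integers, the best an integer solution can do is x = 2, where the expression evaluates to -3; runs of this sketch should converge to that value.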

Monday, April 30, 2007

Binary Trees: Part 1

The binary tree is a fundamental data structure used in computer science. The binary tree is a useful data structure for rapidly storing sorted data and rapidly retrieving stored data. A binary tree is composed of parent nodes, or leaves, each of which stores data and also links to up to two other child nodes (leaves), which can be visualized spatially as below the first node, with one placed to the left and one placed to the right. It is the relationship between the leaves linked to and the linking leaf, also known as the parent node, which makes the binary tree such an efficient data structure. The leaf on the left has a lesser key value (i.e., the value used to search for a leaf in the tree), and the leaf on the right has an equal or greater key value. As a result, the leaves on the farthest left of the tree have the lowest values, whereas the leaves on the right of the tree have the greatest values. More importantly, as each leaf connects to two other leaves, it is the beginning of a new, smaller binary tree. Due to this nature, it is possible to easily access and insert data in a binary tree using search and insert functions recursively called on successive leaves.
The typical graphical representation of a binary tree is essentially that of an upside down tree. It begins with a root node, which contains the original key value. The root node has two child nodes; each child node might have its own child nodes. Ideally, the tree would be structured so that it is a perfectly balanced tree, with each node having the same number of child nodes to its left and to its right. A perfectly balanced tree allows for the fastest average insertion of data or retrieval of data. The worst case scenario is a tree in which each node only has one child node, so it becomes as if it were a linked list in terms of speed. The typical representation of a binary tree looks like the following:
10
/ \
6 14
/ \ / \
5 8 11 18

The node storing the 10, represented here merely as 10, is the root node, linking to the left and right child nodes, with the left node storing a lower value than the parent node and the node on the right storing a greater value than the parent node. Notice that if one removed the root node and the right child nodes, the node storing the value 6 would be the equivalent of a new, smaller binary tree.

The structure of a binary tree makes the insertion and search functions simple to implement using recursion. In fact, the two functions are very similar. To insert data into a binary tree, a function searches for an unused node in the proper position in the tree in which to insert the key value. The insert function is generally a recursive function that continues moving down the levels of a binary tree until there is an unused leaf in a position which follows the rules of placing nodes. The rules are that a lower value should be to the left of the node, and a greater or equal value should be to the right. Following the rules, an insert function should check each node to see if it is empty; if so, it would insert the data to be stored along with the key value (in most implementations, an empty node will simply be a NULL pointer from a parent node, so the function would also have to create the node). If the node is filled already, the insert function should check to see if the key value to be inserted is less than the key value of the current node, and if so, recursively call itself on the left child node; if the key value to be inserted is greater than or equal to the key value of the current node, it should recursively call itself on the right child node.

The search function works in a similar fashion. It should check to see if the key value of the current node is the value to be searched for. If not, it should check to see if the value to be searched for is less than the value of the node, in which case it should be recursively called on the left child node, or if it is greater than the value of the node, recursively called on the right child node. Of course, it is also necessary to check that the left or right child node actually exists before calling the function on it.

Because binary trees have log (base 2) n layers, the average search time for a binary tree is log (base 2) n. To fill an entire binary tree, sorted, takes roughly log (base 2) n * n. Let's take a look at the necessary code for a simple implementation of a binary tree. First, it is necessary to have a struct, or class, defined as a node.

struct node
{
int key_value;
struct node *left;
struct node *right;
};
The struct has the ability to store the key_value and contains the two child nodes which define the node as part of a tree. In fact, the node itself is very similar to the node in a linked list. A basic knowledge of the code for a linked list will be very helpful in understanding the techniques of binary trees. Essentially, pointers are necessary to allow the arbitrary creation of new nodes in the tree. There are several important operations on binary trees, including inserting elements, searching for elements, removing elements, and deleting the tree. We'll look at three of those four operations in this tutorial, leaving removing elements for later. We'll also need to keep track of the root node of the binary tree, which will give us access to the rest of the data:

struct node *root = 0;
It is necessary to initialize root to 0 for the other functions to be able to recognize that the tree does not yet exist. The destroy_tree function shown below will actually free all of the nodes in the tree stored under the node leaf:

void destroy_tree(struct node *leaf)
{
if( leaf != 0 )
{
destroy_tree(leaf->left);
destroy_tree(leaf->right);
free( leaf );
}
}

The function destroy_tree goes to the bottom of each part of the tree -- that is, it recurses while there is a non-null node -- deletes that leaf, and then works its way back up. It deletes the leftmost node, then the right child node from the leftmost node's parent node, then the parent node itself, then works its way back to deleting the other child node of the parent of the node it just deleted, and continues this deletion working its way up to the node of the tree upon which destroy_tree was originally called. In the example tree above, the order of deletion of nodes would be 5 8 6 11 18 14 10. Note that it is necessary to delete all the child nodes to avoid wasting memory.

The following insert function will create a new tree if necessary; it relies on pointers to pointers in order to handle the case of a non-existent tree (the root pointing to NULL). In particular, by taking a pointer to a pointer, it is possible to allocate memory if the root pointer is NULL.

void insert(int key, struct node **leaf)
{
if( *leaf == 0 )
{
*leaf = malloc( sizeof( struct node ) );
(*leaf)->key_value = key;
/* initialize the children to null */
(*leaf)->left = 0;
(*leaf)->right = 0;
}
else if(key < (*leaf)->key_value)
{
insert( key, &(*leaf)->left );
}
else if(key > (*leaf)->key_value)
{
insert( key, &(*leaf)->right );
}
}

The insert function searches, moving down the tree of child nodes, following the prescribed rules -- left for a lower value to be inserted and right for a greater value -- until it reaches a NULL node (an empty node), which it allocates memory for and initializes with the key value, while setting the new node's child node pointers to NULL. After creating the new node, the insert function will no longer call itself. Note, also, that if the element is already in the tree, it will not be added twice.

struct node *search(int key, struct node *leaf)
{
if( leaf != 0 )
{
if(key==leaf->key_value)
{
return leaf;
}
else if(key < leaf->key_value)
{
return search(key, leaf->left);
}
else
{
return search(key, leaf->right);
}
}
else return 0;
}

The search function shown above recursively moves down the tree until it either reaches a node with a key value equal to the value for which the function is searching, or until it reaches an uninitialized node, meaning that the value being searched for is not stored in the binary tree. It returns a pointer to the found node, or 0, to the instance of the function that called it.
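To tie the pieces together, here is a short, hypothetical usage sketch. It assumes the struct node definition and the insert, search, and destroy_tree functions shown above are in the same file, along with #include <stdio.h> and #include <stdlib.h> (needed for printf and malloc):

int main(void)
{
    struct node *root = 0;
    int values[] = { 10, 6, 14, 5, 8, 11, 18 };
    int i;

    /* Build the example tree from earlier in the tutorial. */
    for (i = 0; i < 7; i++)
        insert(values[i], &root);

    /* search returns a pointer to the matching node, or 0. */
    printf("11 %s\n", search(11, root) ? "found" : "not found");
    printf("7 %s\n", search(7, root) ? "found" : "not found");

    /* Free every node to avoid wasting memory. */
    destroy_tree(root);
    return 0;
}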

Sunday, April 29, 2007

Debugging Strategies, Tips, and Gotchas

Debugging can be tedious and painful if you don't set up your programs to help you debug them. In the spirit of "an apple a day keeps the doctor away", this article suggests approaches to writing code that's more debuggable, explains how to catch problems before they start, and points out some time-wasting gotchas to watch out for.
Use the Right Tools
It should go without saying that you should always use the best tools available; if you're hunting a segmentation fault, you want to use a debugger. Anything less than that is unnecessary pain. If you're dealing with bizarre memory issues (or hard-to-diagnose segfaults), use Valgrind on Linux or Purify on Windows.
Debug the Problem
My first instinct when debugging is to ask, "Is my code too complicated?" Sometimes we all come up with a solution to a problem only to realize that the solution is really hard to get working -- so hard, in fact, that it might be easier to solve the original problem in another way. When I see someone struggling to debug a complex mass of code, my first thought is to ask whether there's a cleaner solution. Often, once you've written bad code, you have a much better idea of what the good code should look like. Remember that just because you've written it doesn't mean you should keep it! The trick is always to decide whether you're debugging the original problem or a particular choice of solution. If it's the solution, then it's possible that your problems don't stem from the problem at all -- maybe you're over-thinking it or trying a wrong-headed approach. For instance, I recently needed to parse a file and import some of the data into an Access database to prototype an analysis tool. My first instinct was to write a Ruby script that interfaced directly with Access and inserted all of the data into the database using SQL queries. As I looked at the support for doing this in Ruby, I quickly realized that my "solution" to the problem was going to take a lot longer than the problem should have taken. I reversed course, wrote a script that just output a comma-separated value file, and had my data fully imported in about an hour.
An Aside on Bad Code
People are often reluctant to throw out bad code that they've written and re-write it. One reason is that written code feels like completed work, and throwing it out feels like going backward. But when you're debugging, rewriting the code can seem more appealing because you're probably saving yourself debugging time by spending a bit more time coding. The trick is to avoid throwing out the baby with the bath water -- remove the bad code; don't start the whole program over again (unless it's rotten to the core). Rewrite only the parts that really need it. Your second draft will probably be both cleaner and less buggy than the first, and you may avoid issues like having to go back later and rewrite the code just so that you can figure out how it was supposed to work. On the other hand, when you're absolutely sure that code that looks horrible is the right code to use, you'll want to explain your rationale in a comment so someone (or you) doesn't come back later and hack it apart.
Minimize Potential Problems by Avoiding Copy-Paste Syndrome
Nothing is more frustrating than realizing that you're debugging the same problem multiple times. Whenever you copy and paste large chunks of code, you leave yourself open to the unknown demons inhabiting that code. If you haven't debugged it yet, odds are that you're going to have to. And if you forgot that you copied that code somewhere else, you're probably going to be debugging the same code more than once. (There are other reasons to avoid copy-paste syndrome; even worse than debugging the same code twice is finding the bug in only one piece of copy-pasted code.) The best way to avoid copy-paste syndrome is to use functions to encapsulate as much of your repeated code as possible. Some things can't easily be avoided in C++; you're going to write a lot of for loops no matter what you're doing, so you can't abstract away the whole looping process. But if you have the same loop body in multiple places, that might be a sign that it should be pulled into a separate function. As a bonus, this makes other future changes to your code easier and allows you to reuse the function without having to find a chunk of code to copy.
When to Copy Code
Although copying code is usually dangerous, there are times when it may be the best choice. For instance, if you need to make small, irregular tweaks to a chunk of code, but the bulk of it needs to remain the same, then copying, pasting, and careful editing might make sense. By copying the code, you avoid the chance of introducing new bugs by mistyping it. It should go without saying that you should have carefully debugged the code you plan to copy before you do so! (But I said it, and I'm not even paid by the word.) The second reason to copy code is when you have long variable names and a bad text editor. The best solution here is generally to get a better text editor with keyword completion.
Make Big Problems Found Late Small Problems Found Early
Testing Early
One advantage of pulling out code and putting it into functions is that you can then separately test those functions. This means that you can sometimes avoid debugging big problems caused by simple bugs in the original functions. Nothing is more frustrating than writing perfectly correct code given how you thought a function (or a class) worked, only to find out that it doesn't work that way. This kind of unit testing requires some discipline and a good sense of what can go wrong with your code. Another advantage of early testing -- especially if you write some or all of your tests up front, before the code -- is that you'll pay more attention to the specific interface of your class. If you can't test error handling because you're using an assert instead of an exception or error code, that might be an indication that you should be using some form of error reporting rather than asserts. (Of course, this won't always be the case -- there are times when you just want to verify that your asserts work correctly.) Beyond error reporting, writing tests is the first time you can test your code's interface, which is often as valuable as testing that the code works. If the interface to your class is clunky, or your functions have impossible-to-understand (let alone remember) argument lists, it might be time to rethink what you're doing before you write the underlying code.
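As a minimal sketch of testing a helper in isolation, here is an assert-based unit test in C; the clamp function and its expected values are illustrative stand-ins for whatever helper you've pulled out:

#include <assert.h>
#include <stdio.h>

/* Hypothetical helper under test: constrain value to [lo, hi]. */
int clamp(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

int main(void) {
    assert(clamp(5, 0, 10) == 5);    /* in range: unchanged */
    assert(clamp(-3, 0, 10) == 0);   /* below range: clamped to lo */
    assert(clamp(42, 0, 10) == 10);  /* above range: clamped to hi */
    printf("all clamp tests passed\n");
    return 0;
}

A tiny test program like this costs minutes to write and catches the "the helper doesn't work the way I thought" class of bug before it's buried under layers of calling code.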
Compiler Warnings
Many potential bugs can be caught by your compiler. Some such errors include using uninitialized variables, accidentally replacing a check for equality with an assignment in a conditional, or, in C++, errors related to mixing types such as pointers and ints. Since this has been covered before, I suggest checking out the article on why you should pay attention to compiler warnings.
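As a quick illustration, the following hypothetical C program contains both of the first two bugs; compiling it with warnings enabled (for example, gcc -Wall -Wextra -O2 file.c) flags them, though the exact messages and required flags vary by compiler:

#include <stdio.h>

int main(void) {
    int total;      /* never initialized before it is read */
    int limit = 10;

    if (limit = 0)  /* assignment where == was intended; -Wall suggests
                       parentheses around assignment used as truth value */
        printf("unreachable\n");

    total += limit; /* reads uninitialized total; gcc reports this with
                       -Wuninitialized, often only at -O1 or higher */
    printf("%d\n", total);
    return 0;
}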
Printf Lies
Because I/O is usually buffered by the operating system, using printfs in your debugging process is risky. When possible, use a debugger to figure out which lines of code are the problem rather than narrowing in on the issue with code littered with printfs and couts. (And beware the stray printf that slips in during debugging and, ahem, slips into the final version.)
Flush Output
Nevertheless, there are times when you actually need to keep track of some state in a log file -- perhaps you simply have too much data to collect, and you need the data from program start-up to the moment the bug occurs. To ensure you collect all of the data, be sure to flush it: you can use fflush in C, or output an endl in C++. fflush takes the FILE pointer you are writing to; for instance, to flush stderr, you would write:

fflush(stderr);
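Here is a minimal logging sketch in C showing the idea; the log_state helper and debug.log filename are illustrative. Without the fflush call, buffered output sitting in memory can be lost if the program crashes before the buffer is written out:

#include <stdio.h>

/* Hypothetical logging helper: record one step, then flush so the
   line survives even if the program crashes immediately afterward. */
void log_state(FILE *log, int step, double value) {
    fprintf(log, "step %d: value = %f\n", step, value);
    fflush(log);
}

int main(void) {
    FILE *log = fopen("debug.log", "w");
    if (!log) return 1;
    for (int step = 0; step < 100; step++)
        log_state(log, step, step * 0.5);
    fclose(log);
    return 0;
}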
Check Your Helper Functions
This should be obvious, but it's easy to forget in the heat of the moment: always verify that your helper functions work, especially when seemingly simple code is failing. When possible, isolate each helper function and test it individually, then test each of its helper functions. There's nothing more frustrating than realizing that your original logic was right, but your assumption about a helper function was wrong.
When Cause Doesn't Immediately Lead to Effect
Even if a helper function doesn't seem to be the immediate source of a problem, its side effects may cause latent problems. For instance, if you have a helper function that can return NULL and you pass its output into a library function dealing with C-strings, you may see the immediate cause as dereferencing a NULL pointer in strcat, but the real cause was the buggy function you wrote earlier (or the fact that you didn't check for NULL after calling it).
Remember That Code May Be Used in More Than One Place
Another problem that can come up when debugging is that you discover the problem appears to be the result of a particular function call, set a break point inside that function, and then discover that there are hundreds of calls to the same function throughout the code. Or worse, you don't notice this until you've wasted hours trying to figure out what's going on, or thinking that the reason for the problem is that the function is being called incorrectly (when, in fact, it's being called correctly but with different arguments than at the point where the bug occurred). The most obvious solution is to check the call stack after hitting the break point, or to set the break point right before the call that is actually the problem. Unfortunately, this doesn't always help if the same call works thousands of times but fails on the 1001st call. Potential solutions include counting the number of calls to the function and then stepping through that many break points set inside it, or using a static variable as a counter.
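A sketch of the static-counter trick in C follows; process, the call number, and the loop are hypothetical stand-ins for your real code:

#include <stdio.h>

/* Imagine this works thousands of times but misbehaves on one call.
   The static counter persists across calls, letting you single out
   exactly the call you care about. */
int process(int value) {
    static int call_count = 0;
    ++call_count;
    if (call_count == 1001) {
        /* Set a breakpoint on this line, or use a conditional
           breakpoint directly (e.g., in gdb:
           break file.c:LINE if call_count == 1001). */
        fprintf(stderr, "reached call #%d with value %d\n",
                call_count, value);
    }
    return value * 2;
}

int main(void) {
    int sum = 0;
    for (int i = 0; i < 2000; i++)
        sum += process(i);
    printf("%d\n", sum);
    return 0;
}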

Saturday, April 28, 2007

Makefiles

Makefiles are something of an arcane topic--one joke goes that there is only one makefile in the world and that all other makefiles are merely extensions of it. I assure you, however, that this is not true; I have written my own makefiles from time to time. In this article, I'll explain exactly how you can do it too!
Understanding Make -- Background
If you've used make before, you can safely skip this section, which contains a bit of background on using make. A makefile is simply a way of associating short names, called targets, with a series of commands to execute when the action is requested. For instance, a common makefile target is "clean," which generally performs actions that clean up after the compiler -- removing object files and the resulting executable. Make, when invoked from the command line, reads a makefile for its configuration. If not specified by the user, make will default to reading the file "Makefile" in the current directory. Generally, make is either invoked alone, which results in the default target, or with an explicit target. (In all of the examples below, % will be used to indicate the prompt.) To execute the default target:

% make
To execute a particular target, such as clean:

% make clean
Besides giving you short build commands, make can check the timestamps on files and determine which ones need to be recompiled; we'll look at this in more detail in the section on targets and dependencies. Just be aware that by using make, you can considerably reduce the number of times you recompile.
Elements of a Makefile
Most makefiles have at least two basic components: macros and target definitions. Macros are useful in the same way constants are: they allow you to quickly change major facets of your program that appear in multiple places. For instance, you can create a macro to substitute the name of your compiler. Then if you move from using gcc to another compiler, you can change your builds with only a one-line change.
Comments
Note that it's possible to include comments in makefiles: simply preface a comment with a pound sign, #, and the rest of the line will be ignored.
Macros
Macros are written in a simple x=y form. For instance, to set your C compiler to gcc, you might write:

CC=gcc
To actually convert a macro into its value in a target, you simply enclose it within $(). For instance, to convert CC into the name of the compiler:

$(CC) a_source_file.c
might expand to:

gcc a_source_file.c
It is possible to specify one macro in terms of another; for instance, you could have a macro for the compiler options, OPT, and the compiler, CC, combined into a compile command, COMP:

COMP = $(CC) $(OPT)
There are some macros that are specified by default; you can list them by typing:

% make -p
For instance, CC defaults to the cc compiler. Note that any environment variables that you have set will be imported as macros into your makefile (and will override the defaults).
Targets
Targets are the heart of what a makefile does: they convert a command-line input into a series of actions. For instance, the "make clean" command tells make to execute the code that follows the "clean" target. Targets have three components: the name of the target, the dependencies of the target, and finally the actions associated with the target:

target: [dependencies]
	[command 1]
	[command 2]
	...
Note that each command must be preceded by a tab (yes, a tab, not four, or eight, spaces). Be sure to prevent your text editor from expanding the tabs! The dependencies associated with a target are either other targets or files themselves. If they're files, then the target commands will only be executed if any of the dependent files have changed since the last time the command was executed. If the dependency is another target, then that target's commands will be evaluated in the same way. A simple command might have no dependencies if you want it to execute all the time. For example, "clean" might look like this:

clean:
rm -f *.o core
On the other hand, if you have a command to compile a program, you probably want to make the compilation depend on the source files to compile. This might result in a makefile that looks like this:

CC = gcc
FILES = in_one.c in_two.c
OUT_EXE = out_executable
build: $(FILES)
$(CC) -o $(OUT_EXE) $(FILES)
Now when you type "make build" if the dependencies in_one.c and in_two.c are older than the object files created, then make will reply that there is "nothing to be done." Note that this can be problematic if you leave out a dependency! If this were an issue, one option would be to include a target to force a rebuild. This would depend on both the "clean" target and the build target (in that order). The above sample file could be amended to include this: CC = gcc
FILES = in_one.c in_two.c
OUT_EXE = out_executable
build: $(FILES)
$(CC) -o $(OUT_EXE) $(FILES)
clean:
rm -f *.o core
rebuild: clean build
Now when rebuild is the target, make will first execute the commands associated with clean and then those associated with build.
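The sample above recompiles both source files on every build. As a sketch of how make's timestamp checking pays off (the per-object targets here are an illustrative extension of the sample, not part of it), splitting compilation into object files lets make recompile only what has changed:

CC = gcc
OBJECTS = in_one.o in_two.o
OUT_EXE = out_executable

build: $(OBJECTS)
	$(CC) -o $(OUT_EXE) $(OBJECTS)

in_one.o: in_one.c
	$(CC) -c in_one.c

in_two.o: in_two.c
	$(CC) -c in_two.c

Now touching only in_two.c causes make to rebuild in_two.o and relink, while in_one.o is left alone.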
When Targets Fail
When a target is executed, it returns a status based on whether or not it was successful -- if a target fails, then make will not execute any targets that depend on it. For instance, in the above example, if "clean" fails, then rebuild will not execute the "build" target. Unfortunately, this might happen if there is no core file to remove. Fortunately, this problem can be solved easily enough by including a minus sign in front of the command whose status should be ignored:

clean:
-rm -f *.o core
The Default Target
Typing "make" alone should generally result in some kind of reasonable behavior. When you type "make" without specifying a target in the corresponding makefile, it will simply execute the first target in the makefile. Note that in the above example, the "build" target was placed above the "clean" target -- this is more reasonable (and intuitive) behavior than removing the results of a build when the user types "make"!
Reading Someone Else's Makefile
I hope that this document is enough to get you started using simple makefiles that help to automate chores or to maintain someone else's work. The trick to understanding makefiles is simply to understand all of your compiler's flags -- much (though not all) of the crypticness associated with makefiles is simply that they use macros that strip some of the context from an otherwise comprehensible compiler command. Your compiler's documentation can help enormously here. The second thing to remember is that when you invoke make, it will expand all of the macros for you -- just by running make, it's very easy to see exactly what it will be doing. This can be tremendously helpful in figuring out a cryptic command.

Thursday, April 26, 2007

21 Ways To Promote Your Website - Part Three by Neil Stafford

In this last installment of the series, we'll take a look at the remaining seven ways you can promote your website.
Please note that the 21 ways we've looked at are by no means exhaustive and should be used to help you decide which way(s) you prefer and are comfortable using.
With that thought, let's move on to the final seven.
15) Reciprocal Links
This involves the swapping of links with other websites usually within your niche market; however, I've seen many reciprocal links on sites that aren't related.
Reciprocal link exchanges were originally used to build up your link popularity rating with the search engines, although over the last few years this has become less effective as Search Engine technology advances.
However, don't discount reciprocal links just yet.
On several of our sites, we still seek out high ranking and high traffic websites where we can both benefit from a reciprocal link exchange.
In this case, I'm not doing it for the benefit of search engine ranking but for the pure reason of traffic generation.
A link in a prominent place on a high traffic site will, by its very nature, generate traffic for you. And if you have your site set up correctly, you should be able to capture the visitor's name and email address.
16) Search Engine Listings or SEO
Let's get one thing straight - I am not a search engine expert by any stretch of the imagination. In fact, I heard the best definition of SEO last year, when it was called...
"Search Engine Optimist"
However, I do understand the principles and do put in place strategies to take advantage of the natural search listings.
The easiest way is to add content to your website and link this from an index page or site map on your website.
I run many websites that are single sales-letter-type sites, and many of them have a page rank of 3, 4 or 5.
However, behind these pages are several dozen content pages that are linked via an index page. By checking my site statistics I can see which pages are driving traffic and in many cases new sales.
When setting up a new sales letter site, I'll use PPC traffic to gauge how well it will perform and for the ones that do really well I'll spend the time adding content pages to sit behind the main site.
17) Classified Adverts
I'm not talking about classified adverts online but simple adverts in your niche market's magazines and publications.
This simple strategy has made us thousands in various niche markets and has even had the magazine editorial team contact us to see if we would like to contribute to the magazine itself.
In specialist magazines you can often place a classified ad for only a few pounds or dollars and know that it will be reaching your target audience.
The idea of these adverts is not to sell your product off the page but to drive the readers to your website and preferably a name capture page.
To do this, your advert must give the reader a compelling reason to put down the magazine, go online, and type in your web address.
18) Your Own Business Stationery
If you operate in many different niche markets and sell only digital products, then this may not be ideal for you. However, if you have a small number of markets, then at a minimum I'd have business cards printed with your web address on them.
In our main niche markets, we have business cards that have an offer and call to action on the back to encourage people to visit our sites.
We also have letterheads, and if we send out a physical product, we enclose an insert with the branding and an offer or another call to action for the customer to take.
19) Email Campaigns
If people are already on your email list, this shouldn't be the end of your traffic-driving process. By making sub-lists of your main list, you can send targeted messages that drive your customers and prospects to new sales pages and offers.
20) Forums
You may already know my view of forums within the Internet Marketing arena.
Forums in niche markets, though, are still an ideal way to drive traffic to your website. However, PLEASE don't go and blatantly promote your site on the forums... there are rules to follow.
First of all, find the forums in your niche and spend a bit of time 'lurking' reading the posts and watching how they are answered.
After a short time, you'll get the feel of the board and if they allow any promotion of websites or if you can use a signature file at the bottom of your post.
Once you understand the rules, start answering some of the questions on the board, and leave your URL and/or signature file at the end.
My own view with non-marketing forums is not to try to answer all the questions; answer only a few, and answer them completely with very good advice or suggestions.
This will get you noticed more and build up credibility with the other people on the board.
21) Conference Calls
Conference calls are an ideal way to build up an email list quickly and are an excellent relationship-building tool as well.
You can either have a free-to-join call or have attendees pay a fee to join you. Either way, I'd approach other 'players' within your market and ask whether they would promote your call for you. With a paid call, you can offer an affiliate deal.
The call should be on a specific topic and during the call you can make reference to several pages on your site or make a specific sales offer for attendees.
With a free call you can make the MP3 recording available afterwards encouraging your attendees to tell others about it. This will create a viral effect as the call gets passed around and in turn drive traffic back to your website.
I have an MP3 that I recorded more than 3 years ago that still drives traffic to one of my sites each and every week!
Summary
So there you have it, the conclusion of '21 Ways To Promote Your Website'. Which ones will you implement into your business?
Running a successful web business is simple, but not easy. If it was easy everyone would be doing it.
However, it is simple...
What could be simpler than having a product that people are actively looking for and letting them know where they can get it from?
About the Author
Neil Stafford is Editor and Publisher of the Internet Marketing Review the UK's longest running PRINTED Internet Marketing Newsletter. 'Test drive' the Newsletter for FREE - Visit this special web page for more information: http://www.InternetMarketingReview.com/sya