This whopper of an article evolved from a short conversation I had with some fellow geeks at a blogger meetup in London. We discussed the trends in the startup world and somebody said it would be cool to build a search engine. We did laugh at him, I have to admit! However, it got me thinking as I was driving back home.
Well, you'd think that in order for somebody to try and compete with Google, with the Big G, he would have to be out of his mind or be eating the wrong kind of mushroom. Yes and no. If you're serious about business, you have to treat your competition as motivation. Those who decide against starting their own business ONLY because there is a big competitor in the marketplace have got the wrong mindset! We know how easy it is to go from hero to zero – look at Reader's Digest or Motorola, to name a few…
Although Google is big, it's not the Internet – it only handles about 62% of the world's searches. Knowing a bit of psychology, you can actually understand that people looking to make a dent in Google's domination are not that crazy at all. They're simply using two of the most basic psychological principles: the Goliath Syndrome and fear. Fear is powerful; it can rule your mind. In this case I'm not talking about Googlephobia (if there is such a thing) – what I mean is a mild case of persecutory delusion.
A fear of being watched is perfectly natural and, unless it gets out of control, it's not something you need to see your doctor about. We all know that Google is watching us (tracking the searches we perform, measuring our online behaviour etc.). Type "water mattresses" into Google search and the subsequent websites you visit will display ads trying to sell you mattresses. It's not a nice feeling, and it raises questions about privacy and anonymity. Although Google (and the other big brands that track your online actions) isn't doing anything illegal, the overall concept of being tracked is very unsettling for most of us.
Going back to the Goliath Syndrome that I mentioned earlier, it's something that Sylvester Stallone has explored to the fullest. The moment Rocky lies on the canvas with a bloody face and the referee starts the count, you subconsciously want him to get back up. You will instinctively root for the underdog. This especially applies to the corporate world. Whenever a smaller company or a journalist takes on a mogul or an industry giant, we just go: "yee-haw, you show them!!!"
You cannot really envisage somebody taking part in the "Occupy!" movement and then going back home and using Google to search for stuff. They'll be using blekko or DuckDuckGo, and it's highly likely that the operating system on their machine wasn't made by either Apple or Microsoft.
So, what alternatives do we currently have?
Both blekko and DuckDuckGo receive good press because they sit at the opposite end of the scale to Google when it comes to privacy. We all know that Google tracks almost every move we make. Blekko and DuckDuckGo don't. I consider it a niche approach, but it has the potential to expand.
The flaw of this niche model is that the people who have decided to shun Google because of the privacy issues are not the buying kind. A search engine needs the buying kind to visit it in order to make ends meet. A search engine cannot exist on an altruistic basis, because sooner or later it will have to monetize its traffic. Here's an interesting article if you want to learn more about DDG.
Don’t get me wrong, it is possible to monetize traffic without tracking people. Absolutely! If I search for butterfly nets and you serve me relevant commercial results in a clearly defined “sponsored” section that doesn’t interfere with my organic experience, there’s no problem with that. I don’t want you to start peddling me those butterfly nets after I’ve left your search engine.
I'm not even supposed to like DuckDuckGo because they don't transmit the keyword information. Although, after Google's recent "keyword not provided" stunt, the two engines are equally SEO-unfriendly. I think they're taking the whole privacy thing a step too far. Theoretically, if your computer has a unique IP address (which is unlikely), I could identify you when a search keyword is transmitted to my webstats software. But then, what is the likelihood of me wanting to identify you in the first place? All I want to know as an SEO is what keywords were used to find my site.
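To put that in concrete terms, here's a minimal sketch (my own illustration, with hypothetical referrer URLs) of all the keyword data a webstats package actually gets – it simply reads the query string of the referring URL. DDG doesn't send one, and Google now strips it for most searches:

```python
from urllib.parse import urlparse, parse_qs

def keyword_from_referrer(referrer: str):
    """Pull the search phrase (if any) out of a referring URL."""
    query = parse_qs(urlparse(referrer).query)
    # Most engines pass the search phrase in a "q" parameter.
    return query.get("q", [None])[0]

# A keyword-carrying referrer, pre-"not provided" (hypothetical URL):
print(keyword_from_referrer("http://www.google.com/search?q=butterfly+nets"))
# -> butterfly nets

# A DuckDuckGo referrer arrives with no keyword at all:
print(keyword_from_referrer("https://duckduckgo.com/"))
# -> None
```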
But I still want DDG to do well, if only to see somebody levelling the playing field and setting a precedent that encourages other people. In the long run, however, I don't believe that these super-privacy search engines will last. We need to reach a golden mean that keeps the visitor happy and keeps online businesses alive. I want DDG to do well even though they often rank sites with "untrustworthy" WOT ratings above the "trustworthy" ones.
As far as geo-targeting is concerned, DDG is getting there. All I have to do is select my region and it will serve results accordingly. When I say it's getting there, I mean its current region targeting works at country level only.
A crucial part of success for DDG is going to be getting other software providers to select it as the default search engine. (Here’s a cheeky tip on how to set it as default for your Chrome browser)
blekko's privacy policy is more SEO-friendly and business-friendly. They give you full control over your privacy. If you're happy to go with their default privacy setup, they will keep your search behaviour for 4 hours and then delete it; IP addresses are kept for 48 hours. That's a far more balanced model. They have a good business model and they know where they're going.
I'm not sure about the relevancy, though, with all this result-categorisation business with colours and related categories. At this moment I couldn't switch to blekko as my only search engine. On a side note, the results are pretty relevant if you don't care about geo-targeting. Can I really buy cars from Australia if I'm located in the UK? No, blekko, no! Then why are you serving me those Australian results?
Nevertheless blekko has 12.5 million users and they’re serving 5 million results every day. Although that probably suggests they only have a few million active users, it’s still a great achievement and I hope they find a way to increase their user base.
From the SEO perspective, by default blekko isn't recognised as a search engine by your web tracking software. If you use Google Analytics, there is a workaround – filters that you can apply from your Admin -> Filters panel (see Option 2 in the referenced article).
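The underlying reason is mundane: analytics packages keep a lookup table of known search engines and their keyword parameters, and blekko simply isn't in it out of the box. Here's a rough sketch of the idea (not Google Analytics' actual internals – the hostnames and blekko query parameter are illustrative):

```python
from urllib.parse import urlparse, parse_qs

# The kind of lookup table analytics tools ship with; blekko has to be
# added by hand, otherwise its visits get logged as plain referrals.
KNOWN_ENGINES = {
    "www.google.com": "q",
    "www.bing.com": "q",
    "search.yahoo.com": "p",
    "blekko.com": "q",        # <- the manual addition
}

def classify_visit(referrer: str):
    """Label a visit as organic search (with its keyword) or a referral."""
    parsed = urlparse(referrer)
    param = KNOWN_ENGINES.get(parsed.netloc)
    if param is None:
        return ("referral", None)
    keyword = parse_qs(parsed.query).get(param, [None])[0]
    return ("organic", keyword)

print(classify_visit("https://blekko.com/ws/?q=butterfly+nets"))
# -> ('organic', 'butterfly nets')
```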
The Facebook Graph Search has all the prerequisites to be amazing but there’s an issue.
I know that Facebook is not trying to be a search engine, but assuming they're heading that way, here's the problem they have: people visit Facebook for reasons very different from gaining information or making a purchasing decision.
People use social media to have a good time, stalk their ex, boast about the pizza they’ve just had, post a Lolcat pic and so on. They don’t come to Facebook to get answers. If they ever want to integrate a proper search engine and get people to use Facebook as a destination for answers, they will have to employ somebody like Robert Cialdini and turn the whole concept inside out. Will that spook away a certain portion of its current users? Possibly.
However, with so many teenagers dumping Facebook, they might arrive at a very different model naturally. Besides, organic search is a mature man's territory. The younger generation of web users would rather take the social media route to discovering things. If Facebook ever fancies a shy at a more "searchy" approach, the fact that organic search was 6% down in 2013 compared to 2012 really plays into their hands!
OK, OK, I admit this is purely speculative. However, if there's anybody out there powerful and innovative enough to take on Google and Bing, it's Apple for sure!
Yahoo! once used to be the world's most popular website. Now it's far from that. It's been serving a mirror of Bing's search results since 2010, but Yahoo!'s new CEO Marissa Mayer has indicated that she's not happy with the partnership because she can't see Bing committing to increasing market share. But it's not this fact that tells me something might be brewing in Sunnyvale. It's actually their latest acquisition – the Intelligent Homescreen. It's an app that "stalks" your every move and then brings up information on your smartphone screen at the moment it's relevant. Isn't that what we ideally want from a future search engine? Mayer apparently thinks the homescreen could be used to deliver intelligent info on more devices than just a smartphone. Either I'm imagining things or this is really exciting.
Earlier in 2013 Yandex overtook Bing to become the 4th most popular search engine in the world.
This is what the balance of power looks like right now
Yandex basically has two divisions – Yandex.ru, launched in 1997, which is the #1 most visited website in Russia (#20 worldwide), and Yandex.com, the English version launched in 2011. Since then it has slowly but steadily climbed up the ranks. Yes, I will have to admit that this is mostly thanks to the ever-strong, intelligent and slightly rebellious Russian diaspora living in many (read: all) countries of the world. Yandex.com is currently within the Top 5,000 most visited sites in as many as 39 countries, including Thailand, the Philippines and Turkey – places where there are very few Russian people. Yandex went public in 2011, in the 9th biggest internet IPO in history.
Can Yandex become a major player in the global search market? Absolutely! They have no issues with relevancy, they display 20 results on the first page by default, the interface is very simple and search-centred, and there is an alternative to Webmaster Tools. My current issue with Yandex.com is geo-targeting. If you search for products that you cannot use unless you're in the same country as the business (like finance or florists), the results aren't very helpful. It doesn't even start geo-targeting when I'm logged in. Yandex.ru is much better at understanding the searcher's location, so why is the .com version still lacking?
On a more positive note, they're beta-testing a service called Islands. It's an interactive search facility that is supposed to help you complete your search goal instead of just returning a list of often-irrelevant results. From a webmaster's perspective it's worth pointing out that Islands supports both the Open Graph protocol and Schema.org markup.
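For anyone who hasn't played with structured data yet, here's a tiny, purely illustrative sketch (the product and URLs are made up, and I'm not claiming this is what Islands specifically requires) that prints an Open Graph meta block and the same information expressed as Schema.org JSON-LD:

```python
import json

# Open Graph: simple <meta> properties placed in the page <head>.
og_tags = {
    "og:title": "Butterfly Net Deluxe",
    "og:type": "product",
    "og:url": "https://example.com/butterfly-net",   # hypothetical page
    "og:image": "https://example.com/net.jpg",
}
for prop, content in og_tags.items():
    print(f'<meta property="{prop}" content="{content}" />')

# Schema.org: the same product described as JSON-LD.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Butterfly Net Deluxe",
    "image": "https://example.com/net.jpg",
    "offers": {"@type": "Offer", "price": "9.99", "priceCurrency": "GBP"},
}
print('<script type="application/ld+json">')
print(json.dumps(product, indent=2))
print("</script>")
```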
Yandex also seems intent on shaking up the SEO industry – well, at least the black hat part of it. To that end, they have announced an exciting experiment: starting from January 2014 they are going to drop all external-link-related ranking factors for commercial searches within the Moscow region.
Currently Yandex uses some 800 different factors to rank pages. Around 50 of them are based on external links. You'd think that 750 ranking factors would be enough to gauge the quality and relevancy of web pages. Exactly! It might be surprising, but social signals aren't going to be part of the ranking algo either. Yandex have confirmed that although they have access to Facebook and Twitter data, they haven't found social signals at all helpful. Huh? Really?
What they have found helpful, however, is user behaviour signals. It is said that Yandex have created a sort of "template" of what an ideal page looks like and now compare your page against it; the closer the match, the closer you'll be to the top. Obviously, this is a very simplistic way of explaining things. There are hundreds of other factors, and the "template" differs according to the user's intent and the topic of the search term.
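Just to make the idea tangible, here's a toy sketch of what "comparing a page against an ideal template" could look like – my own simplification, with made-up signal names, not Yandex's actual algorithm. Both the page and the template are reduced to a handful of normalised behaviour signals, and the match is scored with cosine similarity:

```python
import math

# Hypothetical behaviour signals, each normalised to a 0..1 range.
SIGNALS = ["dwell_time", "scroll_depth", "return_rate", "content_depth"]

ideal_template = {"dwell_time": 0.9, "scroll_depth": 0.8,
                  "return_rate": 0.7, "content_depth": 0.85}

candidate_page = {"dwell_time": 0.6, "scroll_depth": 0.75,
                  "return_rate": 0.4, "content_depth": 0.9}

def match_score(page: dict, template: dict) -> float:
    """Cosine similarity between the page's signal vector and the template."""
    a = [page[s] for s in SIGNALS]
    b = [template[s] for s in SIGNALS]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# The higher the score, the closer the page sits to the "ideal" for that
# intent/topic; in reality the template itself would differ per query.
print(round(match_score(candidate_page, ideal_template), 3))
```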
I’m excited. And I shouldn’t really be excited because link acquisition for clients is one of the ways I generate income. Nevertheless, I’ve always thought that link-based ranking was stupid. Extremely stupid. Take the PageRank concept out of the sterile library world and it can be manipulated. I’m not even talking about the no-no methods that Google lists in its Webmaster Guidelines.
Let's pretend all the world's blackhat SEOs die under strange circumstances, so exchanging money or other incentives for links is now impossible. Compare two websites, both with equally high-quality content and an identical level of user experience (I know that's technically impossible, but then what's the likelihood of all blackhats dying a strange death?). One is run by a one-man band and the other by a large company employing 10 PR/outreach specialists. Which one is going to acquire more links?
Yandex is hitting the nail on the head, and if the experiment succeeds and is rolled out globally, this is going to be the biggest game-changer in the SEO world since Google's black-and-white animal "tragedy" (Panda and Penguin).
I couldn’t possibly skip Bing. Although it’s not a startup and not new in the field of search engines, Bing is probably the only alternative to Google that can currently be used as somebody’s only search engine.
From a searcher’s perspective, Bing has got several advantages.
Three areas where Bing falls short:
And the main reason why the little guys can have a chance in the long term is that…
I'm not having a go at Google. In fact, it may seem funny, but this article is not about Google 🙂 As things stand, Google may be the best search engine out there: they deliver the results, they're QUICK, they've honed the user experience. But there are also some things that they do wrong, leaving the door open for a competitor, and because of the corporate mindset I don't see them tackling these issues any time soon.
Currently, unless you're really savvy with your cookie management, whenever you search for something on Google, you will be served relevant (or annoying – delete as appropriate) ads on every subsequent site you visit that takes part in Google's ad network. A few years ago, these ad networks would have shown you advertisements directly related to the content they sat next to. This old model was better for the site owner and, you could argue, less annoying for the visitor. The chap who was dissatisfied was the advertiser, so Google decided to change the rules in the advertiser's favour. Is that legal? In many cases, yes. Although the EU is apparently fed up and is taking Google to court on charges related to privacy regulation breaches. It looks like the European regulators are in for a lengthy battle. Don Quixote springs to mind…
The job of a search engine is to serve results. Just like it was with the good old AltaVista. It was a drastically minimalist approach with just the search box and result listings. Both Google and Ask.com are getting this wrong by providing instant answers to common questions like “What’s the weather in Denver?”, “How much is 40 pounds in dollars”, “what is demiurge” and lately “credit cards.”
So what is left for the dictionaries, calculator sites, weather portals and credit card comparison engines to do? Go out of business, is the answer. One could, of course, argue that it is extremely stupid to build a business that relies entirely on one source of traffic, and one would surely be right to say so.
Having said that, DDG serves content too. Or is trying to – via their Goodies section. This is both useful and annoying. If you ask anyone who’s not involved in web marketing, they’ll say that this is oh so convenient!
Google has done a tremendous job fighting cyberspam (and taking hundreds of thousands of amateur white-hat site owners down as a side effect), but because its algo still largely depends on links, the search results are inconsistent. If you're searching for fresh content (unless you know how to use the advanced date ranges), you will be given old data. Because a forum post written in 2003 has accumulated more links, it will be served far ahead of a very recent forum post.
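For what it's worth, you can force freshness yourself with Google's date-range filter – either via the Search tools menu or the tbs URL parameter. A quick sketch (the query itself is just an example):

```python
from urllib.parse import urlencode

# tbs=qdr:h / d / w / m / y restricts results to the past hour, day,
# week, month or year respectively.
params = {"q": "butterfly nets forum", "tbs": "qdr:w"}
print("https://www.google.com/search?" + urlencode(params, safe=":"))
# -> https://www.google.com/search?q=butterfly+nets+forum&tbs=qdr:w
```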
“The internet is fast becoming a ‘cesspool’ where false information thrives,” Google CEO Eric Schmidt said… “Brands are the solution, not the problem,” Mr. Schmidt continued. “Brands are how you sort out the cesspool.”
Really? I couldn’t disagree more! The internet is driven by information, the brands are driven by money. The brands want to give you as little truthful information as possible. But… because the whole idea of the internet is exchanging information, you as a user want to get access to as much truthful information as possible.
Can you imagine Kellogg’s telling you about the disadvantages of palm oil? Of course not because some of their products do contain palm oil. Would you want to be served an article about palm oil written by Kellogg’s only because it’s a big brand?
If the big brands were to adhere to the golden principles of the internet, they would have to tell you about the way they cut corners. They don’t want to tell you everything. Instead they will employ clever PRs, who will spin the story so that you are enticed to buy more stuff. Is this the idea of the Internet? Certainly not.
Can any startup search engine eliminate these shortcomings? I'd guess not at this point, but in a couple of years' time, who knows…
Oh, let's bring those old times back. Yeah, I admit, I'm a bit nostalgic. I keep telling myself that the change has done us good and that keyword stuffing, which used to be the top SEO trick, was ridiculous and didn't add much value to the user experience. But still, I sometimes dream that I could open my Netscape browser and enter the following address in the bar: infoseek.com.
Founded in 1994, it used to be big and popular. For its time, Infoseek.com had some pretty clever search functionality. Around 1997 it was attracting more than 7 million visitors per month.
I won’t deny, it was my favourite search engine because it had all those hidden extras that very few outside of the SEO community knew about. You could search for multimedia and they had the “link:” operator that told you who linked to whom. I was so excited about it back then!
In 1998 it was bought by Disney and merged into Go.com. The Infoseek brand name is still big in Japan – infoseek.co.jp remains within Top100 most visited Japanese sites. Currently, Go.com is a web portal using Yahoo! search and it is mainly famous for having an early 1990s look and autoplaying super-loud extra-annoying videos on arrival.
Why did Infoseek fail? Well, I'll blame the corporate world. How original, I know! In 2001 a dozen Infoseek employees attempted an aggressive buyout, but Disney swiftly extinguished the fire. The problem was that Disney had no strategy and no idea what to do with a search engine.
It was founded in 1994 by a team of bright Stanford University students. Within the next year it attracted millions in investment. In 1996 they purchased another two early search startups – Magellan and WebCrawler. Despite having landed deals to become the exclusive search provider for AOL, Netscape (still huge in the late 1990s), Microsoft WebTV and Apple, Excite always struggled financially. It was an OK-ish search engine with a database of 250 million entries, but it couldn't figure out how to monetize itself properly. By 1998 they were already making a steady loss, but because of their database and exclusive deals, the ailing giant became a coveted prize for many corporate players. Yahoo! wanted Excite but didn't get it. The ISP @Home wanted Excite and got it… in a whopping $6.7 billion stock deal. In its heyday Excite attracted 16 – 19 million visitors per month!
In 2001 the giant went bust and was subsequently acquired by Ask Jeeves. It never recovered even a fraction of its early popularity. Currently, it is owned by MindSpark and it seems it’s trying to compete with go.com for the title of the most ridiculous portal design. The search engine aggregates results from Google, Yahoo! and Yandex.
One of the coolest urban legends is the story of how two students, Larry Page and Sergey Brin, approached Excite@Home in 1999 wanting to get rid of the little search engine project they'd been working on because it was taking too much time away from their studies (crazy geeks, LOL). They were negotiated down to $750,000, but the deal never went through.
You may wonder why I called Excite an OK-ish engine. After all, it had created the so-called "Intelligent Concept Extraction" (ICE) to establish relationships between words and ideas – it's the granddad of semantic search! Nevertheless, it didn't support stemming, couldn't return multimedia results, had very basic Boolean support and in many cases returned rubbish results. Wanna rank high on Excite? Just add more keywords!
The main reason for the downfall? I think they were trying to be too many things at once. And they were trying to sound bigger than they actually were. You don’t employ 700 people full-time if your revenue is just $44 million per year! That’s just crazy!
Probably the best case study for those thinking of launching a search engine. The sad story of Lycos shows you what becomes of a business without proper strategy, planning and vision.
The beginnings of Lycos lack originality, really – it's another university startup with $2 million of venture capital behind it. By early 1998 it had grown into a giant that wanted to get even bigger. To sustain its huge appetite, Lycos went berserk, spending billions of dollars on any web-based company it could lay its tentacles on. Yahoo! had GeoCities? Lycos surely wanted a bit of that – that's why they bought Tripod and Angelfire. At the same time, they didn't really push their brand as the ultimate search engine. And, don't get me wrong, the Lycos search was good. It had Boolean search, it had various search operators and it was pretty reliable. The really cool feature that deserved to be followed through was the "Top 5%" – hand-picked, spam-free websites reviewed by editors.
The heyday of Lycos was 1999. Its market value was $3.5 billion and with 7 million visits a day it was the second most popular web giant. The first being? Exactly – Yahoo! (According to some sources, Lycos even overtook Yahoo! to become the world's most popular website, but I haven't found any proof to say that for certain. Maybe the readers can help clarify this?)
In 2000, just as the dotcom bubble was about to burst, the Spanish telecommunications giant Telefonica purchased Lycos for $12.5 billion – more than 3.5 times its market value. Over the next four years it went downhill. Eventually, Telefonica sold it to somebody in South Korea for $95 million, or less than 1% of the initial investment. Man, that was even worse than when Mr. Bad News (AKA Rupert Murdoch) decided to dabble in the online world, purchasing MySpace for $580 million only to sell it a few years later for 6% of its original value.
Since then, Lycos has changed owners and direction several times. It has shed almost all of its original web properties, keeping only Tripod and Gamesville. Design-wise, at the moment it is trying to look like Technorati. How strange is that – it has changed hands many times but still hasn't found its identity and direction.
When AltaVista launched in late 1995 it was considered the most advanced search engine out there. It had a state-of-the-art crawler that could crawl and refresh far more webpages than any of its rivals at the time. The engine had probably the most advanced search features of all its competitors: you could limit your search by time, search across different media, narrow and refine the results, and search through news, forums and shopping. AltaVista had wildcard search and it supported Boolean operators.
Everything went well until it was sold to Compaq in 1999 and the latter decided to compete with Yahoo! and launch a portal. By 2002 the mothership understood that the portal was unsuccessful and, what's worse, while they were chasing the rainbow, an obscure search engine called Google overtook AltaVista by search volume for the first time. They decided to strip the service back to basics, but it was too late. AltaVista used to power the Yahoo! results, then it was acquired by Yahoo! and eventually killed by it. And AltaVista is no more...
Teoma wasn't necessarily the most successful of the early search engines, but I couldn't possibly have excluded it from this overview. They were the first to develop a subject-specific link popularity algorithm, called Subject-Specific Popularity, later rebranded as ExpertRank by its current owner, Ask.com.
Ask Jeeves itself was launched in 1996 as a search engine focused on answering questions. Despite owning Teoma's unique link relationship algo, Ask has never managed to gain enough popularity. Currently the brand is owned by InterActiveCorp. You could consider IAC some sort of a web giant: they own sites like About.com, Match.com, Vimeo, Meetic, Newsweek, Dictionary.com and many, many others. They are making good money, but in 2010 they chickened out and decided to kill Ask.com as an original search engine. Instead they decided to concentrate on Q&A, serving Google results for normal web search. A little bit pathetic – I expected a good fight!
Another little engine that struggled to find its own identity. It was launched by the founder of Wired magazine in 1996, and from its inception until its takeover by Lycos, HotBot served results from Inktomi (like many other search engines did), LookSmart, Direct Hit and the ODP. Feature-wise, it had a little gem in NewBot (a similar concept to Google News), but it never got enough prominence. Overall, the engine was simply not original enough to last long.
And, man, there were more. If I were to mention all of them, you'd simply start snoring: JumpStation, Dogpile, LookSmart, Inktomi, Mamma, Splut, Direct Hit, MetaCrawler, SavvySearch, Thunderstone, MetaFind and many, many more. They failed at the main metric – providing value for the user. They didn't give users what they really wanted. Google did, and that's why it has 62% of the market share!
WiseNut didn't carry any of the baggage of its 1990s predecessors. It was a post-dotcom-bubble search engine launched in late 2001. Unlike some other search engines of the early 2000s, WiseNut had its own crawler and an impressive database of more than 1 billion URLs. Compared to Google, though, it offered less functionality – for example, it lacked Boolean search. It was pretty obvious that WiseNut was quite a mediocre search engine. That, however, didn't deter LookSmart from buying it for $9 million. Not very smart! LookSmart struggled with WiseNut for the next 5 years until they finally decided to pull the plug in 2007.
One of the most recent attempts at establishing an alternative search engine. Searchme was a visual search engine with the results being displayed as a stack of website previews. Its founders thought that people would be interested in previewing the websites before they chose which one to visit.
They only had to look back to recent history to find out that people didn’t really care about previews. In 1994 Excite hired editors and journalists to mass-produce short reviews of the websites in its index. Hundreds of thousands of reviews and millions of dollars later they had to admit that it wasn’t after all such a good idea and that people didn’t want a review, they wanted the real thing.
Anyway, by 2008 Searchme had managed to attract $31 million in investment, including money from an early investor in Google. By early 2009 they were serving some 200,000 – 300,000 searches per day which, although it sounds like a lot, was one tenth of what they needed to break even. The server costs alone were enormous, considering they needed to keep rendering visually rich content. No wonder that by the end of 2009 they had to close down.
If you want to check what visual search is all about, hop over to RedZee. It's inspired by Apple's Cover Flow and its mascot is really cool – that's exactly how I would imagine a zebra had I not seen one at the London Zoo. Although it is still somewhat popular in the USA and Pakistan, it's difficult to see how RedZee could go mainstream. It seems their chances were destroyed in 2009 when allegations of an advertising scam surfaced.
You know what would be really funny? If, say, 3 or 4 of the vintage search engines decided to join forces, build a good-quality bot and emerge with a competing product. Sites like Excite, Lycos and Dogpile still have a significant audience regardless of the fact that they're utter rubbish – and I'm talking many millions of visitors per month. Certainly, if you want to launch a search engine, having an initial audience to cater for is a great head start.
Currently, surfers around the world produce 161 billion search queries per month, and Google handles 62% of those. Even if you only managed to capture the number of search queries that Searchme needed to break even – 90 million per month – that's just 0.056% of the total world search market. When you consider that you could make money with such a negligible search share, it's surprising that there are so few companies taking on Google. Of course, I'm trying to make it sound much easier than it actually is! 🙂
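If you want to sanity-check that figure, the arithmetic is straightforward (my own back-of-the-envelope check, using the numbers above):

```python
world_queries_per_month = 161_000_000_000   # 161 billion searches worldwide
break_even_queries = 90_000_000             # ~3 million a day, per the Searchme example

share = break_even_queries / world_queries_per_month * 100
print(f"{share:.3f}% of the world search market")   # -> 0.056%
```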
The purpose of this little detour into the history of the search engines was to show that the only way for a business to succeed is to have a vision and a clear-cut action plan. When Google started out, their goal was to create THE Search Engine, and they began branching out into other areas ONLY after the main goal was attained, whereas everyone before them had been trying to grab as much as possible. In such cases I always mention the example of the motoring maverick John Z. DeLorean: when his car manufacturing business started to struggle, what did he do? Did he concentrate on the cars? Ha – in fact he was found to have his fingers in no fewer than 36 additional business pies… yeah, not a great strategic move.
I asked the experts what they thought about the future of search. Some of the ideas are truly inspirational. I hope you do have a free afternoon 🙂 Please note that these are quotes that I publish without editing (apart from cutting down) and that I don’t always agree with the experts. Equally, the experts haven’t seen my article before publishing and they’re not responsible for any statements I’ve made.
These are the questions I asked the experts and some of their answers are pretty stunning. I guess that’s why they’re called experts.
I’m mainly interested in your take on “then and now”. There were at least 10 “large” search engines competing for traffic in the 1990s. It has all changed now. Do you think it is good or bad for the user?
Would you consider it a positive thing if Google was actually pushed hard by a couple of worthy competitors? What is it going to be like in 2023? One omnipotent Google or will history repeat itself and we’ll see 10 engines competing?
I remember someone saying around the end of the 90s that there were “14 engines important enough to submit to manually”.
From today’s perspective, those were very innocent, naive times. The algos were simplistic, deriving from information retrieval technology. The web was so small that you could almost make a title change and submit to, say, Infoseek and watch your rankings change. Today, the scale of the web is completely different. In the year 2000, Google’s entire index was roughly a billion pages. In 2012, Google was indexing 20 billion pages a day. That’s an astonishing difference. The algorithms of the 1990s would be completely inadequate today.
You could definitely watch keyword density in action. One of the most outrageous demonstrations of keyword density that I remember was an example of a page with a single word (I think an h1) ranking. This was before Google bombing. Much of what was called SEO was essentially a discussion of how much on-page, on-site manipulation would be tolerated before you crossed the line.
Getting into the Yahoo Directory was considered a big thing, and it did drive traffic. The Yahoo Directory was human reviewed, and it often provided a better experience than search did.
Google completely changed the game when it became Yahoo’s search engine, and the difference was obvious to users. My guess is that while Google’s early success had to do with PageRank and other aspects of the algorithm, part of Google’s success may be owed to having the Yahoo Directory as a seed set for the index.
I’ve always preferred diversity as a long run survival mechanism. In the long run, I don’t think that those 10 or 14 old-school engines provided any real diversity. The early search algorithms did little to boost site quality.
The introduction of link relevance signals and considerations of page importance were a major gain for users, not only improving search results, but also improving the quality of the web. The approach of creating high quality content to attract inbound links became a part of many SEO strategies.
Google in particular is continuing to push not only toward what it sees as better quality content, but toward better user experience… and it is now employing personalization and big-data modeling of searcher behavior as part of its algorithm. The approach has been controversial among many, who see it as a threat to privacy and perhaps to individual freedom.
It’s necessary to point out that in the years between then and now, the internet has brought about huge societal changes, with many business areas, like music and publishing, completely changed by the web… and concepts of intellectual property challenged. There has been both great consolidation and fragmentation. Questions of “good or bad for the user” inevitably are going to get tangled up with these changes, and whether they’ve been caused by the internet or by individual search engines.
I’d very much like to see some stronger competition in the search area. I suspect that Google sees the potential of competition, albeit probably not simply in phrase-based search.
It’s likely that any new search enterprise is going to be resource intensive. Google has a giant lead on most competitors with regard to its own intellectual infrastructure, and I don’t think that can be easily circumvented. I don’t know whether it’s likely in this area that some “nimble startup” is going to manage an end run.
It may be that any new engine will start from a niche area and move into general search. In the mobile area, eg, I’d guess that Apple is Google’s most likely competitor. In the ecommerce search area, Amazon perhaps has the capability, though there may be no reason for them to help you find their competitors. Bing has some very good people, and, conceivably, a combination of Facebook and Bing might give Google some worthy competition. Of English language search engines out there apart from Bing, Yandex results are occasionally interesting enough that I feel I should take a deeper look. Hard to say whether Blekko will ever emerge as an engine that a lot of people want to use. Yahoo is making some moves suggesting that they may come back, and I’d love to see what they’d do.
Currently, there is no engine even remotely close to Google in understanding context. I think it’s extremely unlikely we’ll go back to 10 “big” engines. I can imagine perhaps three or four big engines evolving from the current field in the next ten years, and I can imagine all sorts of possibilities for different kinds of search.
More competition is always good for the user. It would be great for Google to have some competition. It will need to be someone who has a different approach to search using new proprietary data – right now Facebook is probably the only contender.
In 10 years search could be always on and real time, meaning that a search engine is always presenting things relevant to what you are doing based on your activity, for example right now as I email this to you a search engine could be showing me information in the sidebar related to what we’re talking about and help me add value to what I am doing in real time.
I definitely think the one search-engine market is detrimental to users. Google would not be as aggressive with their ads if there was more potential repercussion – they haven’t really “stepped over the line” with how they monetize, but they are walking it. However, I still think they do an incredible job – they simply have to be, or wouldn’t be dominating as they are.
Search engine upstart DuckDuckGo seems like the best possible challenger to Google. Their traffic is going up and they have a good process, but I still think we will see Google at #1 in ten years. I just hope they have a smaller than monopolistic market share then, as they do currently.
I feel that having one major search engine is good for the searcher as a company like Google is capable of spending significant resources to perfect the search results for the searcher. The results from Google are getting more and more robust and the experience for the searcher continues to improve vastly.
However, the experience for the content creators on the Internet, the webmasters/owners of websites, is getting worse and worse as Google continues to dominate the search industry in the United States. We see Google making it increasingly difficult for smaller websites to gain prominence in search results. Established brands and established quality destinations maintain their position in the search results. Again, this is fine for the searcher as their answer to their question or product they wish to purchase is easily found to a satisfactory level. Google, in essence, is doing their job.
But it does not create the opportunity that it once did for smaller players to enter a large competitive space as easily. Is this something that should be corrected? Probably not. It is a common scenario for any maturing marketing channel. An area of concern however is Google's movement towards being a content provider within the search results using partners or information gathered from crawling the Internet.
One such example is found in the travel industry and another is found in a search for facts. Google claims this is provided only to improve the searcher’s experience, and perhaps it very well does. But does it create a fair opportunity for those that lose significant traffic, exposure and, ultimately, revenue from changes that Google can make without warning?
A business is best to remember that although Google may be a great source of traffic, exposure and revenue for a business, the business must take the time, energy and resources to strengthen other areas of marketing (both online and offline) to create a stable business.
There are many clients that Loud Interactive deals with that have over 40% of their revenue coming from Google. Many other clients have contacted Loud Interactive in order to help regain losses they have had in months or years past. This can be a scary situation for businesses, one that Loud Interactive is happy to help resolve (and quite often does), but one that can be avoided by having a balanced online and offline marketing strategy.
As for the future, China’s Baidu will be an interesting player to watch for the future. They have a significant position in China and, as we all know, there is a large percentage of the world’s population in China. Google’s biggest threat comes from the political landscape of the world in regards to China. Facebook is also a viable contender. However, they have done a phenomenal job staying focused on what they do best…social media. Zuck will likely keep the company squarely focused on that goal. If the time ever comes that he takes a less active role, however, there may be a pivotal shift in Facebook’s direction, one that would likely look towards search.
Looking 10 years ahead though…that's too far. 10 years ago we would not have been able to predict that we would post most of our daily activities on the Internet from a device that is a mini-laptop, phone, camera and camcorder that fits in our pocket.
From a user’s perspective, I’d say that seeing a decline in the number of large competing engines speaks volumes to the quality of the results one can expect to find in the engines that have stood the test of time. The simple fact that people are easily able to find what they’re looking for within the top 3 (Google, Bing, Yahoo!) means that they’re doing a great job giving the users what they want, which most would consider a win.
The downside to this, however, is what’s happening behind the scenes in terms of collecting users’ information for the purpose of boosting advertising revenues. There is a growing number of people, I believe, that are growing more hesitant to submit their queries to Google for fear of how it may be used to fill Google’s pockets over time.
From where I sit, Google IS the internet. They've extended their brand beyond the basic search engine and have more or less established a monopoly on all areas of the web (except for one, which I'll cover in a second). It's difficult for me to foresee something else coming in and presenting competition worthy of disrupting Google's empire on the web. If something does come along that could pose a threat, I think Google has enough money to swallow them whole before anyone even knows what happened.
Where Google has yet to succeed is social. Facebook more or less owns that corner of the web right now, but that may not be true several years down the road. With Google+ signups increasing and Facebook signups remaining fairly steady, it’s hard to predict who is going to own this area. Facebook’s introduction of Graph Search is indicative of the lines blurring between search and social, but I really don’t think Facebook is going to be the company to pull it off successfully.
Ever since Facebook’s IPO, their value has been tanking. They are going to continue to look for more ways to monetize, and that means more advertising at the expense of their users. User confidence will eventually decline, and people will look for the next big social network. This could be where Google takes all, either that or we’ll see a new social network rise up out of Facebook’s ashes.
So to answer the question, I think competition is healthy for Google. While I don’t see them ever being toppled by a competitor in the future, having sites grow to the point that Google will recognize some of its own slights will only help to improve the quality of services provided by the web giant.
Thinking ten years down the road, who knows what form the web will take. Regardless though, you can be sure Google will lead the charge and have their hands in every single part of it.
I mentioned user confidence in terms of Google collecting information for advertising purposes. Duck Duck Go is gaining a lot of attention for being a search engine that doesn’t track or collect user information. Their algorithm is a bit wonky and I feel the user experience is lacking compared to Google, but over time they could build a great product users can trust. I just question how much they’d be able to perfect something without eventually monetizing at the users’ expense.
The number of search engine players isn’t the measure of good or bad in my opinion. The quality of the search results is key. Google now dominates and has what many perceive as monopolistic control. This has changed the playing field in a dramatic and very positive way, overall. Crowded SERPs and the diminishing real estate for those good old 10 blue links isn’t so good for the user.
Competition is healthy!! Yes! Bing is doing a tremendous job in pushing that search envelope and keeping Google on their toes. Bing offers a fantastic user experience and integration with social that has surely contributed to Google's push of social and behavioral integration. It would be hard to imagine that 8 additional search engines would rise to compete like in the 1990s. Yahoo!, under the guidance of Marissa Mayer, is now worthy of stock purchase, and it will be interesting to see if and how referral traffic from Yahoo! increases over the next couple years.
2023 is a long way to predict. I imagine we will see search integrated into every aspect of our lives, and our identities will be tracked across logins on multiple devices – desktop, mobile, tablet, TV, stereo, game box, etc. Heck, I’d love to have a search box in my house that works like the Find My Phone app to find lost items. Possible? Surely it is.
Keep an eye on and use DuckDuckGo, which is based on structured data and semantics. We should, however, be thinking outside of normal search for what's next and focus on the effects of Siri and the data "she" collects and provides to us. Search will become our virtual personal assistant.
Google is going to continue to refine their search for a better user experience. I don’t think they’re going to worry about a competitor coming & taking market share so long as the focus of Google remains innovative.
Competition is healthy in the search industry, just like any other. Google might see something great that a smaller search engine is doing & either buy them out or look into adding their own feature based on how well it impacts user experience & how well it impacts bringing data to users as fast as possible.
The old days: Back then we weren’t thinking in terms of users because everything was new and we had no good data on what people wanted. This was a time of great fun for SEOs and site owners. As an SEO during that time, my job was to get every site into a search engine and directory. This was a full time job, just the submitting, tracking results, tweaking pages, making updates, and reporting. Then linking came, paid results and one by one, search engines and directories died out.
Today, Google and Bing are all about the user experience. I saw this coming in the year 2000 and experienced the results when my training in usability and human factors was used along with organic SEO. What this means is that today, how people use web sites is as important as what they search for. The search engines that survived the dotcom crash days are the ones who figured out that humans are not robots.
Competition: Google is strong not only because of their search engine but also because of all the extra toys that come with it, including Chrome and Google Plus. Bing and Yahoo! have to learn what people want just like Google has done.
I’m not aware of search engine startups but do see the continued competition in the directories space and directories that include site search. This is because how we look for information online varies by person, interest, age, need, and more. We will have more choices on niche directories for industries.
Since content and expertise are important to us, it will be the sites with content written by proven experts and delivered in a readable user interface that will find success. Fee based, password protected forums and specialty sites will become popular because users will know there will be no spam or unrelated content in them. Again, success is driven by quality and knowing what people want and need will determine who thrives.
Future: Where we get our information will adapt to how we use the Internet. People often don’t realize that YouTube is a search engine. You can ask a question there and get lots of answers in a visual format. Ebay is a search engine for products. Etsy is an example of an online global mall where you can search for items and find the small shops and crafts persons who make the items and sell them.
Pinterest was a way for women to search for and share items they cared about. This small grassroots start eventually turned into a monster site that enables people to search for items of interest and the results offer what other people loved and recommend.
The acceptance of mobile devices and smaller monitors will create a new set of user needs. Responsive design is just the start. Voice is still quirky and not always accurate. And finally, accessibility. Someday I’m hopeful that all web sites can be used by everyone, regardless of any physical or mental limitations. As long as we get answers online, there will be a need for search. Maybe someday there will be a search engine that everybody can use.
So, what does the future hold for search? I'm quite taken by what Michael Volpe suggested – real-time search. It's pretty close to how I imagine an ideal search engine of the future: something that is literally at your fingertips, something that is not limited by the input method. It could be search that is activated by voice, that learns everything about me and responds to my current needs. With a model like this, it all comes down to the problem that we have today – privacy.
When I type “Dear Sir” in my Word document and the jolly Paperclip character says: “Hey, are you writing a letter?”, I find it amusing. When I mention “attachment” in an email message and Thunderbird asks me: “Did you forget to add an attachment?”, I find it helpful. But is this intrusion bordering on annoying? What will happen when the artificial intelligence hidden in my machine starts stalking me and reacting to whatever I do? I guess it takes a new frame of mind (that we might well have in 10 years time) and a 100% assurance that what happens on my machine stays on my machine.
Additionally, there is a privacy issue when several people use the same machine. I don't really want the search bot saying: "Do you want to look at those pictures again?" when it's actually my GF browsing the web.
What if we could hack into the human brain and learn to use its "technology" in computing (including search engines)? Jeff Hawkins – the tech genius who designed the Palm Pilot – is now working in the field of neuroscience and, perhaps unbeknownst to himself, has outlined a mechanism that could be applied to future search engines. In his "memory-prediction framework" he argues that "the key to the brain and intelligence is the ability to make predictions about the world by seeing patterns." We could certainly use a prediction-based search engine that understands patterns.
Another important figure whose works might become useful for the next generation of search engine builders is Oliver Selfridge. He’s often considered the godfather of the pattern recognition theory. He dreamed about a machine that could record events, recognise patterns in them and trigger subsequent events according to the recognised patterns.
Regarding the search power distribution, my ideal scenario would be a landscape similar to the mid 1990s with up to 10 strong players competing with each other but competing with a different goal in mind: usefulness. If your business model is making money by answering questions, you have to make sure your answers provide more value than your competitors’ answers. I don’t believe that one omnipotent search engine (be it Google, Bing, Yahoo, or EngineX) is good either for users or for online businesses.
Monopoly is never controlled by the customer; a monopoly constrains. The Internet will eventually come to realize its highest goal – to provide FREE and unlimited access to information. So, that’s my vision. What’s yours? Please add your comments below:
Arvid Linde is an independent SEO consultant, award-winning journalist, MSc in engineering, published author and a technology addict. More info on the about page.