search engine | Ian Andrew Bell https://ianbell.com

Facebook built my Feedreader https://ianbell.com/2009/07/29/facebook-built-my-feedreader/ Wed, 29 Jul 2009 22:05:23 +0000 https://ianbell.com/?p=4910

I have always asserted that Facebook’s most valuable asset is its event stream (which you can see by clicking ‘Home’ from within Facebook). It shows you what’s happening in your network. The other day I complained about having to block more than 100 apps within Facebook to keep their spam out of my daily flow.  A pain, for sure, but it has had a substantial effect in making Facebook more usable to me day-to-day.

I haven’t ever really thought of Facebook as a productivity tool (more the opposite) until lately.  I have tried many feedreaders over the years, including Google Reader, Firefox, Yahoo, NewsGator, and more.  More recently I have abandoned most of those in favour of a desktop client like Nambu or Tweetie, because I think my friends (or at least those I choose to follow on Twitter) are the best filter imaginable — and far better than any search engine algorithm could ever produce.  As a result, Twitter has quickly usurped my consumption of RSS through most aggregation tools — simply because no search engine I’ve found is good enough at understanding me and filtering the crap.

However, as more and more of my friends are adding the Twitter tool to their Facebook accounts, and thereby syndicating their tweets to their Facebook status updates, I am turning to the desktop clients less and less and spending more time watching the event stream in Facebook.

I still post 100% of the time from within a Twitter client on my Macbook Pro or my iPhone (fanboi); however, the time I spend reading what comes to me via those clients is shrinking pretty rapidly. In fact, my biggest gripe about Nambu for the iPhone is that it insists on loading up my event stream before I can post a new tweet (try that on EDGE).

Facebook’s event stream has one key advantage that Twitter doesn’t.  If I find someone on Twitter is annoying me with their posts, the relationship is fairly binary: I either follow them or I don’t.  However, with Facebook this is quite nuanced.  Specifically, I can Hide updates from people I’m friends with who post garbage.  This is an improvement over Twitter, but is again too binary (and is punitive to Facebook’s parity-centric follow model).

What I think both Facebook and Twitter users would benefit from is a pair of nuanced controls for promoting or demoting content in the event stream.  The computational costs here are not trivial, but this is the kind of stuff we were working on at Something Simpler.  I want a thumbs up / thumbs down on both users and content.  In the user context, a thumbs up promotes the user in the priority tree and a thumbs down demotes them.  In the context of a content item (a tweet or otherwise), the vote trains a Bayesian filter that pulls keywords out of the content and promotes or demotes similar future items depending on which I selected.  The engine must then score each item based on who it came from and what the extracted keywords are.  It can also look at my OWN event stream to train the Bayesian filter, surmising that the things I post about are likely to be similar to the things I want to read.  Both approaches embrace the peer relationship while increasing the quality of my event stream.
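To make that concrete, here is a minimal sketch of how such a scorer might hang together. Neither Facebook nor Twitter offers anything like this; the class, the tokenizer, the 2.0 blending weight, and the sample users are all my own hypothetical illustration of the mechanic: a thumbs up/down adjusts a per-author weight and simultaneously trains a tiny naive Bayes model over the item's words, and future items are ranked by a blend of the two.

```python
from collections import defaultdict
import math
import re

class FeedScorer:
    """Toy ranker: a per-author priority weight plus a tiny naive Bayes text model."""

    def __init__(self):
        self.author_weight = defaultdict(float)                   # net thumbs per user
        self.word_counts = {"up": defaultdict(int), "down": defaultdict(int)}
        self.total_words = {"up": 0, "down": 0}

    def _tokens(self, text):
        return re.findall(r"[a-z']+", text.lower())

    def vote(self, author, text, liked):
        """One thumbs up/down trains both the author weight and the content model.
        Feeding in your own posts with liked=True bootstraps the keyword model."""
        label = "up" if liked else "down"
        self.author_weight[author] += 1.0 if liked else -1.0
        for word in self._tokens(text):
            self.word_counts[label][word] += 1
            self.total_words[label] += 1

    def _content_score(self, text):
        # Log-odds that this item resembles previously promoted items (Laplace smoothing).
        vocab = len(set(self.word_counts["up"]) | set(self.word_counts["down"])) or 1
        score = 0.0
        for word in self._tokens(text):
            p_up = (self.word_counts["up"].get(word, 0) + 1) / (self.total_words["up"] + vocab)
            p_down = (self.word_counts["down"].get(word, 0) + 1) / (self.total_words["down"] + vocab)
            score += math.log(p_up / p_down)
        return score

    def score(self, author, text):
        # Blend who posted it with what it is about; the 2.0 weight is arbitrary.
        return 2.0 * self.author_weight[author] + self._content_score(text)

scorer = FeedScorer()
scorer.vote("alice", "sharp piece on bayesian filtering for event streams", liked=True)
scorer.vote("bob", "join my farm game and send me some sheep", liked=False)
items = [("bob", "more sheep requests from the farm game"),
         ("alice", "another good read on filtering the event stream")]
for author, text in sorted(items, key=lambda it: scorer.score(*it), reverse=True):
    print(round(scorer.score(author, text), 2), author, text)
```

Crude as it is, the sketch shows why the two signals reinforce each other: the author weight catches serial spammers even when their wording changes, while the keyword model catches spammy topics from otherwise good friends.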

I think this is a rejection of the hypothesis I held when we were originally offered the chance to find a place for the PubSub assets: that machines could judge this on their own, without our help.  The reality is that approach is expensive, inefficient, takes too long to train, and doesn’t return enough near-term benefit to the user.  The methods described above, however, are very relevant, embrace the social nature of online networks, and push the heavy lifting of filtering onto my friend networks, which Twitter et al. already accomplish.

The first tool to implement these features gets my vote to be my primary feedreader.  Nothing’s more relevant to me than my friends.  Neither Facebook nor Twitter is doing enough to make their knowledge of who my friends are useful day-to-day.  That’s all they’ve got versus other tools today.

The Open Debate on Chinese Internet Proliferation https://ianbell.com/2009/07/22/the-open-debate-on-chinese-internet-proliferation/ Wed, 22 Jul 2009 09:09:15 +0000 https://ianbell.com/?p=4887

Statistics lauding the growth of the Internet in China have become so commonplace as to inspire yawns, despite breathless press reports of hundreds of millions of Chinese going online and signing up for the ‘net.  With the Chinese Government declaring that their internet population surpassed the US last year, it would seem that the real opportunity for expansion and growth online is not in the West, but somewhere behind the Great Firewall of China. Cue the ads for Chinese Web Hosting, Chinese Industry Liaisons, and the omnipresent legions of Chinese “business agents”.

Many Western technology companies have heeded that call, but have found themselves cast onto the rocks of Chinese shores — including companies like Microsoft, Google, Cisco, eBay, and Yahoo!  The massive markets just never seem to have materialized in the Orient for these giants, or when success has loomed on the horizon the murky Chinese bureaucracy has stepped in to snatch defeat from the jaws of victory.  Partnerships have vapourized overnight, and (particularly in the case of Cisco) core Intellectual Property has been outright stolen, reverse-engineered, or redistributed.  Perilous waters, indeed.

So it was with this skepticism that my friend Gersham viewed the latest piece of propaganda emerging from our friends in China: that the country has now reached the new height of 338 million Chinese Internet users — a 13 percent increase since the end of 2008, and just about exactly one quarter of the country’s population.  All of this, of course, seems to have been tabulated and distributed by the slightly inaccurately-acronymed Chinese Internet Network Information Centre (CNNIC) which, by its own admission, “takes orders from the Ministry of Information Industry (MII) to conduct daily business.”  In fact, Google “Chinese Internet Traffic” and you’d be hard-pressed to find data that did NOT originate from the CNNIC.  Hmm.  Call me a cynic.

It is likely difficult for most (any) of us to corroborate or even conceptualize these high numbers, but they seem suspicious nonetheless — particularly from a country whose median income is around $3400 and whose Per-Capita GDP is ranked 104th, right behind Armenia.  In trying to substantiate this, one can point to Alexa’s site rankings, which currently reveal that 3 Chinese-language web sites rank in the Top 20:  search engine Baidu (#9), IM chat and portal QQ (#14), and portal Sina.com.cn (#18).  Sounds good, right?  But look closely at the rankings.  Baidu, an undisputed leader in Search for China, reaches 5.73% of the internet populace, whereas Google.DE (#13) reaches roughly 3% of global internet users while servicing German, Swiss and Austrian users exclusively.  Combine the populations of those three countries and they don’t even add up to 100 million people.

Gersham pointed me toward the Firefox download stats, where as of this writing Germans have made 4,948,666 downloads of various Firefox versions compared to only 672,972 for China.  Again, Germany has a population of 82 million vs. 1.3 billion in China.  As a control, Americans have downloaded Firefox 7,959,727 times as of this writing.  Do the Chinese really just prefer Internet Explorer?

In January 2009, Comscore measured the Chinese internet audience at closer to 180 million users, still an impressive 18% of the Internet population.   This site quotes murky Nielsen Online data pegging Chinese Internet users at roughly 300 million.  Beyond these hearsay reports, empirical measurements are difficult to come by.

So, let’s throw up our hands and try to reverse-engineer the data using published stats.  According to June 2009 data from Comscore, Google has captured 65% or so of US search traffic.  This made it the #1 web site in the world, with 157 million US visitors in June, according to Comscore.  In the Chinese market, Baidu has captured 73% of Chinese search, with Google in the number two spot.  Yet Baidu.com barely moves the needle by comparison, according to compete.com, alexa.com, and others, hitting roughly 600,000 unique visitors per month globally.   High-side estimates of the Internet’s penetration in the US peg it at 72.5% of the populace, or about 220 million.  This makes the data on Google’s penetration vs. the addressable market reasonably accurate (71% if you do the math).  Following this logic, if Baidu in fact has 73% of China’s purported 338 million users, it should be ranking as the #1 web site by far, with more than 246 million unique visitors per month.  In fact, if any of this data were true, then Chinese sites should occupy at least 4 of the Top Ten global web sites.
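For the curious, the back-of-the-envelope math is easy to redo. This is just a sketch using the figures quoted in this post (all of them disputable estimates), not an authoritative model:

```python
# Back-of-the-envelope check using only the figures quoted above.
us_online = 220e6              # ~72.5% of the US population, high-side estimate
google_us_visitors = 157e6     # Comscore, June 2009
print("Google reaches about %.0f%% of the US online population"
      % (100 * google_us_visitors / us_online))            # ~71%

china_claimed_online = 338e6   # CNNIC figure
baidu_share = 0.73             # Baidu's widely reported share of Chinese search
implied_baidu_uniques = china_claimed_online * baidu_share
print("Implied Baidu uniques: ~%.0f million per month"
      % (implied_baidu_uniques / 1e6))                     # ~247 million
# ...versus the roughly 600,000 monthly uniques that compete.com and
# alexa.com report for baidu.com at the time of writing.
```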

Whatever your opinion of Compete’s and Alexa’s relative methodologies, it’s impossible to reconcile anything even close to the numbers coming from the Chinese Government.  If that isn’t good enough for you, let’s turn to profits.  While serving what was allegedly the world’s largest internet audience, Baidu appears to be tracking to earn about $500 Million in revenue this year.  Google’s revenue appears to be tracking to about $23 Billion for 2009 with its pithy 157 Million unique visitors.  Any way you slice it, if China’s internet userbase is as large as Beijing says it is, and if Baidu’s market share of that audience is what it’s widely purported to be, then both the number of uniques reported by external traffic sites and the revenues reported by the public company that owns Baidu should be exponentially greater.

These stats seem to indicate either that the Chinese do not use search very often, or that there just aren’t that many of them heading out into the wilds of the Internet.  Either way, statistics emanating exclusively from bureaucratic sources within Beijing, particularly those which seem to fly in the face of all other external metrics, are not to be believed.  The thesis of this post is not that China is NOT a massive opportunity for online properties and other technology purveyors; it is simply an attempt to point out that, as in so many dealings with the Peoples’ Republic of China, things are not what they may seem.  Pay no attention to the man behind the curtain.

Google is a Kludge – Or Why Search is Going to Change https://ianbell.com/2008/06/20/google-is-a-kludge-or-why-search-is-going-to-change/ Fri, 20 Jun 2008 21:40:01 +0000 https://ianbell.com/2008/06/20/google-is-a-kludge-or-why-search-is-going-to-change/

Despite the fact that I often find myself on the opposing end of the table on most of what Microsoft does, I was really hoping to be able to agree with Ballmer on his assertions regarding Microsoft’s rejuvenated focus on search, as quoted in today’s Financial Times article. I was hoping that, on the heels of their disastrously failed hostile takeover effort of Yahoo!, MSFT had a plan for Search that extended beyond paying people to use its engine, a scheme which has led to some amusing arbitrage opportunities reminiscent of late bubble-era scams.

Of course, Microsoft can afford to write these cheques practically ad infinitum, but if your tools are so lacking in perceived utility that you need to bribe people to use them (even if the graft is partially subsidized by affiliate fees), perhaps this is not really the best you could hope for from your marketing team.

[Chart: search engine market share, May 2008]

You can’t take on Google by trying to buy, or even out-feature, your way into the blank-text-box Search Engine arena. Except for some regional players, like Russia’s Yandex, they’ve won and will not soon be replaced.

What Ballmer, and lots of other people, are missing is that the Search marketplace as we know it is poised for a change. Much of this change emerges from the fact that Google fundamentally owns the global Search Market, but much of the opportunity extant in this space comes from the fact that the technology behind search, and how people will make use of search engines in the future, will be a whole lot different than what you see when you type in www.google.com today.

[Chart: global search engine rankings, December 2007]

…. but, there is light at the end of the tunnel for folks who are on the outside looking in at Google’s substantial (and impossible to dislodge) market share:

For most people, web search is a kludge.

Think about how you use Google today. Think about why you type things into that blank text space beckoning to you on your Firefox browser, or why you surf over to Google.com and enter a few snippets of text into that empty area amidst the sea of clutter-free Google whiteness ten, twenty, or maybe many more times per day.

In some cases, you overheard something being discussed in a coffee shop. Or you saw a billboard ad. Something offline motivated you to head to the blank text box and ask it to do your bidding. That is Google’s fundamental market opportunity and has remained largely unchanged since the first search engines began emerging in 1995.

This is, however, just a fraction of the reasons why many of us head to search engines. Often the reasons are as much motivated by inadequate information at one site as by anything else. An example: you’re reading an article about a car or a submarine or a mountain from a wire service like Reuters, whose stories rarely include photos. You’d like to see what that thing looks like, so off to Google you go. Or you’re looking at a new LCD on eBay, but the seller hasn’t listed the number and type of inputs that come with it; so off to Google you go to try and find the specifications.

In short, most often we go to Google to search for things because our browsers aren’t good at building pathways between like objects on the web. These types of Searches are what I call context-driven. You shouldn’t need to do this. You shouldn’t need to interrupt your surfing to drop off to a third-party site in order to add flavour to the web objects which have already garnered your interest.

What if you could press a button and instantly be delivered relevant information that is contextual to that which you are/were looking at? What if sites displaying articles from wire services (notable for their sparseness) were able to draw in information – in realtime – which added relevant photos, videos, or related stories?
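As a rough illustration of what "context-driven" means in practice, here is a small sketch that pulls the most distinctive terms out of the page you are already reading and turns them into a query you could hand to any image or news search backend. The stop list, term weighting, and sample text are my own simplifications for illustration; a real contextual engine would need entity extraction and the semantic machinery discussed below.

```python
from collections import Counter
import re

# A tiny stop list; a real system would use a proper one plus entity extraction.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "on", "for",
             "is", "was", "that", "this", "it", "with", "as", "by", "at"}

def context_query(article_text, max_terms=5):
    """Pick the most frequent non-stopword terms in the article as a search query."""
    words = re.findall(r"[a-z][a-z\-]+", article_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return " ".join(term for term, _ in counts.most_common(max_terms))

wire_story = """The navy confirmed the submarine completed sea trials off the
coast on Tuesday. The submarine, a diesel-electric design, will join the
fleet next year, officials said, after further submarine trials."""

query = context_query(wire_story)
print(query)  # something like "submarine trials navy ..."
# The query can then be sent to an image or news search API to pull in
# related photos and stories alongside the sparse wire copy.
```

The point of the sketch is the workflow, not the ranking: the reader never leaves the page, and the surrounding content supplies the query instead of an empty text box.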

Some of this is already happening, albeit rather jerkily. One of the leaders, which started doing this some time ago, was Sphere, recently acquired by AOL. It took them some time to draw the same conclusions as I have, and they had a difficult time monetizing these services. But on a great enough scale, the same technologies which make relevant content possible also make relevant advertising possible. And while click-thrus will be fewer in quantity, they can be greater in quality and therefore infinitely more valuable, thanks to much more accurate targeting.

Being accurate in driving these sorts of searches is hard. Whereas Google relies on its users to sift through its top 30 or so recommendations to find the most relevant information, contextual search engines need to be able to do that with high accuracy on the first few matches with little to no meatware — sorry, Mahalo. Many of the current buzzwordy trends such as the Semantic Web initiatives, Social Search, the shift from RSS to Atom, and API-accessible semantic processing are key enablers to make this easier, but there’s still a considerable amount of R&D necessary to beat Google’s current level of accuracy in this regard.

As a result, you need a long lead to get there, and few of the companies dabbling in the Vertical Search space have raised enough capital or have investors who have committed to developing these opportunities. But in the long run, this will augment Web Search and replace much of the traffic that is today driven by Google’s simple, primitive, empty text box.

What’s clear is that Microsoft’s desperate attempts to lure users to its essentially equivalent service to Google’s can only cost its shareholders. A new paradigm is necessary and, fortunately, the opportunity is ripe for the picking, right in front of us all.

This is a rare opportunity where the solution lies in good, solid R&D and product realization — not in leveraging semi-monopolistic product integration or in brute force advertising spending. Is Microsoft bold enough to understand, and embrace, the fact that Search is shifting? Do they have the product and engineering people to make this happen?

Search Goes Open-Source https://ianbell.com/2007/05/01/search-goes-open-source/ Tue, 01 May 2007 17:34:27 +0000 https://ianbell.com/2007/05/01/search-goes-open-source/

If you happen to be, like me, in the throes of hoisting a company that incorporates some flavor of Search technology as a key capability, you know that its value in managing and sorting the torrent of internet information pouring out of blogs and everything else these days is essential to the success of the business. This is as true for Google, Technorati, Yahoo!, et al as it is for any content-oriented business. With the ever-increasing flow of noise out there it’s harder to find the signal: when I search for Vanilla Ice Cream, why do I stumble over Vanilla Ninja? The real problem, though, is this: although having an effective matching engine is critical to the success of the business, search is not in and of itself all that interesting. Google was probably the last company that made search itself interesting as an end-user value proposition — and as we all know, what really made Google interesting in the long run was what they did with users (and advertisers) once they had ’em hooked.

These days, solving the search problem is just one step on a long path to building valuable services that people enjoy and make use of every day. Deep nerds like tackling these issues because they have all the hallmarks of geek chic. They are difficult algorithmically, require massive planning from a scaling perspective, and require constant tweaking. Google was successful at attracting people to its search engine for two reasons: it had a cleaner interface (they hadn’t decided to become a Portal) and it had more accurate results (the other engines had become gamed). I’m sure Google has tons of patents around their search capability however I am too lazy to search for them because sifting through the results separating wheat from chaff would take way too much time.

And that, dear reader, is precisely the point. Google, too, has been gamed — as will be every search engine that comes into common use. So what am I on about? Well, Jimmy Wales wants to open-source the search engine… and for the record I think it’s a great idea, and one that threatens Google substantially.

My logic is this: If the value of a search engine is no longer the search engine itself, but instead the application to which it is applied, then why not accept its value as a generic must-have and open-source the thing? We can all benefit from the assumption that the search engine itself will always be gamed by spammers and sploggers and search engine marketers. Once we do that, creating a community that is invested in the efficacy of the search engine (because they’re making money from it) also creates a system by which that community is incentivized to keep the thing working properly as it’s gamed by persistent SEO gremlins. This is far more effectively done by a collective of companies than it is by a bunch of companies tweaking their own engines independently, pursuing near-term, interim, proximate advantages.

Wikia Search is nudging closer to existence, but I think it’s applying the brute-force labour at the wrong end. Jimmy Wales is becoming ever more assertive and aggressive in his crusade to fix search with an army of lemmings following the Wikipedia recipe: he’ll use the community at large to determine the merit of matches found by his search engine, extending the Wikipedia model to searching. Users will “vote” matches to the top of the rankings.
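To picture that mechanic, here is a toy sketch of vote-boosted re-ranking. It is not Wikia's actual algorithm; the example URLs, relevance scores, and log damping are all my own illustration. It also shows, implicitly, why a coordinated voting campaign can drag a low-relevance page to the top of the list:

```python
import math

def rerank(results, votes):
    """Re-order search results by base relevance plus a damped vote boost.

    results: list of (url, base_relevance) pairs from the underlying index
    votes:   dict mapping url -> net user votes (ups minus downs)
    """
    def score(item):
        url, relevance = item
        net = votes.get(url, 0)
        boost = math.copysign(math.log1p(abs(net)), net)   # damp runaway vote counts
        return relevance + boost
    return sorted(results, key=score, reverse=True)

results = [("encyclopedia.example/ice-cream", 3.2),
           ("bandsite.example/vanilla-ninja", 2.9),
           ("recipes.example/vanilla", 2.7)]
votes = {"bandsite.example/vanilla-ninja": 400}   # a coordinated voting campaign
print(rerank(results, votes))
# The gamed page jumps to the top even though its base relevance was lower.
```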

An interesting notion: he has the right idea but might be missing the mark on execution. As anyone who’s watched Sanjaya on American Idol can attest, user-voting is not always the most expeditious method of ensuring quality. Wikipedia uses a broken-source (have I just coined this term?) publishing model: it achieves one thing very well (aggregating information and content from diverse sources) at the expense of another (ensuring that information is trustworthy, balanced or factually correct is problematic). Applying this model to Search therefore solves the easy problem (search engines already aggregate and index things quite well) with the wrong method (envision Wikia Search gaming teams in Bangladesh sweatshops “voting up” rankings for their customers on the engine).

So, right idea — wrong solution. Let’s create an engine that everyone can (and does) use, that everyone can tweak and repair, and that is policed by a foundation which has as its only goal the efficacy of the product. The Deep Thinker Nerds who like to fiddle with these kinds of problems will be attracted naturally to the project, and their incomes could easily be supported by the companies benefiting from the expansion of the technology. Jimmy’s in a position to lead this, to some degree, but he doesn’t evidently understand that the strength of Wikipedia will be the Achilles heel of this project. He has claimed the high ground, but I fear that he will inevitably fail.

-Ian.

Verisign’s Domain Redirects https://ianbell.com/2003/09/23/verisigns-domain-redirects/ Tue, 23 Sep 2003 17:15:15 +0000 https://ianbell.com/2003/09/23/verisigns-domain-redirects/ Begin forwarded message:

> From: Jeffrey Kay
> Date: Tue Sep 23, 2003 7:20:55 AM US/Pacific
> To: FoRK
> Subject: Verisign’s Domain Redirects
>
> Seems like DNS is in trouble yet again. This is a pretty interesting issue. One could argue that managing a root gTLD server is a public trust and Verisign is violating that trust.
>
> — jeff
>
> VeriSign stands firm on domain redirect
> Last modified: September 22, 2003, 6:07 PM PDT
> By Declan McCullagh
> Staff Writer, CNET News.com
>
> VeriSign said Monday that it would not abandon its decision to point unassigned domain names at its Web site, but representatives did say the company would form a technical committee later this week to look into the problems caused by the change.
>
> During the last week, criticism has steadily grown over VeriSign’s “SiteFinder” service, which has caused problems for network administrators and confused spam-blocking utilities. A number of Internet standards bodies and administrative groups have asked the Mountain View, Calif.-based company–which enjoys a government-granted monopoly over the .com and .net registry–to stop, and a second lawsuit seeking an injunction against the practice was filed Monday.
>
> On Monday, VeriSign spokesman Tom Galvin said SiteFinder would remain in place because “we think the technical review committee is the appropriate mechanism before making any long-term decisions about the service.” The committee members who will be chosen by VeriSign and will report to the company will be announced later this week, Galvin said.
>
> “All indications are that users, important members of the Internet community we all serve, are benefiting from the improved Web navigation offered by Site Finder,” VeriSign Vice President Russell Lewis said in a Sunday letter to the Internet Corporation for Assigned Names and Numbers (ICANN). “These results are consistent with the findings from the extensive research we performed.”
>
> ICANN is the nonprofit organization that oversees Internet domain names. On Friday, the group asked VeriSign to pull the plug on its “wildcard” redirection service.
>
> Since then, ICANN’s Security and Stability Advisory Committee has published a more-detailed critique of the technical problems caused by VeriSign’s move. The committee–which includes a VeriSign representative–said it would hold a public meeting in the Washington, D.C., area on Oct. 7 and has asked for feedback to be sent to secsac-comments [at] icann [dot] org.
>
> “VeriSign’s change appears to have considerably weakened the stability of the Internet, introduced ambiguous and inaccurate responses in the (Domain Name System), and has caused an escalating chain reaction of measures and countermeasures that contribute to further instability,” the committee’s critique said. “VeriSign’s change has substantially interfered with some number of existing services which depend on the accurate, stable, and reliable operation of the domain name system.”
>
> VeriSign’s new policy is intended to generate more advertising revenue from additional visitors to its network of Web sites. But the change has had the side effect of rewiring a portion of the Internet that software designers always had expected to behave a certain way. That can snarl antispam mechanisms that check to see if the sender’s domain exists, complicate the analysis of network problems and possibly even pollute search engine results. Because VeriSign will become a central destination for mistyped e-mail and Web traffic, its move also raises serious privacy questions.
>
> On Monday, domain name registrar Go Daddy Software filed a lawsuit in federal district court in Arizona seeking to halt the SiteFinder redirection. “VeriSign has hijacked this entire process,” Bob Parsons, president of Go Daddy, said in a statement. “When the user is sent to VeriSign’s advertising page, VeriSign gets paid by the advertiser when the user clicks a link to get off the page, to the tune of $150 million annually, as estimated by VeriSign.”
>
> It appears to be the second lawsuit filed in response to VeriSign’s move. Popular Enterprises, the parent company of search provider Netster.com, sued VeriSign over the SiteFinder redirection last week, alleging antitrust violations, unfair competition and violations of the Deceptive and Unfair Trade Practices Act.
>
> Also in response to VeriSign’s move, the well-respected Internet Architecture Board published on Saturday a document titled “Architectural Concerns on the use of DNS Wildcards,” referring to the domain name system. It says the danger of “wildcard records is that they interact poorly with any use of the DNS that depends on ‘no such name’ responses.”
>
> jeffrey kay
> weblog pgp key aim
> share files with me — get shinkuro —
>
> “first get your facts, then you can distort them at your leisure” — mark twain
> “if the person in the next lane at the stoplight rolls up the window and locks the door, support their view of life by snarling at them” — a biker’s guide to life
> “if A equals success, then the formula is A equals X plus Y plus Z. X is work. Y is play. Z is keep your mouth shut.” — albert einstein

Verisign’s At It Again.. https://ianbell.com/2003/09/17/verisigns-at-it-again/ Wed, 17 Sep 2003 17:19:07 +0000 https://ianbell.com/2003/09/17/verisigns-at-it-again/ http://www.washingtonpost.com/wp-dyn/articles/A19860-2003Sep16.html

washingtonpost.com Software Aimed at Blocking VeriSign’s Search Program

By Anick Jesdanun AP Internet Writer Tuesday, September 16, 2003; 4:00 PM

NEW YORK — The developer of software that essentially guides Web surfers sought Tuesday to neutralize a controversial service designed to help users who mistype Internet addresses.

The Internet Software Consortium, the nonprofit organization that develops BIND software for Internet domain name directories, is writing an “urgent patch” for Internet service providers and others who want to block customers from a new Site Finder service from VeriSign Inc.

VeriSign, which keeps the master lists of names ending in “.com” and “.net,” launched Site Finder on Monday to steer users to likely alternatives when they type addresses for which no Web site exists.

Though VeriSign gets unspecified revenues from search engine partners whose technology powers Site Finder, company officials described the service as primarily a navigation tool to help lost Internet users.

Critics, however, say the service eliminates user choice, gives a private company too much control over online commerce and could violate longstanding Internet standards.

VeriSign’s service, which affects only “.com” and “.net” names, also overrode similar services offered by several Internet service providers, including America Online, and through Microsoft Corp.’s Internet Explorer browser.

The BIND patch allows AOL and others to restore control by identifying and then ignoring data from Site Finder, said Paul Vixie, president of the Internet Software Consortium.

When the patched software receives such data, it will instead pass along an “address not found” message.

“We’re making this patch available because our customers are screaming for it,” Vixie said.

Though running the software update is optional, Vixie expects many customers will. The consortium was testing the patch Tuesday and planned to release it by Wednesday.

VeriSign officials did not immediately return calls Tuesday. On Monday, its vice president for naming services, Ben Turner, said service providers were free to configure their systems so customers would bypass Site Finder.

BIND, a free product, is used by most domain name servers at service providers, corporations and other networks. Typically, those servers keep temporary copies of the master directories obtained from VeriSign.

VeriSign estimates that people mistype “.com” and “.net” names some 20 million times daily and cites internal studies showing users prefer navigational help over a generic error message.

Earlier this year, a suburban Washington company called Paxfire Inc. tested a similar service for “.biz” and “.us” names, but the U.S. government and a private oversight board asked Paxfire to suspend it after a few weeks pending a review, Paxfire chairman Mark Lewyn said.

A similar feature exists with “.museum” names. People who type in nonexistent addresses are offered an index of museum sites.

Yahoo Buys Overture… https://ianbell.com/2003/07/14/yahoo-buys-overture/ Tue, 15 Jul 2003 03:36:31 +0000 https://ianbell.com/2003/07/14/yahoo-buys-overture/ http://story.news.yahoo.com/news?tmpl=story&cidR8&ncidR8&e=1&u=/ap/20030714/ap_on_hi_te/yahoo_overture


By MICHAEL LIEDTKE, AP Business Writer

SAN FRANCISCO – Yahoo! Inc. (NasdaqNM: YHOO) on Monday snapped up Overture Services Inc. (NasdaqNM: OVER), the pioneer of pay-for-placement online search results, in a $1.6 billion deal that fortifies the Internet powerhouse for a looming showdown with Google and Microsoft.

The cash-and-stock acquisition valued Overture at $24.82 per share — a 15 percent premium over the stock’s closing price last week. The price consists of $312 million in cash and 0.6108 Yahoo shares for each of Overture’s 65.7 million outstanding shares.

The deal’s value will fluctuate with Yahoo’s stock until its expected closing date in the fourth quarter.

Overture’s shares rose $2.54 to close at $24.05 Monday on the Nasdaq Stock Market, where Yahoo’s shares gained 1 cent to close at $32.20.

The acquisition continues a recent flurry of dealmaking in the lucrative business of online searching, a crucial axis on which much of the Internet’s utility depends.

By buying Pasadena, Calif.-based Overture, Yahoo gains control of one of its most important business partners and strikes a blow against Google and Microsoft.

A fierce rival of Google, which offers ad-based results distinct from its popularity-based search rankings, Overture now threatens to become more formidable by tapping into Yahoo’s greater resources, which included $1.1 billion in cash as of June 30.

Privately held Google, which provides some search results to Yahoo, declined to comment on Monday’s deal. Microsoft, whose MSN service, like Yahoo, has been collecting steady profits from Overture, was circumspect.

Lisa Gurry, MSN’s group product manager, said the software giant will make its next move after examining how Yahoo’s deal might affect its relationship with Overture.

Although Yahoo executives said they hope to maintain Overture’s existing alliances with partners such as MSN, it seems improbable that the rivals will want to subsidize each other, said Danny Sullivan, editor of the industry newsletter Search Engine Watch.

“This hurts MSN because Overture had been one of its best buddies,” Sullivan said.

MSN has been pouring more resources into online searching in an effort to become less reliant on services provided by outsiders. Besides relying on Overture for some of its search results, MSN also draws upon Inktomi, a search engine service that Yahoo acquired earlier this year for $279.5 million.

During the past 18 months, Overture has become increasingly valuable to Yahoo, prompting predictions that the two companies eventually would unite.

Overture has played a pivotal role in Yahoo’s recent financial revival, accounting for roughly 20 percent of Yahoo’s revenue of $604 million during the first half of this year.

Conceived by dot-com entrepreneur Bill Gross in 1997, Overture developed a search engine that sorts its results based on how much advertisers are willing to pay to be ranked under specific words.

Overture’s commercial database feeds search engines at popular Web sites such as Yahoo and MSN, which display the advertising links along with results generated by objective, algorithmic formulas.

Ridiculed just a few years ago, the so-called “pay-for-performance” concept has turned into an online gold mine. Pay-for-performance search is expected to generate $2 billion in revenue this year and U.S. Bancorp Piper Jaffray expects the lucrative niche will reach $5 billion in 2006.

Overture has cashed in on pay-for-performance’s popularity, attracting 88,000 advertisers while generating earnings of $114 million since it first became profitable in the summer of 2001.

But the company’s success attracted more competition, most notably from Mountain View, Calif.-based Google, which has lured away pivotal partners such as AOL and EarthLink and spurred pricing concessions that have lowered Overture’s profit margins.

Although it followed in Overture’s footsteps, Google now has a slight edge over its rival in the United States. Domestically, Google’s network generated about 54 percent of all paid search results compared to 45 percent for Overture, according to market research compiled by comScore qSearch.

The competitive pressures prompted Overture’s management to lower its profit projections earlier this year and contributed to a downturn in the company’s stock, opening the door for Yahoo’s offer.

The deal supplements Yahoo’s recent acquisition of Inktomi with two other search engine services, AltaVista and Alltheweb.com, that Overture bought earlier this year for a total of $207 million.

Putting all those search engine tools under one roof is likely to create overlap, Sullivan said.

Yahoo executives believe all the services will help further its quest to overtake Google as the Web’s most popular search engine.

“We now own all the crucial elements of an end-to-end search offering,” Yahoo CEO Terry Semel said during an analyst call Monday.

Google continues to provide some of Yahoo’s search results. Semel declined to comment how the Overture acquisition will affect Yahoo’s relationship with Google. “I didn’t lay awake last night wondering about that,” Semel said in an interview Monday.

As a counter-punch to Yahoo’s moves, Microsoft seems more likely to acquire a search engine company, Sullivan said.

Potential candidates include Ask Jeeves Inc., FindWhat.com Inc. and, perhaps even Google.

MSN’s Gurry declined to comment on the company’s possible interest in Google.

Anthony Cox’s Google Bomb.. https://ianbell.com/2003/07/10/anthony-coxs-google-bomb/ Thu, 10 Jul 2003 18:13:11 +0000 https://ianbell.com/2003/07/10/anthony-coxs-google-bomb/ http://www.guardian.co.uk/online/story/0,3605,994676,00.html

The war on the web

Anthony Cox describes how his spoof error page turned into a ‘Google bomb’ for weapons of mass destruction

Thursday July 10, 2003 The Guardian

I had always wondered how those viral emails or amusing web page addresses forwarded to me built up such momentum. Little did I know that I would be responsible for one of the most successful internet memes this year, and be accused of developing a so-called “Google bomb” of mass destruction.

In early February, I was reading online a Guardian article about Hans Blix’s problems obtaining cooperation in Iraq. Immediately after, I was confronted with the ubiquitous 404 error page, which usually tells the reader that a website is unavailable. With this serendipitous inspiration in mind, along with a text editor and some fiddling in a graphics package, I created a spoof 404 “weapons of mass destruction” error page. Saddam would have been proud; the page was deployed and operational well within 45 minutes.

After favourable comments from friends, I posted it in the newsgroup uk.rec.humour. Within the next 24 hours, the website had had 150,000 hits and had propagated to 118 newsgroups. By the end of February, it had received more than one million page impressions. Perhaps the ultimate accolade was having the original email come back to me with a note saying: “Have you seen this?” Visits declined throughout the subsequent war, and I suspected its 15MB of fame had passed.

Yet, suddenly, in the first four days of July I received nearly 4m page impressions, more than the previous five months combined. The reason? Typing “weapons of mass destruction” in Google and hitting the “I’m feeling lucky” button did not bring up Number 10’s “dodgy dossier”, but my spoof site. Suddenly, it was a lot funnier and accessible: even Google couldn’t find the WMD.

The first Google bomb was created by Adam Mathes in 2001. He exploited Google’s page ranking system to return a friend’s website when the words “talentless hack” were used as a search term. He used a multitude of pages linking to his friend’s site, with the specific term “talentless hack”. Even though his friend’s site did not contain the search term itself, after calling upon others to insert such links into their sites, the Google bomb found its target.

Google’s page ranking treats links as votes for a website, and both the number and the importance of the link helps increase the ranking of a site. My site had steadily increased its ranking, including a link from the Channel 4 news website and the Guardian, but perhaps the majority were from personal pages, discussion boards and blogs.

However, this was not a deliberate attempt to use Google to make a political point. This Google bomb was slowly and unknowingly built, and only by chance coincided with the accusations that intelligence documents had been “sexed up”.

Last Friday, bloggers really picked up on it and it was the most linked-to page in weblogs according to Daypop.com. On Monday, however, a search for “weapons of mass destruction” sent you to a White House strategy document, which might be seen as a step forward for Google users and perhaps the White House.

Then on Tuesday my page was back at the top, so it may have been a glitch at Google, rather than a deliberate decision to drop the site.

This is a problem for Google: weblogs have been accused of causing “noise” in their searches. Instead of providing good original source material, reams of musings from bloggers are returned. The success of my WMD page underlines a problem Google needs to address. Sure it’s funny, but if you wanted documents on WMD, is that what you really expect from a search engine?

I have received about 200 emails from such diverse sources as United Nations Monitoring, Verification and Inspection Commission and serving soldiers in the Gulf. Even those critical of the perceived anti-war message thought it was funny. One of the more offensive messages called me a cowardly little boy and stated: “I am grateful to the almighty that not all Englishmen are slithering bottom-feeders.”

Ironically, I was not against the war, my views on the war being similar to those of journalist David Aaronovitch and MP Ann Clwyd. But if you are going to make a topical joke, then Bush is an obvious and easy target.

·Anthony Cox is a pharmacist at the West Midlands Adverse Drug Reaction Monitoring Centre and a teaching fellow at Aston University. He also writes a blog on drug safety at www.blacktriangle.org

Bloggers ARE the Internet… https://ianbell.com/2003/06/25/bloggers-are-the-internet/ Wed, 25 Jun 2003 08:05:28 +0000 https://ianbell.com/2003/06/25/bloggers-are-the-internet/ http://www.guardian.co.uk/online/comment/story/0,12449,974523,00.html

Blogging’s too good for them

Paul Carr Monday June 9, 2003 The Guardian

Walking through the streets of Blogistan this week, I couldn’t help noticing a certain tension in the air. The natives were restless. The saloon bars were abuzz with nervous chatter. And it wasn’t about Buffy the Vampire Slayer. Something was most definitely up. But what? And who was this Eric Schmidt fellow that everyone was talking about? And why did I seem to be the only person in the world without his own weblog? Questions, questions.

Well it turns out that Schmidt is the CEO of Google (who knew?) and, if rumours are to be believed, he has plans to move weblogs out of the search engine’s main index and into a separate, less highly trafficked directory. What an absolute cad. Or at least he would be if the rumours weren’t just speculation – the result of an enthusiastic leap of blogic by IT news site the Register, who suggested that when Google launches its new weblog search tool, it may also decide to purge bloggers from its main database. Possibly.

No need for ordinary Blogistanis to panic just yet then – but the rumours did give internet experts an excuse to get all het up about the undue prominence of weblogs in Google search results. No matter what you search for – celebrity gossip, weapons of mass destruction, insect recipes, donkey porn – you can bet your bottom dollar that above the research papers and official news sources you’ll find a load of bloggers putting in their two pennyworth.

“Foul!” cry the blogger haters, “these two-bit amateur diarists are taking over the internet – it’s time we shoved them off into their own search engine, where they can do no more harm.” Just imagine… no more illiterate teenage wannabes clogging up the world’s most popular search engine with their idiotic “which Sex And The City character are you?” quizzes and incestuous links to their mates. No more American neo-Nazis babbling on about the Dixie Chicks and inciting racial hatred. No more tree-huggers talking about henna tattoos, home schooling and tofu. Just a list of proper sites full of proper information, written by proper journalists and proper academics. Fantastic. And if people want to hang out with Joe Blogs then fine, they can just click the appropriate tab and wallow until their brains turn to mush.

The only slight problem is that, despite what some commentators would have you believe, bloggers are not the scourge of the internet. In fact they are the internet. The whole point of the web was to allow anyone, regardless of budget or influence, to share information with the rest of the world. It certainly wasn’t supposed to be a giant electronic shopping mall or an interactive brand extension for major broadcasters and publishers.

Also, there seems to be an assumption that all weblogs are pointless, self-absorbed amateur journals that can be lumped together under a single search tab. This despite the fact that an increasing number of high-profile journalists and publishers are using weblog software as an easy and cost-effective way to deliver first-rate, original content to thousands – or even millions – of readers. Take Salam Pax, the Iraqi who has just been recruited by this newspaper on the strength of his wartime weblog.

While my favourite tabloid columnist, Tony “idiot” Parsons spent the conflict in front of his computer bashing out page after page of laddish nonsense for the Mirror’s unique readership of warmongering peaceniks, Salam was in Baghdad, using his blog to drive home the realities of war to a vast international audience. And yet, if the haters had their way Salam would be dragged off into the bloghetto while Parsons remained a free man. What kind of justice is that?

Do they really believe that it’s possible to separate the web into legitimate information sites (good) and weblogs (evil) or that by purging bloggers from Google, the internet will suddenly become more relevant and more useful? Not only is this hilariously simplistic but it’s also diverting attention from the real problem – that the web is drowning in a sea of crap, created partly by the less literate webloggers but also by biased media outlets, hate groups, pointless personal homepages, porn sites, multilevel marketers and out and out loons.

If Google really wants to improve its service then it should forget about trying to treat bloggers as one homogenous, problematic group and start developing intelligent search robots that are capable of separating the wheat from the chaff across the entire web. These robots should: a) look at the actual content of a site and decide whether the content is useful and worth reading, b) group it together with other relevant sites to give surfers a comprehensive overview of all the available information on whatever subject they’re interested in and c) ensure that these handy packages of links and information appear at the top of the search results, above all the unfiltered rubbish.

A utopian technological fantasy? Not really. In fact these robots already exist. They’re called webloggers. And without them Google’s index would be a much poorer place.

· Paul Carr is editor of The Friday Thing (www.thefridaything.co.uk). His new print publication, The London News Review, launches in August

2002: The Year In Technology https://ianbell.com/2002/12/27/2002-the-year-in-technology/ Fri, 27 Dec 2002 09:14:31 +0000 https://ianbell.com/2002/12/27/2002-the-year-in-technology/ http://www.newscientist.com/news/news.jsp?id=ns99993215

*2002: The year in technology*

09:00 25 December 02

Will Knight

The entertainment industry upped its attack on internet file-sharing in 2002 by introducing new and controversial “copy protection” technologies to prevent computer copying of music and movies.

The year began on a sour note when the company behind the Compact Disc standard, Philips, publicly condemned <http://www.newscientist.com/news/news.jsp?id=ns99992271> in certain Macintosh computers, causing them to crash and refuse to reboot. A piece of sticky tape or a marker pen was then shown to be enough to defeat another protection system <http://www.newscientist.com/news/news.jsp?id=ns99992464> file sharing networks and connected computers to disrupt infringement. The plans have caused outrage and prompted some researchers to develop pre-emptive countermeasures. The year also saw technological developments that promise to keep computer systems more secure. In May, the first ever commercial quantum encryption device was unveiled by Swiss company id Quantique. By exploiting the quantum properties of photons to transmit information, quantum cryptography can deliver unbreakable encryption keys.

In October, researchers at the UK’s defence research agency QinetiQ demonstrated the same trick through thin air, firing a stream of quantum bits <http://www.newscientist.com/news/news.jsp?id=ns99993114>. In the same month Austrian researchers demonstrated the first quantum calculation <http://www.newscientist.com/news/news.jsp?id=ns99991893>, made from a single carbon nanotube, was revealed. With a diameter of only 75 nanometres, the instrument can measure the temperature change that occurs when a few molecules react with one another.

The endlessly versatile carbon nanotube was then shown also to have an explosive side <http://www.newscientist.com/news/news.jsp?id=ns99992389> of computer storage beyond current limitations.

*Number cruncher*

At the other end of the computing scale, meanwhile, the race to build the world’s most powerful scientific supercomputer gained momentum. In April, Japan’s Earth Simulator at the Marine Science and Technology Center in Kanagawa was crowned as the new supercomputing world champion <http://www.newscientist.com/news/news.jsp?id=ns99993080> over the next three years.

2002 also saw the first match between a world chess champion and the world’s leading computer player since another IBM computer, Deep Blue, defeated Garry Kasparov in a controversial match held in 1997.

In October, the current world champion Vladimir Kramnik took on <http://www.newscientist.com/news/news.jsp?id=ns99992947>.

One of the more bizarre and controversial technological breakthroughs of the last year involved harnessing a different kind of non-human intelligence. In May a team at the State University of New York implanted radio-controlled electrodes in rats’ brains to create the world’s first radio-controlled automaton <http://www.newscientist.com/news/news.jsp?id=ns99992200> to 56.6 million, placing the country behind only the US in terms of internet use. And with a total population of over one billion, China could have an online population of around 257 million by 2005.

The Chinese government also increased efforts to control use of the internet in 2002. In September, the government prevented surfers behind the country’s “Great Firewall” from accessing the search engine Google, which caches many restricted sites. But a reversed version of Google called elgooG <http://www.newscientist.com/news/news.jsp?id=ns99992449>.

While Microsoft claims this will put security first by controlling what software can be run on a computer, critics allege it could be used