A WebmasterWorld thread cites a new research paper from Microsoft Research titled Augmenting Web Pages & Search Results For Improved Credibility (PDF).
In short, how does or could Bing determine the credibility of a web page based on the content and signals surrounding it? And if Bing can determine a page's credibility, how might it use that to rank pages in Bing search results?
Here is the abstract:
I was going to go through and pull out the metrics Microsoft covered in the paper, but I do not have to do that. Bill Slawski already did that at his blog, SEO By The Sea. He summarized:
Here are some of the "credibility" signals that they looked at:
On-Page Features
- Spelling Errors
- Number of ads on a page
- Domain Type (.com, .gov, etc.)
Off-Page Features
- Awards and certifications, such as the Webby Award, Alexa Rank, and Health on the Net (HON) awards
- Toolbar PageRank, and rankings for queries used in generating their data set
- Sharing information from sites like Bit.ly and other shortening sites; likes, shares, comments, and clicks from Facebook; clicks from shortened URLs on Twitter; bookmarks on Delicious
User Aggregated Non-Public Data from Toolbar Usage
- General Popularity – unique visitors from users
- Geographic Reach – number of visitors from different geographic regions
- Dwell Time – amount of time users kept a URL open in their browser (as an estimate of how long they might have viewed a page)
- Revisitation Patterns – how often people revisited a page, on average
- Expert Popularity – the behavior of people who have been shown to have expertise in a particular field, and user data about their visits to pages in those fields
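To make the idea concrete, signals like these are typically normalized and combined into a single credibility score, for example via a weighted linear model. Here is a minimal sketch of that approach; the feature names, values, and weights below are purely illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch: combining on-page, off-page, and usage signals
# into one credibility score. All names and weights are made up for
# illustration; the paper does not publish a scoring formula.

def credibility_score(features, weights):
    """Weighted linear combination of normalized credibility signals."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

page_features = {
    "spelling_error_rate": 0.02,  # on-page: fraction of misspelled words
    "is_gov_domain": 0.0,         # on-page: domain type indicator
    "pagerank": 0.6,              # off-page: toolbar PageRank, normalized 0-1
    "facebook_shares": 0.4,       # off-page: social sharing, normalized
    "dwell_time": 0.7,            # usage: normalized average dwell time
    "expert_visits": 0.3,         # usage: visits by topic experts, normalized
}

weights = {
    "spelling_error_rate": -2.0,  # more spelling errors lower the score
    "is_gov_domain": 0.5,
    "pagerank": 1.0,
    "facebook_shares": 0.5,
    "dwell_time": 0.8,
    "expert_visits": 1.2,
}

score = credibility_score(page_features, weights)
print(round(score, 3))  # prints 1.68 for these illustrative values
```

In practice a search engine would more likely learn such weights from labeled data with a classifier rather than hand-tune them, but the basic structure (many weak signals folded into one score) is the same.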
Very nicely done, and Bill goes through it in even more depth on his blog.
Forum discussion at WebmasterWorld.