Life can be cruel. You work hard to create new and compelling content for your site. You’ve studied legitimate SEO techniques. Everything is going well and your site is getting decent page rank scores across the board. Then it hits. A search engine penalty comes out of nowhere and knocks your site out of the index. Talk about a bad day.
To be honest, most webmasters who get penalized know why it happened. They’ve used a myriad of...
Read More
One of the best parts of publishing online is that, on the Web, anyone can have a worldwide reach. But while the Internet makes going global easy, ensuring that the content you produce will be found by the right audience can be a real challenge. Search engines can have trouble understanding geotargeting because of a few technical limitations. These include:
Search engines may not be crawling your site from the location of your customers...
Read More
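The excerpt above is cut off before it finishes listing the limitations, but the first one, that a search engine may crawl your site from a location far from your customers, is the reason IP-only geotargeting can hide regional content from crawlers. As a hedged illustration only (the locale map and function below are hypothetical, not anything the post prescribes), here is a minimal Python sketch that picks a regional variant from the visitor's Accept-Language header rather than from IP address alone, so a crawler fetching from a single data center can still reach every regional version.

```python
# Hypothetical illustration: pick a regional page variant without relying
# solely on the visitor's IP address, so crawlers fetching from one
# geographic location are not locked out of other regions.

REGIONAL_VARIANTS = {          # assumed mapping for this sketch
    "en-us": "/us/",
    "en-gb": "/uk/",
    "fr-fr": "/fr/",
}
DEFAULT_VARIANT = "/us/"

def pick_variant(accept_language: str) -> str:
    """Return the URL prefix of the best-matching regional variant.

    `accept_language` is the raw Accept-Language request header,
    e.g. "fr-FR,fr;q=0.9,en;q=0.8".
    """
    for part in accept_language.split(","):
        tag = part.split(";")[0].strip().lower()
        if tag in REGIONAL_VARIANTS:
            return REGIONAL_VARIANTS[tag]
    return DEFAULT_VARIANT

if __name__ == "__main__":
    print(pick_variant("fr-FR,fr;q=0.9,en;q=0.8"))  # -> /fr/
    print(pick_variant("de-DE,de;q=0.9"))           # -> /us/ (fallback)
```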
One of the most common challenges search engines run into when indexing a website is identifying and consolidating duplicate pages. Duplicates can occur when any given webpage has multiple URLs that point to it. For example:
URL: https://hubu5cn4ze.proxynodejs.usequeue.com/
Description: A webmaster may consider this their authoritative or canonical URL for their homepage.

URL: https://vjbyp0ytgl.proxynodejs.usequeue.com/
Description: However, you can add ‘www’ to most websites and still get the...
Read More
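The table above is truncated, but the underlying idea, that several URLs can resolve to one page and should be consolidated to a single canonical form, can be sketched in a few lines of Python. The normalization rules below (lowercasing the host, dropping a leading "www.", stripping extra trailing slashes) and the example.com URLs are illustrative assumptions for this sketch, not the rules any particular search engine applies.

```python
# Illustrative sketch: normalize duplicate URLs to one canonical form.
# The specific rules below are assumptions for the example only.
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url: str) -> str:
    """Map URL variants (www vs. non-www, trailing slashes) to one form."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    netloc = netloc.lower()
    if netloc.startswith("www."):
        netloc = netloc[len("www."):]   # treat www and non-www as one site
    path = path.rstrip("/") or "/"      # drop redundant trailing slashes
    return urlunsplit((scheme, netloc, path, query, ""))

duplicates = [
    "https://example.com/",
    "https://www.example.com/",
    "https://example.com//",
]
print({canonicalize(u) for u in duplicates})  # a single canonical URL remains
```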
Everyone knows great content is fundamental to the success of your site. It is the reason people are looking for your site and it is why they will stay. In the previous three posts from this series, we have discussed how large sites need to expose fewer URLs, how you can help search engines crawl more efficiently, and how not to unintentionally hide your content from crawlers.
Read More
Just recently a strange problem came across my desk that I thought was worth sharing with you. A customer notified us that content from a site she was interested in was not showing up in our results. Wanting to understand why we may or may not have indexed the site, I took a look to see what the problem was and stumbled upon an interesting but potentially very bad use of the robots.txt file. The first visit I made to the site had a very standard...
Read More
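The excerpt stops before the details of that robots.txt, so I won't guess at its contents. As a general, hedged sketch of how a single robots.txt rule can keep an entire site out of an index, the Python below uses the standard-library urllib.robotparser to test whether a crawler may fetch a URL; the rules, user agent, and URLs shown are invented for illustration.

```python
# Hedged sketch: check what a robots.txt actually permits, using the
# standard-library parser. The rules, user agent, and URLs below are
# made up for illustration; they are not the file described in the post.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for url in ("https://example.com/", "https://example.com/products/widget"):
    allowed = parser.can_fetch("msnbot", url)
    print(f"{url} -> {'allowed' if allowed else 'blocked'}")
# A blanket "Disallow: /" like this blocks every compliant crawler
# from every page on the site.
```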
As a member of the Live Search Webmaster Team, I'm often asked by web publishers how they can control the way search engines access and display their content. The de facto standard for managing this is the Robots Exclusion Protocol (REP), introduced back in the early 1990s. Over the years, the REP has evolved to support more than "exclusion" directives; it now supports directives controlling what content gets included, how the...
Read More
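Beyond robots.txt, the REP directives the post refers to can also be applied per page, with a robots META tag in the HTML or an X-Robots-Tag response header for non-HTML files. As a hedged sketch only (the handler and the directive values chosen are illustrative, not a statement of how Live Search interpreted them), here is a tiny Python HTTP handler that emits both.

```python
# Hedged sketch: serve a page with REP directives at two levels, a robots
# META tag inside the HTML and an X-Robots-Tag response header.
# The directive values chosen here are illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"""<html>
  <head><meta name="robots" content="noindex, nofollow"></head>
  <body>Internal report - please do not index.</body>
</html>
"""

class REPHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        # Header-level equivalent of the META tag, useful for PDFs, images,
        # and other non-HTML responses.
        self.send_header("X-Robots-Tag", "noindex, noarchive")
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), REPHandler).serve_forever()
```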
Today we’re pleased to announce an update to the Sitemaps Protocol, in collaboration with Google and Yahoo! This update should help many new sites adopt the protocol by increasing our flexibility on where Sitemaps are hosted. Essentially, the change allows a webmaster to store their Sitemap files just about anywhere, using a reference in the robots.txt file to establish a trusted relationship between the Sitemap file and the domain or...
Read More
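The change described above hinges on a single line in robots.txt: a Sitemap: directive pointing at the full URL of the Sitemap file, wherever it is hosted. A minimal Python sketch of reading that directive follows; the robots.txt content and hostnames are invented for the example.

```python
# Minimal sketch: find Sitemap directives in a robots.txt file.
# The robots.txt content and hostnames below are invented for illustration.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/

Sitemap: https://cdn.example.com/sitemaps/sitemap-main.xml
Sitemap: https://cdn.example.com/sitemaps/sitemap-news.xml
"""

def sitemap_urls(robots_txt: str) -> list:
    """Return every URL declared with a Sitemap: directive."""
    urls = []
    for line in robots_txt.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "sitemap" and value.strip():
            urls.append(value.strip())
    return urls

print(sitemap_urls(ROBOTS_TXT))
```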
Today we’re pleased to announce several improvements in the crawler for Live Search that should significantly improve the efficiency with which we crawl and index your web sites. We are always looking for ways to help webmasters, and we hope these features take us a few more steps in the right direction.
HTTP Compression: HTTP compression allows faster transmission time by compressing static files and application responses, reducing...
Read More
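HTTP compression is negotiated with a pair of headers: the client advertises Accept-Encoding: gzip and the server answers with Content-Encoding: gzip and a compressed body. The Python sketch below shows that server-side step in isolation; the helper function, content, and sizes are illustrative assumptions, not how the Live Search crawler or any particular server implements it.

```python
# Hedged sketch: gzip-compress a response body when the client's
# Accept-Encoding header permits it. Content and sizes are illustrative.
import gzip

def compress_if_accepted(body: bytes, accept_encoding: str):
    """Return (body, headers) honouring the Accept-Encoding request header."""
    if "gzip" in accept_encoding.lower():
        compressed = gzip.compress(body)
        return compressed, {"Content-Encoding": "gzip",
                            "Content-Length": str(len(compressed))}
    return body, {"Content-Length": str(len(body))}

html = b"<html><body>" + b"Repeated markup compresses well. " * 200 + b"</body></html>"
small, headers = compress_if_accepted(html, "gzip, deflate")
print(len(html), "bytes uncompressed ->", len(small), "bytes sent as",
      headers.get("Content-Encoding", "identity"))
```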
Today we’re happy to take the “private” label off our Webmaster Tools beta, and open it up to all webmasters and SEO professionals. This is a first step by Live Search to make it easier to ensure your site is getting the best possible crawling, indexing, and representation in Live Search results. Our goal is that it will lead to better traffic for your site, and better search results for our customers.
In addition to the tools...
Read More