Google Does Not Have Whitelists for Panda and Penguin
February 13, 2015
In a recent Google hangout session, John Mueller — one of the leading names in Google’s search division and well known in the community around the Google Webmaster forums — revealed that the company does not maintain a whitelist of websites for the Panda and Penguin search algorithms.
For these purposes, a whitelist is a list of websites and URLs that are exempt from the algorithms and thus unaffected by them. In practice, a whitelisted website would be able to avoid link penalties under the Penguin algorithm or poor-content penalties under the Panda algorithm.
Rumours about possible exemptions had been floating around for a while, which is why Mueller felt the need to elaborate on the issue. He was also supported by Matt Cutts, who confirmed that neither algorithm maintains what he called an exemption list.
That isn’t to say the company doesn’t have whitelists at all. Google has previously stated that a few of the algorithms making up its overall search engine do have whitelists attached to them; however, Mueller would not go into further detail about which algorithms those are or which URLs have been placed on the lists.
Mueller commented: “For the most part we don’t have any special whitelist where we can say, well this website is actually okay, therefore we will take it out of this algorithm.
For some individual cases we do that. So it depends on the algorithm.
So for a lot of the general search algorithms we don’t have that ability but for some individual algorithms we do need to be able to kind of take manual actions and say well…
For example the Safe Search algorithm is picking up on these words on this website as being adult. Adult website similar but actually they’re talking about, I don’t know, animals or something completely unrelated.”
Video of the statement can be found below and is recommended viewing for anybody interested in search:
So in practice the idea is a lot less sinister than it might appear on the surface. After all, it is entirely plausible, and quite fair, for the company to make exceptions for something like the Safe Search algorithm when it filters results that are not actually related to the content of the site in question.
If anything, this shows the amount of work that goes into creating these algorithms, as well as the work that is still being done to improve them. Of course, if similar exemptions were offered for the Panda and Penguin algorithms there would likely be more of an uproar, as they would be far harder to justify than the Safe Search example. Happily, it seems the issue can now be put to bed.