Okay, some definitions first:
- “Known bad” strategy means covert collection of the attributes used by the fraudsters – primarily devices, but also email addresses, phones, etc. – in order to detect repeat usage of them. It’s essentially a blacklisting technique, implying that if you are not blacklisted, you are good to go.
- “Known good” is pretty much the opposite – it’s an overt policy of collecting the attributes – primarily devices, but also email addresses, phones, etc. – to gain the necessary assurance that it is the good guys who are using them. It’s effectively whitelisting, implying that if you are not whitelisted, you are a potential suspect. Naturally, to get an attribute whitelisted (or marked as ‘trusted’), the user has to go through a verification process. For example, to whitelist a machine, the user has to enter a code sent via email or SMS (essentially a 2FA approach; see the sketch right after these definitions).
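To make the whitelisting flow concrete, here is a minimal sketch of how a machine could get marked as ‘trusted’ with an emailed or texted code. Every name in it (trusted_devices, start_login, confirm_device, the in-memory stores) is my own illustrative assumption, not a description of any particular product:

```python
import hashlib
import secrets

# Hypothetical in-memory stores; a real system would persist these in a database.
trusted_devices = set()   # (user_id, device_id) pairs already whitelisted
pending_codes = {}        # (user_id, device_id) -> one-time code awaiting confirmation

def device_id_from(attributes: str) -> str:
    """Derive a stable identifier from whatever device attributes are collected."""
    return hashlib.sha256(attributes.encode()).hexdigest()

def start_login(user_id: str, attributes: str, send_code) -> str:
    """Let a trusted device straight through; challenge an unknown one."""
    device = device_id_from(attributes)
    if (user_id, device) in trusted_devices:
        return "ok"
    code = f"{secrets.randbelow(10**6):06d}"   # 6-digit one-time code
    pending_codes[(user_id, device)] = code
    send_code(user_id, code)                   # delivered via email or SMS
    return "challenge"

def confirm_device(user_id: str, attributes: str, entered_code: str) -> bool:
    """Whitelist the device once the user echoes back the code that was sent."""
    device = device_id_from(attributes)
    if pending_codes.get((user_id, device)) == entered_code:
        del pending_codes[(user_id, device)]
        trusted_devices.add((user_id, device))
        return True
    return False
```

The key point is that the device only enters the whitelist after a verification step tied to something the legitimate user controls (their mailbox or phone).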
Now, the traditional strategy adopted by the cyber security guys has always been the first one – just like in “offline” life, where we all enjoy the presumption of innocence (unless we slide into a totalitarian form of government) and where the “blacklists” are reserved for a few suspected criminals. It is definitely the more intuitive approach and, to a certain degree, an effective way of raising the bar in online security. However, it becomes increasingly inefficient as fraudsters get more sophisticated at hiding their identity. Indeed, only lazy or grossly uneducated fraudsters fail to delete their cookies (historically, the number one way of identifying a device) today. Adobe’s FSO – which succeeded the cookie – is next to fall. Soon the larger fraudster community will discover the beauty of sandboxing. In essence, it is a matter of the appropriate tools being developed and made available on the “black market” – the average fraudster doesn’t even have to know all the gory details to use them. Thus, as I mentioned in my previous post, device fingerprinting is pretty much doomed.
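To see why the “known bad” check is so easy to defeat, consider a minimal sketch of the classic cookie-based lookup; the names and the in-memory blacklist are my own illustrative assumptions:

```python
import uuid

# Device IDs previously tied to fraud (a made-up example entry).
blacklisted_devices = {"device-id-seen-in-prior-fraud"}

def known_bad_check(cookies: dict) -> str:
    """Blacklist lookup keyed on an identifier the client itself stores."""
    device_id = cookies.get("device_id")
    if device_id is None:
        # No cookie at all: to the blacklist this looks like a brand-new device,
        # which is exactly what a fraudster achieves by simply clearing cookies.
        cookies["device_id"] = uuid.uuid4().hex
        return "allow"
    return "block" if device_id in blacklisted_devices else "allow"
```

The whole scheme hinges on the client faithfully presenting the same identifier every time – which is precisely the assumption a motivated fraudster breaks.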
By contrast, the “known good” strategy is increasingly gaining traction with online businesses. Initially unpopular because it introduces another hoop for legitimate users to jump through (businesses hate that), it simply works much better by definition. To get around it, fraudsters now need to gain access to the victim’s email account or cellphone, or hack the computer itself (it should also be mentioned that, on a conceptual level, the superiority of whitelisting over blacklisting is apparent in many other cases – such as keeping user input under control).
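To illustrate that last parenthetical with input validation, here is a tiny contrast sketch; the “bad” character set and the username pattern are arbitrary assumptions, chosen purely to make the point:

```python
import re

BAD_CHARS = set("<>\"';")  # characters we happen to know get abused

def blacklist_validate(username: str) -> bool:
    """'Known bad': reject input containing characters we have seen abused.
    Anything we failed to anticipate slips through."""
    return not any(ch in BAD_CHARS for ch in username)

def whitelist_validate(username: str) -> bool:
    """'Known good': accept input only when it matches an explicitly allowed pattern.
    Anything we did not anticipate is rejected by default."""
    return re.fullmatch(r"[A-Za-z0-9_]{3,32}", username) is not None
```

The blacklist has to enumerate every trick in advance; the whitelist fails closed.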
The switch to “known good” is not a painless exercise and, yes, it introduces an additional hurdle for the business, but it may prove to be the cheapest way of putting a dent in losses by making account takeovers much harder to hide. Both in terms of nuisance to the users and in terms of cost, it fares much better than some of the extra measures I see on many websites – such as selecting an image, asking additional questions, etc. – so my take is that the popularity of the “known good” approach will continue to rise.