“Unexpected” challenges applying machine learning in fraud detection

A story came out recently about super-sophisticated self-driving cars being easily duped by relatively simple tricks used by some hackers. On the surface this seems shocking: the brightest minds of top engineering companies have been working hard to make the promise of self-driving cars a reality – and to trigger a true revolution in our day-to-day lives. In fact, it is hardly surprising. The algorithms, and the signals they rely on, were probably never trained to resist active sabotage. They were merely trying to replace human beings in routine activities, just as they do in other areas such as language translation or image recognition. In ‘non-adversarial’ circumstances, the performance of an algorithm can be steadily improved over time. Once you have achieved a certain reasonable threshold (e.g. detecting objects in pictures), you are not going to slip back into not recognizing them, even if you stop adding more features or bigger training data sets.

With fraud you are dealing with a different animal – the patterns you are trying to detect are actively trying to hide from you. Successful detection yesterday doesn’t guarantee the same performance tomorrow. As the famous security expert Bruce Schneier once noted, “Attacks never get worse, they only ever get better.” And they do evolve, change, adapt and advance in quite unexpected ways.

Does this mean machine learning is ultimately powerless against the human creativity directed against it? Of course not. It is being successfully used to detect online fraud at top-tier financial and business institutions, some with spectacular results. Not to mention select human-vs-machine clashes, such as chess or Jeopardy!, where ML algorithms actually proved able to beat the best human experts. However, to achieve consistent results in practice, one should keep the following in mind:

  • Continuous learning that relies on fresh data is imperative. You are essentially teaching the algorithm to detect a constantly moving pattern, and the models will quickly degrade over time if they stay intact (see the sketch after this list).
  • Consistent investment into ever-more sophisticated features is also non-negotiable. Throwing more of the same data (going further back into history) is not going to help much, and squeezing more juice from the same data has its natural limits, too. The world constantly evolves and so should your features (in the self-driving car hacking example, the ‘feature’ itself was actually compromised).
  • Typically, no single solution will suffice to cover the entire (again, constantly evolving) fraud landscape – thus proper investment is necessary into the “plumbing” which enables complex execution plans such as multi-tier decisioning, running models in parallel, and applying different modeling techniques.
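
To make the first point concrete, here is a minimal sketch of retraining on a rolling window of fresh labeled data – illustrative only, assuming daily batches of labeled transactions and scikit-learn; all names are made up:

```python
# A minimal sketch of continuous learning on fresh labels, assuming daily
# batches of labeled transactions and scikit-learn; all names are illustrative.
from collections import deque

from sklearn.ensemble import GradientBoostingClassifier

class RollingFraudModel:
    def __init__(self, window_days: int = 30):
        # Keep only the most recent daily batches: old patterns age out.
        self.window = deque(maxlen=window_days)
        self.model = GradientBoostingClassifier()

    def ingest_day(self, X_day, y_day):
        """Add one day of freshly labeled transactions and retrain."""
        self.window.append((X_day, y_day))
        X = [row for batch_X, _ in self.window for row in batch_X]
        y = [label for _, batch_y in self.window for label in batch_y]
        self.model.fit(X, y)

    def score(self, X):
        # Probability of the positive ("fraud") class.
        return self.model.predict_proba(X)[:, 1]
```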

Back to the self-driving cars. Making them robust in the face of attempted sabotage will prove to be a much more costly and complex exercise, but it is nevertheless needed to make them compete with human driving (even as the latter is itself getting more vulnerable). Recognizing the differences between ‘classic’ machine learning practices and those aimed at fighting active fraud/sabotage is going to help along the long road ahead…

Why face recognition as a way to replace passwords will remain a fantasy

Replacing the much-hated (yet resilient) password with face recognition-based authentication has long been a cool idea of ‘how things will work tomorrow’ – yet ‘tomorrow’ in terms of mass adoption never really happened. Some may argue that the stars were not really aligned till now, but may be aligned very soon. Indeed, facial recognition methodology (naturally) keeps getting better. User-facing cameras (which just several years ago were limited to PCs equipped with an extra webcam) are increasingly omnipresent – from laptops to tablets to smartphones. And the pain of remembering passwords keeps getting worse. The idea is pursued by a variety of smaller companies like KeyLemon or Sensible Vision, and face recognition features even made it into the Android mobile OS. Moreover, as recently as last month, none other than the formidable Jack Ma demonstrated how Alipay may allow payment authorization exclusively via face recognition.

So… is the tomorrow of “authorize with a ‘faceprint’” finally happening? I venture a prediction that it will never graduate from a cool concept to widely accepted practice. I can name at least two reasons why:

  • As with any other authentication mechanism, it’s going to be a cat-and-mouse game – the authentication technology will get better only to be defeated by ever-creative fraudsters. In cases where the attackers are inherently capable of moving faster than the defense, the ‘cat’ is pretty much doomed. We could reach a point – just as happened with captcha – where building more defenses becomes unfeasible. How does this apply to the face recognition domain? The weakness of using face recognition for authentication is nothing new – e.g. these guys nailed it back in 2009. True, the recognition software has improved a lot since then, and some interesting ideas like detecting eyeball movement or blinks may offer a chance, but then again, attacking these defenses to fool the software into false positives is getting cheaper at a faster pace (3D-printed masks, colored lenses, video-generated images?).
  • Any change in consumer behavior on a massive scale needs a push from a very large player interested in making money on it – such as Apple (case in point: mobile payments). Apple is hardly going to do it, though, as its newest devices already have fingerprint readers. While fingerprints arguably suffer from the same issues, they are a much more resilient biometric – fingerprints are far harder to obtain than pictures of potential victims (even taking this claim into account). Moreover, combine this observation with the dropping price of fingerprint readers, and envisioning even cheaper devices carrying one in the near future is easier than imagining face recognition used as the main biometric to identify end users. In addition, cameras can be used to scan fingerprints rather than faces. There’s little evidence that other large companies would have enough incentive to go against this trend.

Having said that, I can see how the ‘faceprint’ could be used as one choice of a biometric second factor, or in some physical stores that would like to appear futuristic to their customers. Maybe even in some airports. Wide adoption, however, may forever remain ‘the cool feature of tomorrow’.

What’s the biggest threat to Bitcoin’s future?

Bitcoin is making headlines on almost a daily basis – startups ranging from currency exchanges to ‘virtual wallets’ keep multiplying. Developments such as the first Bitcoin ATM machines and support from large-cap internet company CEOs all bode well for the future of the ‘independent internet currency’. What could threaten its future, one might ask?

It’s certainly not the technology. In fact, Bitcoin’s design cleverly protects it from brute-forcing by the ever-increasing processing power of the “miners”. It’s probably not its vulnerability to machinations and market price fluctuations – these typically die away as a market stabilizes. It may not even be the regulatory instinct of governments.
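
As a toy illustration of that design point (not Bitcoin’s actual consensus code – just the principle): proof-of-work ties block creation to a difficulty target that the network can keep raising, so raw hashing power alone buys an attacker very little.

```python
# A toy illustration of proof-of-work (not Bitcoin's actual consensus code):
# the difficulty target can be raised as miners get faster, so raw hashing
# power alone buys an attacker very little.
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(block_data + nonce) falls below the
    difficulty target, i.e. starts with `difficulty_bits` zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Small difficulty so the demo finishes quickly; the network simply raises
# this number as aggregate mining power grows.
print(mine(b"example block", 16))
```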

Instead, the biggest threat may be hidden in some of its core features, such as anonymity and lack of centralization – features that may attract disproportionately more shady transactions than legitimate ones. That may eventually lead to Bitcoin being increasingly viewed as a tool mostly serving criminals and terrorists, and eventually being forcefully outlawed.

True, any new technology is prone to abuse. Recording music and movies on magnetic tape gave a boost to piracy (which further exploded in the internet era). Ease of communication enabled the spread, on a vast scale, of anything from homophobic propaganda to child pornography. Fast international funds transfers made money laundering a lot easier. Readily available encryption tools hugely complicated the lives of anti-terrorism agencies. And so on. But in all these cases the nefarious usage of the new technology has been vastly outweighed by perfectly legitimate usage – the benefits also attracted huge segments of the legitimate population. Hence “containing” these technologies has historically proven problematic and economically unfeasible (bar the “great firewalls” built by some autocratic states).

How is Bitcoin different? Well, it certainly does offer benefits to the average Joe – such as anonymity and an alternative, inflation-proof investment vehicle. But then again, how important is an anonymous transaction compared to other areas where anonymity matters – such as internet browsing (TOR or VPN)? And how much would you invest in the Bitcoin currency as an alternative to a 401K? The fact that Bitcoin hasn’t been embraced by the millions – yet – may actually mean these benefits are not enough for common adoption.

By contrast, criminals are very quick to ‘jump’ on the new currency – one could argue it’s a perfect solution for securing the most vulnerable component of fraud rings: monetization. The best example is the recent outbreak of the CryptoLocker virus, which encrypts all your files and threatens to destroy the decryption key unless you pay the ransom – in Bitcoins, of course. A quick web search confirms that this strategy is actually working so far – thousands of victims have ended up paying their way out. For most of them it’s their first exposure to Bitcoin – and let’s face it, not the most pleasant introduction.

How Bitcoin would address this threat is unclear – breaking its architecture to provide some level of oversight may not be feasible. The key may lie in attracting more legitimate usage of the technology – which is without doubt growing really fast right now – but the question remains: will the adoption of Bitcoin by legitimate users outpace its exploding popularity among criminals?

Password Haystacks

In recent months the “dead horse” of password-based authentication got some new life in the form of so-called ‘password haystacks‘. The approach, introduced by well-known security expert (and one of my favorite gurus) Steve Gibson, relies on knowledge of the logic used by password brute-force attackers. In essence the attackers – after trying a list of well-known passwords (“password”, “123456”, “cat” etc.), their variations (“pa$$w0rd”) and finally a plain dictionary – switch to ‘pure guessing’, where arbitrary combinations of alphanumeric characters and some special signs are generated and tried methodically until the password is guessed. Hence the “brute force” nature of the attack. So far the best prescription for passwords has been to make them both random and very long – advice routinely ignored by the user community, as it makes such passwords extremely hard for humans to remember. What Steve came up with is that passwords of similarly high “strength” (i.e. resistance to guessing) can be created by artificially increasing their length (each added character increases the time needed to crack the password exponentially) and the space of characters used in them (the greater the variety of lower-case, upper-case, number and special characters, the more combinations are possible – again drastically increasing the cracking time) – say, by prepending or appending some easy-to-remember “padding” to them. For example, ‘000Johny000’ is vastly harder to brute-force than ‘johny’ – yet the two require comparable effort for humans to remember. Makes perfect sense – you come up with your own secret “padding” pattern, and use it to enhance your simple but consequently easy-to-guess passwords. Once enhanced, such passwords are both easy to remember and hard to crack (get a more detailed explanation from the source here). Sounds like a perfect solution, huh?
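
A back-of-the-envelope sketch of the arithmetic behind the idea (illustrative only; Gibson’s own ‘haystack’ calculator is the authoritative source):

```python
# Worst-case brute-force search space: every combination of every character
# class the password draws from, at every length up to the password's own.
import string

def search_space(password: str) -> int:
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    alphabet = sum(len(cls) for cls in classes
                   if any(ch in cls for ch in password))
    return sum(alphabet ** n for n in range(1, len(password) + 1))

print(f"{search_space('johny'):.2e}")        # 5 lowercase letters: ~1.24e+07
print(f"{search_space('000Johny000'):.2e}")  # padded to 11 chars: ~5.28e+19
```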

Up to a point. While the “haystack” approach certainly adds to password-based security, it is hardly the end of the game. Like anything else in security, password attacks are a never-ending cat-and-mouse game between the ‘locks’ and the ‘keys’. Thus it’s only a matter of time till fraudsters update their password-guessing algorithms/tools to check ‘popular padding’ patterns first, before switching to ‘pure brute-forcing’. Not to mention the possibility of ‘leaking’ one of your passwords some other way (e.g. through a phishing site), thus revealing the “secret sauce” of all your strong passwords – the “padding pattern” – to the attackers.

At the end of the day, as often mentioned in the past, passwords as a viable protection mechanism are pretty much dead (mostly). Indeed, there is no real alternative to approaches like multi-factor authentication, no matter what clever ways we come up with to make our passwords less guessable.

Automated spear phishing – a perfect storm?

Back in January, one of my 2011 predictions for the “cyber fraud story of the year” was more targeted yet massive phishing attacks. The two biggest recent news trends in cyber security seem to indicate that this threat may actually become real in 2011:

  1. Highly effective attacks targeting what one would expect to be the most impenetrable companies, whose bread and butter is cyber security – RSA and Oak Ridge National Lab. The term frequently used to describe these attacks is “Advanced Persistent Threat” – but in reality what hides behind it is a successful spear phishing attack.
  2. Repeated exposure of massive amounts of user personal data – names, emails, addresses, and in some cases even dates of birth, credit card numbers (!) and SSNs (!!). Just a couple of breaches in recent months expose the scale of the problem.

Spear phishing has always been a highly targeted version of a cyber attack, tailored to the potential victim’s profile (hence the name – phishing with a ‘spear’ rather than a ‘wide net’). The RSA and Oak Ridge National Lab breaches are yet another confirmation of the efficiency of such attacks. Typical targets of spear phishing are senior executives (spear phishing is sometimes referred to as ‘whaling’ for that very reason) or companies that represent a hefty prize to the fraudster community.

Could usually hand-crafted spear phishing attacks be automated and put on a massive scale? I don’t see why they couldn’t (most probably, to some extent they already are). As common industry knowledge goes, simply adding the victim’s name to the phishing email’s opening line drastically increases the probability of the end user trusting the message (and then clicking the link). Add to that knowledge of the companies the victim has an established relationship with, the phone number (BTW, has anybody thought of automated phone attacks?), the address – and the attack can be personalized to a degree where an ‘average Joe’ stands no chance of distinguishing it from email communication coming from the real business.

To be sure, exposure of user data is a very dangerous phenomenon in itself. In addition to “old-fashioned” identity theft, stolen user data can be applied in other types of attacks – such as password guessing (your name is John and you were born in 1970? The chances that you use one of ‘john1970’, ‘Johny70’, ‘JOHN70’, etc. are vastly higher than those of any dictionary-based random gibberish). However, marrying phishing attacks with intimate knowledge of the victim’s data may prove to have the most severe and widespread impact.
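
To illustrate the password-guessing point from the defender’s side, here is a minimal sketch of a signup-time check that rejects passwords derivable from a user’s own (potentially leaked) profile data – the candidate patterns and names are illustrative:

```python
# A defender-side sketch of the point above: reject passwords derivable from
# a user's own profile data. The candidate patterns are illustrative only.
from itertools import product

def pii_candidates(name: str, birth_year: int):
    """Yield the obvious name+year combinations an attacker would try first."""
    names = {name.lower(), name.capitalize(), name.upper(),
             name.lower() + "y", name.capitalize() + "y"}
    years = {str(birth_year), str(birth_year)[-2:]}
    for n, y in product(names, years):
        yield n + y
        yield y + n

def is_pii_derived(password: str, name: str, birth_year: int) -> bool:
    return password in set(pii_candidates(name, birth_year))

print(is_pii_derived("john1970", "John", 1970))  # True: reject at signup time
```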

What will happen when spear phishing goes massive? Hopefully, it’ll speed up the adoption of well-known counter-measures. For businesses – discipline in storing user data, and adoption of 2FA. For end users – the practice of using different passwords across different sites (reusing one should feel as weird as using the same key to unlock your house, car and office), not clicking on links in emails (as weird as opening your door to a stranger), and keeping personal data away from the rest of the world.

The best cyber security practices are…

…the ones which don’t expect any action or assume any expertise from the end user. Naturally.

I did try to make a case for ‘no substitute for user education’ several years ago. However, with the Internet’s explosive penetration making it as ubiquitous and essential a service as the phone or even water and electricity, the prospect of a security-savvy user base – capable of understanding the difference between HTTP and HTTPS, or paypal.com and paypal.abc.com – clearly keeps getting further away. Indeed, the answer to the growing cyber fraud threat cannot rely solely on the average netizen’s ability to detect and fight back the ever more sophisticated attacks of the bad guys. Continuing the analogy with physical security, it’s equivalent to saying “let’s assume all good guys have a gun and know how and when to use it to defend themselves”. That strategy might have worked in the Wild West (if it did), but it has poor chances in the 21st century’s Cyber World (sorry, NRA).

Not surprisingly, the industry is slowly but surely moving towards, let’s call it, “built-in security”. The shift in mindset can be characterized as security considerations becoming more of a driver and less of an afterthought.

For example, it’s well known that many users chronically fail to patch their computers – both operating systems and applications (browsers, PDF readers, Java VM, etc.). That leaves them wide open to ‘exploits in the wild’ – inevitably resulting in data being stolen and machines being infected and ‘enlisted’ into botnets. To address this, more companies are switching to a ‘stealth update’ mode. For instance, unlike its competitors, Google’s Chrome chooses not to ask the user to initiate an update – it updates silently, without users even knowing it. Windows 7 seems to adopt the same approach – by default, users are not asked to perform any action to have their operating system patched.

The same rule applies to other security measures. Facebook recently introduced a nice feature enabling a switch of its traffic to HTTPS. Alas, the option is off by default, and the 600 million users are expected to go to their account settings and turn it on manually (most probably Facebook was afraid of the cost of a wholesale move to HTTPS). Again, Google shines here. Not only did it move its entire Gmail service to HTTPS well before Facebook did, it also made it universal and on by default – no user action was expected. I bet the vast majority of Gmail users didn’t even notice the change. Another, lesser-known example is the also recently introduced Strict Transport Security, which allows web servers to refuse non-secure (or even suspicious) connections in order to prevent man-in-the-middle attacks. Again, “average” users need not even know the mechanism exists.
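
For the curious, Strict Transport Security boils down to a single response header; here is a minimal, illustrative WSGI sketch (any web framework can set the same header):

```python
# A minimal, illustrative WSGI app: Strict Transport Security is just one
# response header, and any web framework can set it the same way.
def app(environ, start_response):
    headers = [
        ("Content-Type", "text/plain"),
        # Compliant browsers remember for max-age seconds (here, one year)
        # that this host must only ever be contacted over HTTPS.
        ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
    ]
    start_response("200 OK", headers)
    return [b"Served over HTTPS only from now on"]
```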

These trends are bound to gain momentum. I imagine more and more companies will switch to HTTPS in the near future, and patching will not require user confirmation by default (perhaps leaving an “ask me first before updating” option, off by default, for tech-savvy – or perky – users). More services will move away from simple password-based authentication. Microsoft Security Essentials will become an integral part of the Windows OS (if anti-trust concerns allow it). Applications will become increasingly sandboxed. And so on…

This is not to say that one day you will be able to survive in the Cyber World without some basic knowledge and prudence – just as you need some common sense in everyday life, from how to cross the street to avoiding dangerous neighborhoods. However, that knowledge should be kept to a minimum, be intuitive, be transparent, and belong to the public domain and even the school (kindergarten?) curriculum. In the end the rules should be simple enough that – unless you are striving for a Darwin Award – by following them you are not risking your (cyber) well-being. The rest should be taken care of by smart technology. Ideally.

Cyber security trends for 2011

Well, it’s that time of the year again. Scores of well-known gurus, security companies, as well as some mere mortals come out with their predictions on how cyber fraud will evolve in the coming 12 months. Sometimes these “prognoses” are limited to attaching “security threat” or “attack vector” to generally emerging technologies – e.g. “more fraud on smart devices”, “cloud security threats” etc. Such predictions rest on the common principle that any new functionality is a potential security threat, and that fraud attempts are proportional to its popularity. Naturally, like any generalization, this approach has its limits… indeed, if a new functionality proves to have a higher bar for penetration than the existing ones, fraudsters will happily stick to the old known methods without complicating their lives.

Having said that, I couldn’t resist the temptation myself – and came up with some prognoses of my own:

  • Trojans will become more mature and deadly. User machines are becoming both the Holy Grail and the weakest link in the defense against cyber criminals. With the client machine compromised, most of the server-side anti-fraud technologies are useless – in some cases even 2FA may be circumvented (naturally, the same is true for client-side attacks like XSS or XSRF). There’s little hope that a remedy is within reach – the fraudsters’ shift of attention from relatively hardened OSes to the application layer (browser plugins, but also stand-alone applications like PDF readers) will continue to grow in 2011, resulting in a race the good guys may not be able to win.
  • Phishing – i.e. tricking netizens into revealing their passwords, PII, SSNs, and other information – is going to get more severe, taking spear attacks to mass production. Indeed, given the volume and availability of personal information in bulk (enough to mention the alleged 100 million Facebook accounts’ information put on torrent), it’s only a matter of time before massive old-style phishing attacks (with their low success rate of around 0.1-0.3%) become more personal and targeted and thus much more effective (the success rate may jump to 1-3%).
  • Information security – how long will it take governments and corporations to move to closed environments, with machines that have no writable DVD drives or USB ports, hard drives living in clouds, and isolated access to the public net (not even mentioning banning our smartphones at the workplace – since we could still take a picture of the screen and email it right away)? My take – forever. So WikiLeaks will continue making headlines, and more copycats will proliferate in 2011.
  • IPv6 – most probably 2011 will be the first year when IPv6 starts to be used in the wild (as IPv4 free space will finally be depleted). Given the general procrastination of big businesses (for whom security is an afterthought until it bites them in the a*s), they are going to be less prepared (to put it mildly) for the transition to IPv6 than the fraudster community. Now imagine all the IP filters, IP geolocation and other techniques that became mainstream – all the infrastructure tuned to IPv4 built into companies’ back-ends – starting to behave “strange” as soon as requests come in from IPv6 addresses. Subsequently, if these requests prove more effective at hiding fraud, guess how much (or how little) time fraudsters will need to jump on the opportunity.
  • Smartphones – if anything, Android, being an inherently more open platform than the iPhone OS, is the likelier target – but overall I do not think we’ll witness any spectacular security breaches (including the use of smartphones as tools to commit fraud), despite the obvious proliferation of smartphones; generally speaking they are safer than our desktops and laptops – harder to come by, harder to infect, and inherently easier to locate (tied to a geolocation).
  • Cloud computing – if anything, it’ll be increasingly leveraged by the bad guys to achieve their nefarious goals, rather than suffering breaches itself (e.g. data being stolen from the cloud). Not that the latter is impossible – I just think there are more readily available and easier-to-access means.
  • Virtual currency – as much as its volumes are going through a spectacular growth period, there’s a conceivable ceiling to their expansion, and so to the associated fraud. I don’t think it will become the Big Story of 2011, although the fraud will grow proportionally to the volume of virtual goods and services.

All of the above is more intuition than science, and naturally only time will show how right or wrong I am (fortunately, we don’t have to wait too long). Plus, many reputable specialists would disagree with my relatively low risk ranking of smartphones, clouds and virtual currency – which makes it all the more intriguing and worth looking forward to.

Superiority of the “known good” over “known bad”

Okay, some definitions first:

  • The “known bad” strategy implies covert collection of attributes used by fraudsters – first of all devices, but also email addresses, phones etc. – in order to detect their repeat usage. It’s essentially a blacklisting technique, implying that if you are not blacklisted, you are good to go.
  • “Known good” is pretty much the opposite – an overt policy of collecting the attributes – again devices first of all, but also email addresses, phones etc. – to gain the necessary assurance that they are legitimately used by the good guys. It’s effectively whitelisting, implying that if you are not whitelisted, you are a potential suspect. Naturally, to get an attribute whitelisted (or marked as ‘trusted’), the user has to go through a certain verification process. For example, to whitelist a machine, the user has to enter a code sent via email or SMS (essentially following a 2FA approach; a minimal sketch follows this list).
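
Here is that minimal, illustrative sketch of the device-whitelisting flow – the in-memory stores and all names are made up, and a real system would persist everything server-side:

```python
# A minimal sketch of the "known good" flow above, with in-memory stores and
# illustrative names; a real system would persist these server-side.
import hashlib
import secrets

trusted_devices = {}  # user_id -> set of device fingerprint hashes
pending_codes = {}    # (user_id, device_id) -> one-time code

def start_device_verification(user_id: str, device_id: str) -> str:
    """Issue a 6-digit one-time code to be delivered out of band (email/SMS)."""
    code = f"{secrets.randbelow(10**6):06d}"
    pending_codes[(user_id, device_id)] = code
    return code  # in reality: hand to the mail/SMS gateway, never the browser

def confirm_device(user_id: str, device_id: str, code: str) -> bool:
    """Whitelist the device if the user echoes the out-of-band code back."""
    if pending_codes.pop((user_id, device_id), None) == code:
        fp = hashlib.sha256(device_id.encode()).hexdigest()
        trusted_devices.setdefault(user_id, set()).add(fp)
        return True
    return False

def is_known_good(user_id: str, device_id: str) -> bool:
    fp = hashlib.sha256(device_id.encode()).hexdigest()
    return fp in trusted_devices.get(user_id, set())
```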

Now, the traditional strategy adopted by the cyber security guys has always been the first one – just like “offline” life, where we all enjoy the presumption of innocence (unless we slide into a totalitarian form of government) and where the “blacklists” are reserved for the few suspected criminals. It definitely is the more intuitive and, to a certain degree, effective way of raising the bar in online security. However, it becomes increasingly inefficient as fraudsters get more sophisticated at hiding their identity. Indeed, only a lazy or grossly uneducated fraudster does not delete their cookies (historically, the number one way of identifying a device) today. Adobe’s FSOs (“Flash cookies”) – which succeeded the cookie – are next to fall. Soon the larger fraudster community will discover the beauty of sandboxing. In essence, it’s just a matter of the appropriate tools being developed and made available on the “black market” – the average fraudster doesn’t even have to know the gory details to use them. Thus, as I mentioned in my previous post, device fingerprinting is pretty much doomed.

By contrast, the “known good” strategy is increasingly gaining traction among online businesses. Initially unpopular, since it introduces another hoop for legitimate users to jump through (businesses hate that), it simply works much better by definition. Fraudsters now need to gain access to the victim’s email account or cellphone, or hack the computer, to get around it. (It should also be mentioned that, on a conceptual level, the superiority of whitelisting over blacklisting is apparent in many other cases – such as keeping user input under control.)

The switch to “known good” is not a painless exercise and, yes, it introduces an additional hurdle for the business, but it may prove to be the cheapest way of putting a dent in losses by making account takeovers much more difficult. Both in terms of nuisance to the users and in cost, it fares much better than some of the extra measures I see on many websites – such as selecting an image, answering additional questions etc. – thus my take is that the popularity of the “known good” approach will continue to rise.

Device fingerprinting to fight fraudsters? Please…

“Machine/device fingerprinting” technologies collect and record unique traces of individual devices. The technique has primarily been used for tracking bad guys and making it difficult for them to repeatedly use the same device for nefarious purposes. Typically a client-side script collects information (the “fingerprint”) of the device, which is subsequently stored on the server side. Today several vendors on the market offer various patented ways of collecting the device data (including the internal clock, screen parameters, OS data etc.). The recently announced and much-hyped “evercookie” is an example of open source code offering even more innovative ways of doing the same. Alas, while the sophistication of these techniques is impressive, it doesn’t take equal sophistication for fraudsters to neutralize (or neuter, if you will) these measures and completely circumvent device identification. Using a virtual machine (or simply Sandboxie) – not to mention avoiding the use of browsers altogether when mounting cyber attacks – is a sufficient antidote to the pains companies take to “fingerprint” fraudsters’ devices. Indeed, it’s only a matter of time before the fraudster community fully adapts to the “fingerprinting” technologies…
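
To make the mechanics concrete, here is a minimal sketch of the server side of such a scheme – the attribute names are illustrative, and real vendors collect far richer (and patented) signal sets:

```python
# A minimal sketch of the server side of device fingerprinting: hash a bag of
# client-reported attributes into a stable ID. Attribute names are illustrative.
import hashlib
import json

def device_fingerprint(attrs: dict) -> str:
    """attrs: whatever the client-side script reports, e.g. user agent,
    screen parameters, timezone, plugins, clock skew."""
    canonical = json.dumps(attrs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

fp = device_fingerprint({
    "user_agent": "Mozilla/5.0 ...",
    "screen": "1920x1080x24",
    "timezone_offset_min": -480,
    "plugins": ["Flash", "Java"],
})
# The post's objection in one line: run the browser inside a VM or sandbox
# and every attribute can be made fresh, so the hash never repeats.
print(fp)
```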

Having said that, device “fingerprinting” is far from dead – it is definitely finding a second, perhaps more significant, life in the growing business of “average” user tracking – e.g. serving the advertising industry. Here the average netizen – being far less sophisticated than the average fraudster – is pretty much powerless against it (unless tracking is made illegal by law). Device fingerprinting will not lose its value; it’s just that, IMHO, its days as a way of fighting fraud are numbered.

Online Identity services – an emerging new business model?

Every time I visit the website of one of the financial institutions I happen to be a client of, I am daunted by the hoops I need to jump through (none of which really stops a determined fraudster) to log in to my account. It’s obvious that serious businesses are trying to counter account takeovers, and each is doing it in its own way – possibly spending lots of money on something that is not its core expertise. Countered by fraudsters for whom it actually is the core expertise, these businesses seem doomed to keep investing lots of resources in online identity management with only modest success.

Needless to say, online identity is becoming a big issue. Little wonder – whole chunks of our daily life, including very personal fields like romance and friendship, are being absorbed by the Net. In all this mess one thing stands out – the acute need for better identification. A need which may itself warrant a separate industry – call it online identity services. I do not mean anything ominous (“I’ve got a lightly used identity of Jude Law! Anybody?”) – just satisfying the legitimate need to identify people online – like an online bank needing to make sure the person logging into its website is the actual account holder. Today identification is moving from traditional password-based schemes (see my earlier post) to more sophisticated multi-layered mechanisms (some less efficient than others) – pictures, personal questions, 2FA tools etc. It is becoming more costly to develop and maintain, hence it would make a lot of sense to delegate this headache to a company that actually specializes in online identification. In that case the bank just redirects the login to the specialist’s page (for a non-technical user this could be quite seamless, e.g. by putting the bank’s logo on the page it redirects to, or doing it in an iframe), lets it do all the dirty work, and gets the user back on the bank’s page with a full guarantee (covered by the third party) that the user is authenticated. Just like PayPal handles the entire payment and comes back to the merchant with a guaranteed payment, the ‘identity merchant’ would come back with a ‘successful login’. The service may charge per login, per month or per user – the details will depend on the particular business model. Such services may even offer multiple types of support – the spectrum would include periodic user screening (e.g. verifying the phone), sending 2FA tokens, sending SMSes – in short, focusing on linking the physical identity with the cyber one.
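
Here is a minimal sketch of that delegated-login handshake, assuming a pre-shared secret between the bank and the identity service; all names are illustrative (OpenID/OAuth-style protocols formalize the same idea):

```python
# A minimal sketch of the delegated-login handshake described above, assuming
# a pre-shared secret between the bank and the identity service; all names
# are illustrative.
import hashlib
import hmac
import time

SHARED_SECRET = b"bank-and-identity-service-secret"

def identity_service_assertion(user_id: str) -> dict:
    """Issued by the identity service after it has done the 'dirty work'
    (password check, 2FA token, SMS code...)."""
    issued_at = int(time.time())
    payload = f"{user_id}|{issued_at}".encode()
    sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return {"user": user_id, "iat": issued_at, "sig": sig}

def bank_accepts(assertion: dict, max_age_s: int = 300) -> bool:
    """The bank only verifies the signature and freshness - no password
    handling, no 2FA logistics, none of the non-core headaches."""
    payload = f"{assertion['user']}|{assertion['iat']}".encode()
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    fresh = time.time() - assertion["iat"] < max_age_s
    return hmac.compare_digest(expected, assertion["sig"]) and fresh

print(bank_accepts(identity_service_assertion("alice@examplebank")))  # True
```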

Now, I am not saying this has never occurred to anybody else – the OpenID concept is a similar one. Too bad it didn’t really take off. My take is that the people who care about this most (online banks, for example) are inherently distrustful of anything free or open source, and that serious identity management needs serious resources – to screen users, to support 2FA tokens etc. Microsoft Passport was probably ahead of its time. PayPal could use its clout to add “identity management” to its portfolio; better yet, Facebook could do it too (the model of your identity being vetted by your friends is quite powerful). However, both of these companies have their hands in many jars, and the last thing a bank wants is to divulge its user base to some 3rd party who may turn out to be a competitor. My take is that in order to succeed, these services should be very specific – commercial, stand-alone, not engaged in any other type of business, solely focused on online identity, and committed by binding agreements not to use the information for any other purposes. Naturally, there need to be safeguards that each client’s (bank’s) user data is secure and remains its property even if login is supported by the third party.

Perhaps such companies already exist – I admit I didn’t do much research here – but even if they do, it’s anything but a mature industry. I wonder if it will ever become one.