How smart is your phone if a photo of your face can bypass security settings?

Fingerprint scans, facial recognition and two-factor authentication all too often fail to deliver the added security they promise

Facial scans and fingerprints make it more difficult to access someone else's smartphone – but not impossible. Photograph: iStock

How secure is your smartphone? Like most people, you probably have implemented at least basic security, putting a password on it to keep your private information, confidential logins and embarrassing photos hidden from public view.

If you are really diligent, you might have implemented some form of advanced security – facial recognition, for example. The move away from passwords (often simple, frequently reused and easily swiped if we aren’t careful) towards biometrics (fingerprints and facial scans) makes sense on the surface. It is more difficult to fake a fingerprint to bypass security checks than it is to shoulder surf and pick up log-in details that way.

But it’s not impossible; researchers have successfully bypassed security measures using 3D-printed fingerprints lifted from surfaces. Facial scans also come with a health warning. If you have set up a phone recently, you might have seen the warning that it can sometimes be fooled, either by someone who looks similar to you or, in some cases, by a photo held up to the phone.

This is something that researchers at consumer group Which? have been tracking for the past few years. The results have been less than encouraging.

A surprising number of phones have been fooled. Many rely on 2D facial recognition, which compares a flat image of the person unlocking the phone with a stored image of the authorised user. Without depth information, these systems cannot distinguish between a live person and a photograph.

The good news is that if you have invested in the newest flagship smartphones, they are – by and large – better performers on facial recognition. That includes Samsung’s latest S26 range and Apple’s iPhones that use FaceID. The latter is a 3D system that uses thousands of points on the face to map it, making it far less likely to be fooled by a lookalike (perhaps identical twins excluded).

Other phone models using 3D systems have also ironed out the kinks, and Which? gave Google’s Pixel phones – the 8, 9 and 10 models – a special mention for a more secure 2D system that adds advanced machine learning to raise the security standard.

But the problem is that all these phones are typically more expensive than many of the ones that were failing the security tests. So while consumers who can afford to stump up for the newest flagship devices are getting a higher standard of security, those whose budgets are more constrained may be inadvertently putting their data at risk.

Another issue raised by Which? is that, while some phones warn about the inadequacies of facial scans, not every phone manufacturer is making it clear. The idea that these systems are supposed to keep our data – and by extension us – safer has become so firmly embedded that we simply trust they will work.

As we know, it is not always the case, and not just for biometrics.

Take, for example, two-factor authentication. We are told we should use it on our accounts to add an extra layer of security. If someone does manage to crack our expert password system, we might be able to keep them out of our private data with the additional authentication requirement.

But in practice, not all two-factor authentication is equal. Some services use less secure methods. SMS, for example, has long been criticised for being vulnerable to SIM-swapping fraud, phishing or interception of messages. Other methods just defy sense, such as approval requests that arrive on the same device I am trying to log in on. If someone has got through the security defences to the point where they can request a log-in code, it is safe to say that getting access to a pop-up notification on that same device will present little issue.

It all feels a bit like security theatre, and it runs the risk of making us all a bit complacent. No one is immune, not even those of us who should know better.

I recently had to confess to IT that I had accidentally opened a PDF that was definitely a phishing attempt, and may or may not have contained some hidden security threat. I didn’t hand over any data or click links, but these days, you don’t have to; there are plenty of things that can hide in an innocent-looking document, ready to run surreptitiously, steal data and then wipe all evidence of their presence.

As I subsequently sat through another webinar on phishing attacks, warning me not to open unknown attachments or click links that arrive in my inbox unless they come from a trusted source, it struck me how often that advice is ignored, many times a day.

If we were to follow all the security advice to the letter, we simply wouldn’t get things done. A good part of my day would be spent phoning people who have emailed me something to check that the attachment I’m about to open has indeed been sent by them. Genuine messages that seem a little suspect for whatever reason would get ignored – as they already do.

The alternative is to click at your own peril, and likely end up with mandatory cybersecurity training as a result. But it’s a full-time job staying on top of security threats, an exhausting one with little return. And with AI scams now being thrown into the mix, it feels like it is only going to get worse. A cheery thought, no?
