ESET’s Jake Moore used smart glasses, deepfakes and face swaps to ‘hack’ widely used facial recognition systems, and he’ll demo it all at RSAC 2026
13 Mar 2026 • 2 min. read

Facial recognition is increasingly embedded in everything from airport boarding gates to bank onboarding flows. The widely held assumption is that a face is hard to fake and that matching a live face to a trusted source is a reliable identity signal.
Jake Moore, ESET Global Cybersecurity Advisor, recently put this assumption through a series of practical stress tests. His experiments showed that the powerful technology can in fact be both misused and defeated.
In one test, Jake used a pair of modified off-the-shelf smart glasses that can identify people in real time. He walked through a public space, captured people’s faces and compared them against publicly available online data sources, with identity matches returned within seconds. Names and social media profiles were pulled from nothing more than people’s glances.
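The matching step behind tools like this is conceptually simple: each captured face is reduced to a numeric embedding, and two faces are treated as the same person when the distance between their embeddings falls below a threshold. The sketch below is purely illustrative (the tiny vectors, the `gallery` names and the 0.6 threshold are invented for the example; real systems use embeddings of 128 or more dimensions), not the pipeline Jake’s glasses actually ran:

```python
import numpy as np

def match_face(probe, gallery, threshold=0.6):
    """Return names of gallery identities whose embedding lies within
    `threshold` Euclidean distance of the probe embedding."""
    return [name for name, emb in gallery.items()
            if np.linalg.norm(probe - emb) < threshold]

# Toy 4-dimensional embeddings standing in for real high-dimensional ones
gallery = {
    "alice": np.array([0.10, 0.90, 0.30, 0.50]),
    "bob":   np.array([0.80, 0.20, 0.70, 0.10]),
}

# A fresh capture whose embedding sits close to "alice"
probe = np.array([0.12, 0.88, 0.31, 0.52])
print(match_face(probe, gallery))  # → ['alice']
```

The same geometry explains the attacks that follow: a convincing synthetic face that lands inside the threshold is accepted, while a face swap that pushes the embedding outside it goes unmatched.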
This capability might come in handy if, say, a conference attendee struggles to remember people’s names, but it’s far less palatable when you consider what someone with ill intentions could do with that information.
The second demo had a different spin. It went after financial services, turning a fraud prevention system against itself. Using AI-generated images and freely available software, Jake created a fictitious face to open an actual bank account. The bank’s facial recognition and eKYC (know your customer) platform accepted it as a genuine person.
After proving the point, Jake closed the account and shared all information with the bank, which has since shut down that particular method of identity abuse. But one broader question remains: how many financial institutions may still be susceptible to this kind of attack?
Finally, Jake added himself to a facial recognition watchlist at a busy train station in London. He then walked through the monitored area while running real-time face swap software that overlaid Tom Cruise’s likeness onto his own face in the camera feed. The system, which is also used by the UK police, never recognized or flagged him. It was as if he simply wasn’t there, and anyone actively looking for him on CCTV would have seen the actor instead.
There’s much more to these experiments than we can cover here: they’re all part of Jake’s talk at RSAC 2026, which takes place in San Francisco from March 23rd to 26th, 2026. If you’re at the conference, consider attending the talk. After all, seeing this all work against an in-production system in a live setting is different from ‘just’ reading about it. To learn more, including about other ESET talks at the conference, visit this website.
The big picture
Facial recognition systems are being deployed with implicit trust that doesn’t match their actual resilience when someone tries to break them, even when the attacker uses only off-the-shelf consumer hardware and readily available software, just as Jake did. Identity verification that depends solely on a face match clearly carries more risk than most people and organizations realize.
The experiments also send a message to vendors of facial recognition systems and to anyone responsible for identity verification. Among other things, these systems need to be tested in attack simulations and under other adversarial conditions. The technology behind facial recognition is fragile in ways that matter when someone attempts to subvert it.








