AI promises to radically transform businesses and governments, and its tantalizing potential is driving enormous investment activity. Alphabet, Amazon, Meta and Microsoft committed to spending more than $300 billion combined in 2025 on AI infrastructure and development, a 46% increase over the previous year. Many more organizations across industries are also investing heavily in AI.
Enterprises aren’t the only ones looking to AI for their next revenue opportunity, however. Even as businesses race to develop proprietary AI systems, threat actors are already finding ways to steal them and the sensitive data they process. Research suggests a lack of preparedness on the defensive side. A 2024 survey of 150 IT professionals published by AI security vendor HiddenLayer found that while 97% said their organizations are prioritizing AI security, just 20% are planning and testing for model theft.
What AI model theft is and why it matters
An AI model is computing software trained on a data set to recognize relationships and patterns among new inputs and evaluate that information to draw conclusions or take action. As foundational components of AI systems, AI models use algorithms to make decisions and set tasks in motion without human instruction.
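To make that definition concrete, here is a minimal sketch of what "trained on a data set" means in practice. It uses scikit-learn with synthetic, purely illustrative data; nothing about it reflects any particular production model.

```python
# A minimal sketch of training a model on a data set, using scikit-learn.
# The data is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Fabricate a labeled data set and fit a simple classifier to it.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression().fit(X, y)

# The trained model can now evaluate new inputs and draw conclusions.
print(model.predict(X[:3]))
```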
Because proprietary AI models are expensive and time-consuming to create and train, one of the most serious threats organizations face is theft of the models themselves. AI model theft is the unsanctioned access, duplication or reverse-engineering of these programs. If threat actors can capture a model’s parameters and architecture, they can both deploy a copy of the original model for their own use and extract valuable data that was used to train the model.
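To illustrate why captured parameters and architecture are so damaging, consider this minimal, hypothetical PyTorch sketch. The layer layout and the exfiltrated_weights.pt file are invented for illustration; the point is that with both in hand, an attacker can stand up a working copy of the model in a few lines.

```python
import torch
import torch.nn as nn

# The stolen architecture: an attacker who learns the layer layout can
# reconstruct the model class exactly. (Layout is hypothetical.)
class StolenModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, 10),
        )

    def forward(self, x):
        return self.net(x)

# With a captured parameter file, the copy behaves identically to the
# original. (The file name here is hypothetical.)
model = StolenModel()
model.load_state_dict(torch.load("exfiltrated_weights.pt"))
model.eval()
```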
The possible fallout from AI model theft is significant. Consider the following scenarios:
- Intellectual property loss. Proprietary AI models and the information they process are highly valuable intellectual property. Losing an AI model to theft could compromise an enterprise’s competitive standing and jeopardize its long-term revenue outlook.
- Sensitive data loss. Cybercriminals could gain access to any sensitive or confidential data used to train a stolen model and, in turn, use that information to breach other assets in the enterprise. Data theft can result in financial losses, damaged customer trust and regulatory fines.
- Malicious content creation. Bad actors could use a stolen AI model to create malicious content, such as deepfakes, malware and phishing schemes.
- Reputational damage. An organization that fails to protect its AI systems and sensitive data faces the possibility of serious and long-lasting reputational damage.
AI model theft attack types
The terms AI model theft and model extraction are interchangeable. In model extraction, malicious hackers use query-based attacks to systematically interrogate an AI system with prompts designed to tease out information about the model’s architecture and parameters. If successful, model extraction attacks can create a shadow model by reverse-engineering the original. A model inversion attack is a related type of query-based attack that specifically aims to obtain the data an organization used to train its proprietary AI model.
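The following is a minimal sketch of how a query-based extraction attack works in principle. It assumes the victim model sits behind an HTTP prediction endpoint; the URL, payload format and model sizes are all hypothetical.

```python
import numpy as np
import requests
from sklearn.neural_network import MLPClassifier

def query_victim(x: np.ndarray) -> int:
    """Send one input to the target's prediction API and return its label."""
    resp = requests.post(
        "https://victim.example.com/predict",  # hypothetical endpoint
        json={"features": x.tolist()},
    )
    return resp.json()["label"]

# Systematically interrogate the victim with synthetic probe inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = np.array([query_victim(x) for x in X])

# Train a local "shadow" model on the victim's own input/output behavior,
# approximating the original without ever seeing its parameters.
shadow = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
shadow.fit(X, y)
```

Note that extraction of this kind requires very high query volume, which is precisely what makes the monitoring control discussed later in this article effective against it.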
A second type of AI model theft attack, called model republishing, involves malicious hackers making a direct copy of a publicly released or stolen AI model without permission. They might retrain it, in some cases to behave maliciously, to better suit their needs.
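As a sketch of what republishing and retraining might look like, assume the attacker has obtained a full serialized copy of a PyTorch model; the file name and the retraining data below are hypothetical.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Load the copied model wholesale. (File name is hypothetical; this assumes
# the whole model object, not just its weights, was serialized.)
model = torch.load("copied_model.pt", weights_only=False)

# Attacker-chosen retraining data, fabricated here for illustration.
data = TensorDataset(torch.randn(256, 20), torch.randint(0, 10, (256,)))
loader = DataLoader(data, batch_size=32)

# A short fine-tuning loop repurposes the model's behavior.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for inputs, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()
```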
In their quest to steal an AI model, cybercriminals might use techniques such as side-channel attacks, which observe system activity, including execution time, power consumption and sound waves, to better understand an AI system’s operations.
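Here is a minimal sketch of the timing dimension of such a side channel: the attacker repeatedly times the target's responses to different inputs. It reuses the hypothetical query_victim helper from the extraction sketch above.

```python
import statistics
import time

def time_query(x, trials: int = 20) -> float:
    """Return the median response time for one input, in seconds."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        query_victim(x)  # hypothetical helper from the extraction sketch
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Consistent timing differences across inputs can leak details of the
# model's depth, branching or early-exit behavior.
```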
Finally, classic cyberthreats, such as malicious insiders and exploitation of misconfigurations or unpatched software, can indirectly expose AI models to threat actors.
AI model theft prevention and mitigation
To prevent and mitigate AI model theft, OWASP recommends implementing the following security mechanisms:
- Access control. Put stringent access control measures in place, such as MFA.
- Backups. Back up the model, including its code and training data, in case it is stolen.
- Encryption. Encrypt the AI model’s code, training data and confidential information.
- Legal protection. Consider seeking patents or other official intellectual property protections for AI models, which provide clear legal recourse in the case of theft.
- Model obfuscation. Obfuscate the model’s code to make it difficult for malicious hackers to reverse-engineer it using query-based attacks.
- Monitoring. Monitor and audit the model’s activity to identify potential breach attempts before a full-fledged theft occurs (see the sketch after this list).
- Watermarks. Watermark AI model code and training data to maximize the odds of tracking down thieves.
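As an illustration of the monitoring item above, here is a minimal sketch of one common approach: per-client query counting over a sliding window, since extraction attacks depend on unusually high query volumes. The thresholds and the notion of a client ID are assumptions for this sketch, not part of OWASP's guidance.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600          # illustrative sliding window
MAX_QUERIES_PER_WINDOW = 1000  # illustrative threshold

query_log: dict[str, deque] = defaultdict(deque)

def record_and_check(client_id: str) -> bool:
    """Log one prediction request; return False if the client should be
    flagged for extraction-like query volume."""
    now = time.monotonic()
    log = query_log[client_id]
    log.append(now)
    # Evict timestamps that have aged out of the window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    return len(log) <= MAX_QUERIES_PER_WINDOW
```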
Amy Larsen DeCarlo has covered the IT industry for more than 30 years as a journalist, editor and analyst. As a principal analyst at GlobalData, she covers managed security and cloud services.