A sharp rise in AI-assisted software development is driving unprecedented increases in open source security and licensing risk, according to new research from Black Duck.
The company’s 2026 Open Source Security and Risk Analysis (OSSRA) report reveals that vulnerabilities in commercial software codebases have more than doubled year-on-year, highlighting growing concerns that organisations are producing software faster than they can effectively secure or govern it.
Based on an analysis of 947 commercial codebases spanning 17 industries, the report paints a picture of a rapidly expanding software attack surface driven by the widespread adoption of open source components and AI-generated code.
Vulnerabilities reach historic levels
The research found that the average number of open source vulnerabilities per application increased by 107%, reaching 581 per codebase, while 87% of audited applications contained at least one known vulnerability. Open source software is now effectively universal, appearing in 98% of codebases, meaning almost every modern application inherits third-party risk.
At the same time, software complexity continues to grow rapidly. Open source component counts rose by 30% year-on-year, while the number of files in codebases increased by 74%, further complicating visibility and risk management efforts. According to Black Duck, the increasing use of AI models in development environments is introducing a new and largely unregulated attack surface.
AI is accelerating legal and compliance exposure
Beyond security concerns, the report highlights mounting legal and licensing risks linked to AI-generated code. Two-thirds (68%) of audited codebases were found to contain open source licence conflicts, the highest rate recorded in the history of the OSSRA study, and a significant increase from the previous year.
AI coding tools may reproduce code governed by restrictive licences, creating intellectual property uncertainty and compliance challenges for organisations operating under emerging regulations such as the EU Cyber Resilience Act.
Despite widespread adoption of AI development tools, governance practices appear to lag behind usage. While most organisations assess AI-generated code for security risks, only 24% conduct comprehensive reviews that cover security, licensing, intellectual property, and quality considerations.
Visibility gap creates growing risk
The report also identifies a widening visibility gap across modern software supply chains. Around 17% of open source components enter applications outside standard package managers, including copied code snippets, vendored dependencies, binaries, and AI-generated outputs, making them difficult to detect with traditional scanning methods.
Jason Schmitt, CEO of Black Duck, said the findings demonstrate how fundamentally AI has reshaped software risk.
“AI has fundamentally changed the economics of software development — and with it, the economics of software risk. The pace at which software is created now exceeds the pace at which most organisations can secure it,” he said.
Governance struggles to keep pace
The OSSRA report concludes that organisations must modernise software supply chain governance to maintain visibility into open source components and embedded AI models. Without improved inventory tracking, software bills of materials (SBOMs), and AI governance policies, enterprises may struggle to meet rising regulatory expectations and customer demands for transparency. As AI-assisted development becomes inseparable from modern software creation, the research suggests that visibility, rather than velocity alone, will increasingly determine organisational resilience and trust.
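To illustrate what SBOM-driven inventory tracking can look like in practice, here is a minimal, hypothetical Python sketch that parses a CycloneDX-style SBOM and flags components with undeclared or restrictive (copyleft) licences. The SBOM contents, component names, and the restrictive-licence list are invented for illustration and do not come from the OSSRA report; they are not Black Duck's tooling or methodology.

```python
import json

# Minimal CycloneDX-style SBOM (hypothetical example data)
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "left-pad", "version": "1.3.0",
     "licenses": [{"license": {"id": "MIT"}}]},
    {"type": "library", "name": "readline", "version": "8.2",
     "licenses": [{"license": {"id": "GPL-3.0-only"}}]},
    {"type": "library", "name": "mystery-binary", "version": "0.1"}
  ]
}
"""

# Licences that this hypothetical policy treats as restrictive (copyleft)
RESTRICTIVE = {"GPL-3.0-only", "GPL-2.0-only", "AGPL-3.0-only"}

def audit(sbom: dict) -> list[str]:
    """Return human-readable findings: licence conflicts or missing data."""
    findings = []
    for comp in sbom.get("components", []):
        name = f'{comp["name"]}@{comp["version"]}'
        licences = [entry["license"].get("id", "unknown")
                    for entry in comp.get("licenses", [])]
        if not licences:
            # Components with no declared licence need manual review
            findings.append(f"{name}: no licence declared")
        for lic in licences:
            if lic in RESTRICTIVE:
                findings.append(f"{name}: restrictive licence {lic}")
    return findings

for finding in audit(json.loads(sbom_json)):
    print(finding)
```

In practice an SBOM would be generated by dedicated tooling rather than written by hand, and licence policy would be far more nuanced, but even this small example shows how a machine-readable inventory makes conflicts visible that a manual review of a large codebase would likely miss.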