Most stories evaluating AI models are based on performance benchmarks, but a recent research report from Sonar takes a different approach: grouping different models by their coding personalities and looking at the downsides of each when it comes to code quality.
The researchers studied five different LLMs using the SonarQube Enterprise static analysis engine on over 4,000 Java assignments. The LLMs reviewed were Claude Sonnet 4, OpenCoder-8B, Llama 3.2 90B, GPT-4o, and Claude Sonnet 3.7.
They found that the models had different traits, such as Claude Sonnet 4 being very verbose in its outputs, producing over 3x as many lines of code as OpenCoder-8B for the same problem.
Based on these traits, the researchers divided the five models into coding archetypes. Claude Sonnet 4 was the “senior architect,” writing sophisticated, complex code, but introducing high-severity bugs. “Because of the level of technical difficulty attempted, there were more of these issues,” said Donald Fischer, a VP at Sonar.
OpenCoder-8B was the “rapid prototyper” because it was the fastest and most concise while also potentially creating technical debt, making it ideal for proof-of-concepts. It created the highest issue density of all the models, with 32.45 issues per thousand lines of code.
Llama 3.2 90B was the “unfulfilled promise,” as its scale and backing imply it should be a top-tier model, but it only had a pass rate of 61.47%. Additionally, 70.73% of the vulnerabilities it created were “BLOCKER” severity, the most severe type of bug, which prevents testing from continuing.
GPT-4o was an “efficient generalist,” a jack-of-all-trades that is a common choice for general-purpose coding assistance. Its code wasn’t as verbose as the senior architect’s or as concise as the rapid prototyper’s, but somewhere in the middle. It also avoided producing severe bugs for the most part, but 48.15% of its bugs were control-flow errors.
“This paints a picture of a coder who correctly grasps the main objective but often fumbles the details required to make the code robust. The code is likely to function for the intended scenario but will be plagued by persistent problems that compromise quality and reliability over time,” the report states.
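The report doesn’t reproduce the offending code, but a control-flow error of the kind described might look like the following minimal Java sketch (the class and method names are invented for illustration): the happy path works, yet a misplaced early return sends valid input down the wrong branch.

public class ControlFlowExample {
    // Intended behavior: return the first positive number, or -1 if none exists.
    static int firstPositiveBuggy(int[] values) {
        for (int v : values) {
            if (v > 0) {
                return v;
            } else {
                return -1; // BUG: bails out on the first non-positive element
            }              // instead of continuing the scan
        }
        return -1;
    }

    static int firstPositiveFixed(int[] values) {
        for (int v : values) {
            if (v > 0) {
                return v; // exit early only on success
            }
        }
        return -1; // exhausted the array without a match
    }

    public static void main(String[] args) {
        int[] data = {-3, 0, 7};
        System.out.println(firstPositiveBuggy(data)); // prints -1 (wrong)
        System.out.println(firstPositiveFixed(data)); // prints 7 (intended)
    }
}

Both versions compile, and both pass a test on an all-positive array; only the edge case exposes the bug, which matches the report’s description of code that functions for the intended scenario but lacks robustness.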
Finally, Claude 3.7 Sonnet was a “balanced predecessor.” The researchers found that it was a capable developer that produced well-documented code, but still introduced numerous severe vulnerabilities.
Though the models did have these distinct personalities, they also shared similar strengths and weaknesses. The common strengths were that they quickly produced syntactically correct code, had solid algorithmic and data structure fundamentals, and efficiently translated code to different languages. The common weaknesses were that they all produced a high percentage of high-severity vulnerabilities, introduced severe bugs like resource leaks or API contract violations, and had an inherent bias towards messy code.
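The report doesn’t include code samples, but a resource leak of the kind it names, one of the defect classes static analyzers like SonarQube flag, typically looks like this hypothetical Java sketch (the method and file names are assumptions for illustration):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ResourceLeakExample {
    // Leaky version: if readLine() throws, or the method simply returns,
    // close() is never called and the file handle is never released.
    static String firstLineLeaky(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        return reader.readLine(); // BUG: reader is never closed
    }

    // Fixed version: try-with-resources closes the reader on every path,
    // including when an exception is thrown.
    static String firstLine(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        String path = args.length > 0 ? args[0] : "input.txt";
        System.out.println(firstLine(path));
    }
}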
“Like humans, they become susceptible to subtle issues in the code they generate, and so there’s this correlation between capability and risk introduction, which I think is amazingly human,” said Fischer.
Another interesting finding of the report is that newer models may be more technically capable, but are also more likely to generate risky code. For example, Claude Sonnet 4 has a 6.3% improvement over Claude 3.7 Sonnet on benchmark pass rates, but the issues it generated were 93% more likely to be “BLOCKER” severity.
“If you think the newer model is superior, think about it one more time, because newer is not actually superior; it’s injecting more and more issues,” said Prasenjit Sarkar, solutions marketing manager at Sonar.
How reasoning modes affect GPT-5
The researchers followed up their report this week with new data on GPT-5 and how the four available reasoning modes (minimal, low, medium, and high) impact performance, security, and code quality.
They found that increasing reasoning has diminishing returns on functional performance. Bumping up from minimal to low raises the model’s pass rate from 75% to 80%, but medium and high only had pass rates of 81.96% and 81.68%, respectively.
In terms of security, the high and low reasoning modes eliminate common attacks like path-traversal and injection, but substitute them with harder-to-detect flaws, like inadequate I/O error-handling. The low reasoning mode had the highest percentage of that issue at 51%, followed by high (44%), medium (36%), and minimal (30%).
“We have seen the path-traversal and injection become zero percent,” said Sarkar. “We can see that they’re trying to solve one sector, and what’s happening is that while they’re trying to solve code quality, they’re somewhere doing this trade-off. Inadequate I/O error-handling is another problem that has skyrocketed. If you look at 4o, it has gone to 15-20% more in the newer model.”
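Neither the report nor the interview shows the generated code, but the trade-off Sarkar describes can be sketched in a few lines of hypothetical Java: the path-traversal check that the higher reasoning modes reportedly get right, next to the inadequate I/O error-handling that creeps in to replace it (BASE_DIR and the method name are illustrative assumptions):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TradeoffExample {
    private static final Path BASE_DIR = Path.of("/srv/app/data");

    static String readUserFile(String userSuppliedName) {
        // The path-traversal class of flaw is handled: the user-supplied name
        // is normalized and checked against the base directory before use.
        Path requested = BASE_DIR.resolve(userSuppliedName).normalize();
        if (!requested.startsWith(BASE_DIR)) {
            throw new IllegalArgumentException("path escapes base directory");
        }
        // The harder-to-detect flaw takes its place: the IOException is
        // swallowed, so callers cannot tell a missing file from an empty one.
        try {
            return Files.readString(requested);
        } catch (IOException e) {
            return ""; // BUG: failure silently becomes an empty result
        }
    }

    public static void main(String[] args) {
        System.out.println(readUserFile(args.length > 0 ? args[0] : "example.txt"));
    }
}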
There was a similar pattern with bugs, with control-flow errors decreasing beyond minimal reasoning, but advanced bugs like concurrency/threading issues increasing alongside the reasoning effort.
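As an illustration rather than an excerpt from the report, a concurrency/threading bug of the kind counted here is typically a shared variable mutated without synchronization, as in this minimal Java sketch; it compiles and usually runs without error, but quietly loses updates:

public class RaceConditionExample {
    // BUG: a plain int updated from multiple threads; count++ is a
    // read-modify-write sequence, so concurrent increments can be lost.
    private static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++; // not atomic: two threads can read the same value
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Usually prints less than 200000; an AtomicInteger or a
        // synchronized block would make the result deterministic.
        System.out.println(count);
    }
}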
“The trade-offs are the key thing here,” said Fischer. “It’s not as easy as to say, which is the best model? The way this has been seen in the horse race between different models is which ones complete the most number of solutions on the SWE-bench benchmark. As we’ve demonstrated, the models that can do more, that push the boundaries, also introduce more security vulnerabilities, they introduce more maintainability issues.”