Gitanjali Venkatraman does fantastic illustrations of complex topics (which is why I was so happy to work with her on our Expert Generalists article). She has now published the latest in her series of illustrated guides, tackling the complex topic of Mainframe Modernization.
In it she illustrates the history and value of mainframes, why modernization is so difficult, and how to tackle the problem by breaking it down into tractable pieces. I like the clarity of her explanations, and smile frequently at her way of enhancing her words with her quirky pictures.
❄ ❄ ❄ ❄ ❄
Gergely Orosz on social media
Unpopular opinion:
Current code review tools just don't make much sense for AI-generated code
When reviewing code I really want to know:
- The prompt the dev used
- What corrections the dev made to the code
- Clear marking of AI-generated code not modified by a human
Some folks pushed back, saying they don't (and shouldn't) care whether it was written by a human, generated by an LLM, or copy-pasted from Stack Overflow.
In my view it matters a lot – because of the second vital purpose of code review.
When asked why we do code reviews, most people will answer with the first vital purpose – quality control. We want to ensure bad code gets blocked before it hits mainline. We do this to avoid bugs and to avoid other quality problems, particularly comprehensibility and ease of change.
But I hear the second vital purpose less often: code review is a mechanism to communicate and educate. If I'm submitting some sub-standard code, and it gets rejected, I want to know why, so that I can improve my programming. Maybe I'm unaware of some library features, or maybe there are some project-specific standards I haven't run into yet, or maybe my naming isn't as clear as I thought it was. Whatever the reasons, I need to know in order to learn. And my employer needs me to learn, so I can be more effective.
We need to know the author of the code we review both so we can communicate better practice to them, and also to understand how to improve things. With a human, it's a conversation, and perhaps some documentation if we realize we've needed to explain things repeatedly. But with an LLM it's about how to modify its context, as well as humans learning how to better drive the LLM.
❄ ❄ ❄ ❄ ❄
Wondering why I've been making a lot of posts like this recently? I explain why I've been reviving the link blog.
❄ ❄ ❄ ❄ ❄
Simon Willison describes how he uses LLMs to build disposable but useful web apps
These are the characteristics I've found to be most effective in building tools of this nature:
- A single file: inline JavaScript and CSS in a single HTML file means the least hassle in hosting or distributing them, and crucially means you can copy and paste them out of an LLM response.
- Avoid React, or anything with a build step. The problem with React is that JSX requires a build step, which makes everything massively less convenient. I prompt "no react" and skip that whole rabbit hole entirely.
- Load dependencies from a CDN. The fewer dependencies the better, but if there's a well-known library that helps solve a problem I'm happy to load it from CDNjs or jsdelivr or similar.
- Keep them small. A few hundred lines means the maintainability of the code doesn't matter too much: any good LLM can read them and understand what they're doing, and rewriting them from scratch with help from an LLM takes just a few minutes.
His repository contains all of these tools, along with transcripts of the chats that got the LLMs to build them.
❄ ❄ ❄ ❄ ❄
Obie Fernandez: while many engineers are underwhelmed by AI tools, some senior engineers are finding them really helpful. He feels that senior engineers have an oft-unspoken mindset which, together with an LLM, allows the LLM to be much more helpful.
Levels of abstraction and generalization get talked about a lot because they're easy to name. But they're far from the whole story.
Other skills show up just as often in real work:
- A sense for blast radius. Knowing which changes are safe to make loudly and which should be quiet and contained.
- A feel for sequencing. Knowing when a technically correct change is still wrong because the system or the team isn't ready for it yet.
- An instinct for reversibility. Preferring moves that keep options open, even if they seem less elegant in the moment.
- An awareness of social cost. Recognizing when a clever solution will confuse more people than it helps.
- An allergy to false confidence. Spotting places where the tests are green but the model is wrong.
❄ ❄ ❄ ❄ ❄
Emil Stenström built an HTML5 parser in Python using coding agents, specifically GitHub Copilot in Agent mode with Claude Sonnet 3.7. He automatically approved most commands. It took him "a couple of months on off-hours", including at least one restart from scratch. The parser now passes all the tests in the html5lib test suite.
After writing the parser, I still don't know HTML5 properly. The agent wrote it for me. I guided it when it came to API design and corrected bad decisions at the high level, but it did ALL of the gruntwork and wrote all of the code.
I handled all git commits myself, reviewing code as it went in. I didn't understand all of the algorithmic choices, but I understood when it didn't do the right thing.
Although he gives an overview of what happened, there isn't much information on his workflow and how he interacted with the LLM. There's certainly not enough detail here to try to replicate his approach. This is in contrast to Simon Willison (above), who has detailed links to his chat transcripts – although his are much smaller tools and I haven't looked at them properly to see how useful they are.
One thing that's clear, however, is the vital need for a comprehensive test suite. Much of his work was driven by having that suite as a clear guide for him and the LLM agents.
JustHTML is about 3,000 lines of Python with 8,500+ tests passing. I couldn't have written it this quickly without the agent.
But "quickly" doesn't mean "without thinking." I spent a lot of time reviewing code, making design decisions, and steering the agent in the right direction. The agent did the typing; I did the thinking.
❄ ❄
Then Simon Willison ported the library to JavaScript:
Time elapsed from project idea to finished library: about four hours, during which I also bought and decorated a Christmas tree with family and watched the latest Knives Out movie.
One of his lessons:
If you can reduce a problem to a robust test suite, you can set a coding agent loose on it with a high degree of confidence that it will eventually succeed. I called this designing the agentic loop a few months ago. I think it's the key skill in unlocking the potential of LLMs for complex tasks.
Our experience at Thoughtworks backs this up. We've been doing a fair bit of work recently in legacy modernization (mainframe and otherwise), using AI to migrate substantial software systems. Having a robust test suite is necessary (but not sufficient) to make this work. I hope to share my colleagues' experiences on this in the coming months.
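The test-suite-driven loop that Willison describes can be sketched in a few lines. This is a minimal illustration, not anyone's actual tooling: `ask_llm` and `apply_patch` are hypothetical stand-ins for whatever agent framework is in use.

```python
def agentic_loop(run_tests, ask_llm, apply_patch, max_iterations=20):
    """Drive a coding agent until the test suite passes, or give up.

    run_tests   -- () -> (passed: bool, output: str); runs the suite
    ask_llm     -- (prompt: str) -> patch; hypothetical agent call
    apply_patch -- (patch) -> None; hypothetical edit application
    """
    for _ in range(max_iterations):
        passed, output = run_tests()
        if passed:
            return True
        # Feed the failing test output straight back to the agent
        # as the next prompt, and apply whatever fix it proposes.
        apply_patch(ask_llm(f"The tests failed:\n{output}\nPropose a fix."))
    return False
```

In practice `run_tests` shells out to the project's suite (the html5lib tests, in Stenström's case); the point is that the suite's green/red signal is the loop's only stopping condition, which is why the suite has to be trustworthy.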
But before I leave Willison's post, I should highlight his final open questions on the legalities, ethics, and effectiveness of all this – they're well worth pondering.







