Learn about the challenges and opportunities presented by generative artificial intelligence.
Gen AI Insights
September 17, 2024
Generative AI and coding: Time to rethink software development
Despite serious dangers, the efficiency benefits of using generative AI tools for programming are all but impossible to resist. We need an entirely new human-in-the-loop approach to software management.
It's well documented that software development efforts that incorporate generative AI produce mistakes radically different from anything a human programmer would make. And yet most enterprise plans for remediating AI coding errors rely on simply inserting experienced human programmers into the loop. Cue train wreck.
Experienced human programmers intuitively know the kinds of mistakes and shortcuts other human programmers make. But they need to be trained to look for the kinds of mistakes that arise when software creates software.
As a practical matter, the only safe and remotely viable approach is to train programming managers to understand the nature of generative AI coding errors. In fact, given how vastly different those errors are, it might be better to train new people to manage AI coding efforts: people who are not already steeped in finding human coding mistakes.
Part of the problem is human nature. People tend to magnify and misinterpret differences. If managers see an entity, be it human or AI, making mistakes they themselves would never make, they tend to assume that entity is inferior to them on coding matters.
But consider that assumption in light of autonomous vehicles. Statistically, those vehicles are far safer than human-operated cars. The automated systems are never tired, never drunk, never deliberately reckless.
But automated vehicles are not perfect. And the kinds of mistakes they make, such as smashing full speed into a truck stopped for traffic, prompt humans to argue, "I never would have done something so stupid. I don't trust them." (The Waymo parked car disaster is a must-see video.)
But just because automated vehicles make weird mistakes doesn't mean they're less safe than human drivers. Human nature, though, can't reconcile those differences.
It's the same situation with managing coding. Generative AI coding models can be quite efficient, but when they go wrong, they go way wrong.
Insane alien programmers
Dev Nag, CEO of SaaS firm QueryPal, has been working with generative AI coding efforts and feels many enterprise IT executives are not prepared for how different the new technology is.
"It made tons of weird mistakes, like an alien from another planet," Nag said. "The code misbehaves in a way that human developers don't. It's like an alien intelligence that does not think like we do, and it goes in weird directions. AI will find a pathological way to game the system."
"For example, you can ask these LLMs [large language models] to create code and they sometimes make up a framework, or an imaginary library or module, to do what you want it to do," said Tom Taulli, author of the book AI-Assisted Programming. (He explained that the LLMs were not actually creating a new framework so much as pretending to do so.)
That's not something a human programmer would even consider doing, Taulli noted: "Unless [the human coder] is insane, they are not going to make up, create out of thin air, an imaginary library or module."
When that happens, it can be easy to detect, if someone looks for it. "If I try to pip install it, you can find that there's nothing there. If it hallucinates, the IDE and compiler give you an error," Taulli said.
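Taulli's pip-install check can be partially automated before any generated code is run. Here's a minimal sketch using Python's standard `importlib` machinery to verify that every module a model cites actually resolves to something installed; the module name `magic_autofix` is a hypothetical example of a hallucinated dependency, not a real package.

```python
import importlib.util


def module_exists(name: str) -> bool:
    """Return True if `name` resolves to an installed module or package."""
    try:
        return importlib.util.find_spec(name) is not None
    except (ModuleNotFoundError, ValueError):
        # find_spec raises ModuleNotFoundError when a dotted name's parent
        # package is missing, and ValueError for invalid names like "".
        return False


# A real standard-library module resolves; a hallucinated one does not.
print(module_exists("json"))           # → True
print(module_exists("magic_autofix"))  # hypothetical hallucinated name → False
```

A check like this could run in CI against the import list of any AI-generated file, flagging phantom dependencies before a developer wastes time debugging them.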
The idea of turning over full coding of an application â including creative control of the executable â to a system that periodically hallucinates seems to me a dreadful approach.
A much better way to leverage the efficiency of generative AI coding is by using it as a tool to help programmers get more done. Taking humans out of the loop, as AWS's Garman suggested might happen, would be suicidal.
What if a generative AI coding tool lets its mind wander and creates back doors so it can later make fixes without bothering a human, back doors that attackers could also use?
Enterprises tend to be quite effective at testing apps, especially homegrown apps, for functionality, making sure the app does what it is supposed to do. Where app testing tends to fall apart is in checking whether the app can do anything it should not do. That requires a penetration-testing mentality.
But in a generative AI coding reality, that pen testing approach has to become the default. It also needs to be managed by supervisors well schooled in the wacky world of generative AI mistakes.
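One concrete form that default pen-testing posture could take is a static scan that flags generated code containing constructs a back door would typically need, routing it to a human reviewer. This is an illustrative sketch, not a complete security tool: the deny-list below is a hypothetical starting point, and a real pipeline would cover far more patterns.

```python
import ast

# Call names whose presence in generated code should trigger human review.
# This deny-list is illustrative only, not exhaustive.
SUSPICIOUS_CALLS = {"eval", "exec", "compile", "system", "popen"}


def flag_suspicious_calls(source: str) -> list[str]:
    """Return the names of suspicious calls found in Python source text."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handles both bare calls (eval) and attribute calls (os.system).
            name = getattr(func, "id", None) or getattr(func, "attr", None)
            if name in SUSPICIOUS_CALLS:
                hits.append(name)
    return hits


generated = "import os\nos.system('rm -rf /tmp/cache')\n"
print(flag_suspicious_calls(generated))  # → ['system']
```

The point is not that such a scan catches every AI-invented back door; it is that "prove the code cannot do X" checks like this become routine gates, run before AI-generated code ever merges.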
Enterprise IT is certainly looking at a more efficient coding future, with programmers assuming more strategic roles: focusing on what apps should do and why, and spending less time laboriously coding every line.
But that efficiency and those strategic gains will come at a hefty price: paying for better, differently trained humans to make sure AI-generated code stays on track.
About the Author: Evan Schuman has covered IT issues for a lot longer than he'll ever admit. The founding editor of retail technology site StorefrontBacktalk, he's been a columnist for CBSNews.com, RetailWeek, Computerworld and eWeek and his byline has appeared in titles ranging from BusinessWeek, VentureBeat and Fortune to The New York Times, USA Today, Reuters, The Philadelphia Inquirer, The Baltimore Sun, The Detroit News and The Atlanta Journal-Constitution.