Well, to be honest, I haven't read the paper, just the news article, and I don't put blind faith in this kind of research. It's better than nothing, but measuring people is difficult, and as the saying goes:
There are lies, damned lies, and statistics (I recently read an extended version in the slashdot.org footer, but I've forgotten it).
I think experienced programmers write more advanced code, so AI-generated code won't always help them.
Inexperienced programmers, on the other hand, might get something working without much experience. That will probably still be bad code, by the way.
The problem is that, even before LLMs, one of the problems with software was already the abundance of low-quality code, because those who created it got paid less, and those who suffered from it had no way to transfer their losses to the authors or their employers.
And most users didn't care, because they never bothered to think about what they really wanted (maybe because, even once they had, they didn't have much of a way to get it).
So the market was already prioritizing human-made low-quality code. At some point in that evolution, quality got so low that humans were no longer required.
I recently read that BYD is releasing a self-driving feature just for parking, and they're advertising that they will cover any insurance costs for problems it may create.
There's new legislation about software security and quality. Are any coding LLM vendors offering similar warranties?
So, I think AI can speed up your work if you only use it when you know it will actually be beneficial for the problem you're trying to solve.
And the study sounds like it recorded people still trying to find out what works, without knowing it yet. AI won't always help, especially if what you need to refactor is bad code.
That would sound good if you could be sure about it. But the problem with LLMs is that there is never a formal or clear enough spec to even be able to tell whether the output matches it or not. So it's impossible to tell whether LLMs do a good job at anything. It's all fuzzy satisfaction and hand waving.
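Just to illustrate what I mean by a spec you could actually check against, here is a minimal, entirely hypothetical sketch (the function name and behaviour are invented for this example): an executable oracle for a small dedupe helper. Nothing this precise usually exists for the prompts people write, so "does the output match the spec?" has no testable answer.

    # Hypothetical executable "spec" for a dedupe_preserving_order() helper:
    # precise enough that you could actually tell whether a generated
    # implementation matches it. Names and behaviour invented for illustration.
    def spec_ok(candidate, items):
        """True if candidate keeps the first occurrence of each value, in order."""
        expected = list(dict.fromkeys(items))   # the reference behaviour, i.e. the spec
        return candidate(list(items)) == expected

    # One candidate implementation, checked against the spec:
    def dedupe_preserving_order(items):
        seen = set()
        return [x for x in items if not (x in seen or seen.add(x))]

    assert spec_ok(dedupe_preserving_order, [3, 1, 3, 2, 1])   # expected result: [3, 1, 2]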
You just seem to be identifying problems that are simple enough that a vague spec may be enough, where doing it manually would take long enough, where the problem is similar enough to appear widely in the training set, and where verification can be done by the compiler or some such. But then old-style search and replace on steroids should at least give you confidence about what will be changed, while LLMs can still always surprise you, so I don't see much benefit. But don't listen to me, I haven't tried them (for personal reasons, not particularly relevant).
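For what it's worth, this is the kind of confidence I mean: with a mechanical rename you can preview every single change before applying it. A minimal sketch in Python, assuming a purely hypothetical old_name() to new_name() rename; the identifiers and layout are made up, the only point is that the tool's behaviour is fully predictable.

    # Minimal sketch of "search and replace on steroids": preview every line a
    # mechanical rename would touch, without modifying anything. The identifier
    # old_name/new_name is hypothetical; only the determinism matters here.
    import pathlib
    import re

    PATTERN = re.compile(r"\bold_name\(")
    REPLACEMENT = "new_name("

    def preview(root="."):
        """Print each line that would change and what it would become."""
        for path in pathlib.Path(root).rglob("*.py"):
            for lineno, line in enumerate(path.read_text().splitlines(), 1):
                if PATTERN.search(line):
                    print(f"{path}:{lineno}: {line.strip()}")
                    print(f"    -> {PATTERN.sub(REPLACEMENT, line).strip()}")

    if __name__ == "__main__":
        preview()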
Also, you'll have things like the Jevons paradox (https://en.wikipedia.org/wiki/Jevons_paradox) and the rebound effect (https://www.sciencedirect.com/topics/neuroscience/rebound-effect).
AI can lower the bar of competence, letting more incompetent people enter the programming workforce. That makes programmers cheaper but also worse overall, even though AI improves their capabilities (mostly, the worst ones do better with AI than without).
In the end this results in good programmers leaving the workforce to do other jobs, like training AI to improve its programming skills, making it debatable whether AI helps or not.
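To make the rebound point concrete, here is a toy calculation with completely made-up numbers (they don't come from the study or anywhere else): if AI cuts per-feature effort by 30% but the cheaper features drive 60% more demand, the total effort, and with it the total amount of mediocre code, still goes up.

    # Toy rebound-effect arithmetic with invented numbers, only to illustrate
    # the Jevons-style argument: per-task effort falls, demand rises more,
    # and total effort (and total low-quality code) increases.
    hours_per_feature_before = 10.0
    features_demanded_before = 100              # hypothetical baseline demand

    efficiency_gain = 0.30                      # AI makes each feature 30% cheaper
    demand_growth = 0.60                        # cheaper features -> 60% more demanded

    hours_per_feature_after = hours_per_feature_before * (1 - efficiency_gain)   # 7.0
    features_demanded_after = features_demanded_before * (1 + demand_growth)     # 160.0

    print(hours_per_feature_before * features_demanded_before)   # 1000.0 hours before
    print(hours_per_feature_after * features_demanded_after)     # 1120.0 hours after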
The problem is that it will never even be properly debatable, because the way AI is used, its selling point is that the user doesn't have to think about what they want. So nobody can tell how well the AI does, except for the most obvious hallucinations, the ones so far off the mark that you can tell it's not what was wanted before you even spell out what was wanted.
Those hallucinations are currently so frequent that they might seem to be the whole problem. But they're not: there are bound to be more problems that lie not in the output code but in the lack of a spec.
One just writes some ambiguous prompt and the AI produces something that resembles, in some ways, what the prompt said and, in other ways, what the training material provided. The AI didn't consider whether the output is what is needed, because a) AIs don't think, and b) AIs don't need anything, and those who do need it don't spend the time telling the AI.
If you care enough to understand the needs, then most of the coding effort is already done. The formal language syntax is not the difficulty; it just helps specify concrete behaviour.
And understanding the problem doesn't have to imply a waterfall-style formal requirements document; it can be done while programming and testing.
Having someone else program for you (human or bot) stops you from learning about your problem. If your problem is generic enough, or you're new to it, and the humans doing the programming for you know more about it, it might even be good to use other people's programs.
But AIs don't know about programs, don't live human lives, and don't care, so when you enlist an LLM to do your programming for you, you miss out on learning about the program, and nobody else does that learning either. It's like using an LLM to do your homework: you may make the deadline, but you don't learn as much, so the disadvantages will come back to bite you, only later.
In free software there's a review process where people are expected to look at and try to understand the code, so submitters usually have to understand the code they submit. The LLM doesn't help you understand the code, while programming it yourself does, so it's not surprising that the loss of efficiency shows up more clearly there than in other jobs where understanding the code is not immediately required.
In the end, good programmers end up leaving the workforce to do other jobs.
Which is what the marketing/business staff always wanted, so that they stop complaining about the problems. Then quality can finally sink to the bottom and profits can grow until the company goes bust and they start something else again.
That is an LLM-accelerated process, but nothing that wasn't already being done manually before...