The Communication Cube


Here's a recent example to show that Peony can give as good as she gets.

Me: *Eating a lamb kofta kebab I bought from a local takeaway*
P: "You know, I could never have sex with a zombie,I'd be too worried that their dick would snap off inside me."
Me: *Puts kebab in bin*
 
Well, I've spent some time looking for the post but I can't find it now.

We once talked about large language models and free software coder productivity, or rather their impact on free software.

For what it's worth, there seems to have been a study that finds free software programmers work 20% slower with LLMs.
 
Study:
Overall, the developers in the study accepted less than 44 percent of the code generated by AI without modification.
I think experienced programmers write advanced code, so they won't always be helped by AI-generated code.
Inexperienced programmers, on the other hand, might get something working without much experience, though it will probably still be bad code.

In my experience, AI has certain uses it handles well, like an advanced find & replace feature or transforming one format into another.
But when you ask it to generate a function with code inside, it will probably imagine how that would look without actually producing working code.
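To give a made-up example of the "one format to another" kind of task, here's a minimal Python sketch of the sort of mechanical transformation I mean (the file and column names are invented); the point is that the result is easy to verify:

```python
# Minimal sketch: mechanically transform one format into another (CSV -> JSON).
# File names and column layout are invented for illustration.
import csv
import json

def csv_to_json(csv_path: str, json_path: str) -> None:
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))  # each row becomes a dict keyed by the header
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(rows, f, indent=2)

if __name__ == "__main__":
    csv_to_json("users.csv", "users.json")
```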

So, I think AI can speed up your work if you only use it when you know it will actually be beneficial for the problem you're trying to solve.
And the study sounds like it recorded people still trying to find out what works without knowing it yet. AI won't always help, especially if it's bad code you need to refactor.

Also, you'll have things like the Jevons Paradox ( https://en.wikipedia.org/wiki/Jevons_paradox ) and the rebound effect ( https://www.sciencedirect.com/topics/neuroscience/rebound-effect ).
AI can lower the bar of competence, letting more incompetent people enter the programming workforce. That makes programmers cheaper but also worse overall, while AI also improves their capabilities (mostly the worst ones do better with AI than without).
In the end this results in good programmers leaving the workforce to do other jobs, like training AI to improve its programming skills, making it debatable whether AI helps or not.
 

Well, to be honest, I haven't read the paper, just the news article, and I don't believe blindly in this kind of research. It's better than nothing, but measuring people is difficult, and as the proverb goes:
There are lies, damned lies and statistics (I recently read an extended version in the slashdot.org footer, but I forgot it).


I think experienced programmers write advanced code, so they won't always be helped by AI-generated code.
Inexperienced programmers, on the other hand, might get something working without much experience, though it will probably still be bad code.
The problem is that even before LLMs, one of the problems with software was already the abundance of low-quality code, because those who created it got paid less and those who suffered from it had no way to transfer their losses to the authors or their employers.
And most users didn't care, because they also never cared to think about what they really wanted (maybe because once they had thought about it, they didn't have much of a way to get it).
So the market was already prioritizing human-made low-quality code. At some point in that evolution, quality was so low that humans were no longer required.

I recently read that BYD is releasing a self-driving feature just for parking, but they're advertising that they'll cover any insurance cost for problems it may create.
There's new legislation about software security and quality. Are any coding LLMs giving similar warranties?
So, I think AI can speed up your work if you only use it when you know it will actually be beneficial for the problem you're trying to solve.
And the study sounds like it recorded people still trying to find out what works without knowing it yet. AI won't always help, especially if it's bad code you need to refactor.
That sounds good if you could be sure about it. But the problem with LLMs is that there is never a formal or clear enough spec to even be able to tell
whether the output matches the spec or not. So it's impossible to tell whether LLMs do a good job or not at anything. It's all fuzzy satisfaction and hand waving.
You just seem to be identifying problems that are simple enough that a vague spec may be enough, where the time to do it manually is long enough,
the problem is similar enough to appear widely in the training set, and the verification can be done by a compiler or some such. But then old-style
search and replace on steroids should at least give you confidence about what will be changed, while LLMs can still always surprise you,
so I don't see much benefit. But don't listen to me, I haven't tried them (for personal reasons, not particularly relevant).
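To be concrete about what I mean by confidence in what will be changed, a deterministic dry-run like this hypothetical Python sketch shows every substitution before anything is written (the pattern and paths are invented):

```python
# Sketch: a deterministic rename with a dry-run preview, so you see exactly
# what would change before anything is written. Pattern and paths are invented.
import re
from pathlib import Path

PATTERN = re.compile(r"\bold_name\b")  # identifier to rename (example)
REPLACEMENT = "new_name"

def preview(root: str = "src") -> None:
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), start=1):
            if PATTERN.search(line):
                print(f"{path}:{lineno}: {line.strip()}")
                print(f"    -> {PATTERN.sub(REPLACEMENT, line).strip()}")

if __name__ == "__main__":
    preview()  # inspect the output first, then apply the substitution for real
```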
Also, you'll have things like the Jevons Paradox ( https://en.wikipedia.org/wiki/Jevons_paradox ) and the rebound effect ( https://www.sciencedirect.com/topics/neuroscience/rebound-effect ).
AI can lower the bar of competence, letting more incompetent people enter the programming workforce. That makes programmers cheaper but also worse overall, while AI also improves their capabilities (mostly the worst ones do better with AI than without).
In the end this results in good programmers leaving the workforce to do other jobs, like training AI to improve its programming skills, making it debatable whether AI helps or not.
The problem is that it'll never be debatable, because of the way AI is used: its selling point is that the user doesn't have to think about what they want, so nobody can tell how well the AI does,
except for the most obvious hallucinations that go so far off the mark that you can tell it's not what was wanted before you even make precise what was wanted.
Those hallucinations are currently so frequent that it might seem they're the only problems. But they're not. There are bound to be more problems that
are not in the output code but in the lack of a spec.

One just writes some ambiguous prompt and the AI produces something that may resemble in some ways what the prompt said and in some ways what
the training material provided. The AI didn't think about whether the output is what is needed because: a) AIs don't think, and b) AIs don't need anything, and those who need it don't spend time telling the AI.
If you care enough to understand the needs, then you have most of the coding effort done. The formal language syntax is not a difficulty, it just helps specify concrete behaviour.
And understanding the problem doesn't have to imply a waterfall-style formal requirements document; it can be done while programming and testing.
Having someone else program for you (human or bot) stops you from learning about your problem. If your problem is generic enough, or you're new to that problem,
and the humans doing your programming for you know more about it, it might even be good to use other people's programs.
But AIs don't know about programs, don't live human lives, and don't care, so when you enroll an LLM to do your programming for you,
you miss out on learning about the program, and nobody else does that learning. It's like using an LLM to do your homework: you may make the deadline, but
you don't learn as much, so the disadvantages will come back to bite you. But in the future.

In free software there's a review process where people should look at and try to understand the code, so submitters usually have to understand the code they submit.
The LLM doesn't help you understand the code, while programming it yourself does, so it's not surprising that the loss of efficiency is clearer there than in
other jobs where understanding the code is not immediately required.
In the end this results in good programmers leaving the workforce to do other jobs.
Which is what the marketing/business staff always wanted, so that they stop complaining about the problems. So quality can finally sink to the bottom and profits grow till the company goes bust and they start something else again.
That is an LLM-accelerated process, but nothing that wasn't being done manually before...
 
The problem with LLM-written code is that you can't trust it. It will hallucinate dependencies (with security risks) and is prone to writing insecure code. So you need a human to review it. And code review is difficult, and not very interesting. It's like having a self-driving car but having to stay alert and ready to take control because it can randomly fail at any moment. You'd probably rather drive.
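One narrow part of that review can at least be automated. As a rough sketch (assuming a plain requirements.txt and the public PyPI JSON endpoint), you can check that every dependency the LLM wrote down actually exists before installing anything:

```python
# Sketch: flag dependencies that don't exist on PyPI before installing anything,
# since hallucinated package names are a known attack surface.
# Assumes a plain requirements.txt with one requirement per line.
import re
import urllib.error
import urllib.request

def package_exists(name: str) -> bool:
    url = f"https://pypi.org/pypi/{name}/json"  # public PyPI JSON endpoint
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

def check_requirements(path: str = "requirements.txt") -> None:
    for line in open(path, encoding="utf-8"):
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name = re.split(r"[<>=!\[;\s]", line, maxsplit=1)[0]
        if not package_exists(name):
            print(f"WARNING: '{name}' not found on PyPI - possible hallucination")

if __name__ == "__main__":
    check_requirements()
```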

You actually want it the other way around: you drive and the car warns you if you miss something. And my limited experience with LLM-assisted code review wasn't very conclusive.

There are some tasks where LLMs could help (what TeDaDeS calls "Advanced Find&Replace" and extended autocomplete), but in my experience you'd be better off with a dedicated script/program/IDE extension for this. I actually had this in my last job: some of my colleagues were using an LLM to write boilerplate code, but I wrote a script that did the job in a far more reliable way.
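To illustrate what such a dedicated script can look like (a made-up example, not the actual one from that job), here's a tiny Python template expander that stamps out the same boilerplate every time, completely predictably:

```python
# Sketch of a deterministic boilerplate generator: same input, same output, every time.
# The template and field names are invented for illustration.
from string import Template

DTO_TEMPLATE = Template('''\
from dataclasses import dataclass

@dataclass
class $name:
$fields
''')

def generate_dto(name: str, fields: dict[str, str]) -> str:
    body = "\n".join(f"    {field}: {type_}" for field, type_ in fields.items())
    return DTO_TEMPLATE.substitute(name=name, fields=body)

if __name__ == "__main__":
    print(generate_dto("User", {"id": "int", "email": "str"}))
```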
 
I clearly have higher expectations of AI, but I totally agree with the statement that AI can't be guaranteed to do a good job generating code.
Code has to be exact to function properly, and AI does more ballpark-like work.

My gain with AI is like having someone to ask questions, while doing the work myself. So, for example: I'm writing in a new programming language and I want to use a certain construct I used before in another language. AI can help explain how the new language deals with that problem, and maybe show example code with the correct variable types and syntax.
I can continue the dialog to find external sources to check AI's suggestions.

I could have done the same using a search engine and reading tutorials, but AI can point me in the right direction quicker.

If you trust AI to do the proper thing regardless, you'll run into issues really quickly.
 
Yes, this is actually something LLMs are decent at: summarizing. You can have one look at some documentation and pull out what you need, or at least tell you where to look.
 
The reasons why I use AI tools are as follows:

Image generators: Commissioning artists (good ones anyway) costs more than I can afford + it also takes a lot of time that I don't have (researching artists, finding one that will take your commission, waiting for the commission to be completed etc). As for why I don't become an artist myself and cut out the middleman, art has never been something that I've been good at and I don't see that changing within my lifetime.

Story generators: They help me keep Peony provided with stories (I bought her a book once which was meant to have enough stories in it to last a year, she finished it in just over three months).
 
Unfortunately software projects are usually run by managers, and managers tend to want one thing above all: prompt delivery. So if anything, AI-generated code is (several orders of magnitude) faster, so I'm quite certain managers will prefer AIs over flesh & blood programmers. Hallucinations, plagiarism, creativity, quality: these can all be learned. It could be that humans just end up verifying behaviour - that the unit test coverage is adequate - not the actual implementation. And I think this applies to anything digital, not just software: architecture, design, advertising, music, film, etc. AI seems to be moving slowly now, but I believe the disruption in 10-15 years is virtually unimaginable. AI will be the cheapest way to implement anything digital.
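If humans do end up just verifying behaviour, that verification might look roughly like this sketch (the function under test, slugify, is invented for illustration): the tests pin down what the code must do and treat the generated implementation as a black box:

```python
# Sketch: verify behaviour, not implementation. "slugify" stands in for some
# AI-generated function; only its observable behaviour is pinned down here.
import unittest

from mymodule import slugify  # hypothetical generated code under review

class TestSlugifyBehaviour(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("hello world"), "hello-world")

    def test_result_is_lowercased(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_empty_input_stays_empty(self):
        self.assertEqual(slugify(""), "")

if __name__ == "__main__":
    unittest.main()
```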
 
You could set up a pipeline to develop code, including testing and integration, all using AI, reducing the risk of total garbage.
The same checks can be run to recognize and refactor copyrighted code.
But for now I think all the steps could produce garbage.
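A very rough sketch of what one gate in such a pipeline could look like (the tool names are assumptions, and in practice each stage could itself be AI-assisted) is just chaining checks and refusing anything that fails:

```python
# Sketch of a pipeline gate: code (AI-generated or not) only passes if every stage succeeds.
# Tool names (ruff, pytest) are assumptions about the project's tooling.
import subprocess
import sys

STAGES = [
    ["ruff", "check", "."],  # static lint
    ["pytest", "--quiet"],   # unit tests
]

def run_gate() -> int:
    for cmd in STAGES:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print("gate FAILED at:", cmd[0])
            return 1
    print("gate passed")
    return 0

if __name__ == "__main__":
    sys.exit(run_gate())
```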

The Retroid Flip2 uses the SD865, which is not much faster/slower than the N250, but much cheaper. Still, it's affordable, and looks nice.
 
Two rather different devices. Just being able to use godot on the Micro, or have it replace my Abxylute One - which I mostly use as a tablet anyway and get annoyed by when I want to input text on it - would be neat.
In the end it's still too small for streaming text-heavy PC titles - I'd like a 10" or larger display for that. And you're right <Edit: I think I misread you. You're saying the price of the Micro PC is affordable. Yeah, compared to others in the class with a gaming focus this is affordable, end of edit>, the price is nothing to sneeze at. If you only want to game.

The reasons why I use AI tools are as follows:

Image generators: Commissioning artists (good ones anyway) costs more than I can afford + it also takes a lot of time that I don't have (researching artists, finding one that will take your commission, waiting for the commission to be completed etc). As for why I don't become an artist myself and cut out the middleman, art has never been something that I've been good at and I don't see that changing within my lifetime.

Story generators: They help me keep Peony provided with stories (I bought her a book once which was meant to have enough stories in it to last a year, she finished it in just over three months).
Wouldn't AI be the middle man and you're cutting out the original creators whose works have been fed to the machine? Apart from that I don't mind AI usage too much as long as it's not being monetized or used to mislead folks.
 
Do not fret! 'twas about your use of "cutting out the middle man", not about morality or the inability to provide pecuniary compensation.
 
I like small laptop-like devices, especially tiny ones. The Flip2 isn't one of those, but it's fast for its small size and much cheaper.
I used to have a Compaq C-series, not much for gaming, but I liked the size. I had both the monochrome screen (ran off AA batteries) and the color screen (used a battery pack). https://en.m.wikipedia.org/wiki/Compaq_C_series


Wouldn't AI be the middle man and you're cutting out the original creators whose works have been fed to the machine? Apart from that I don't mind AI usage too much as long as it's not being monetized or used to mislead folks.
While with AI it's pretty obvious that the output can be traced back to the original works, most work in general is inspired by or (poorly) copied from others.
If AI 'learned' how to paint and had a basic style that resembles someone else's, I would mind it less. It's the ability to exactly clone work that makes it a real problem for me.
 