The Communication Cube


@Djoga'Ro They call them hallucinations, but a more accurate term is bullshitting. To probe the breadth of their knowledge I'll ask them to list the albums of a semi-obscure band. All the AIs except ChatGPT often reply with completely incorrect titles. Do they know they are bullshitting? Presumably the titles they produce are the most probable completions available, but still low-probability ones. The AIs should say when they are guessing. This problem might go away with larger models. AI + search engine seems like a good approach, so the model doesn't have to hold all the data itself and can stay more up to date. This is what m$ is doing with Bing. m$ is trying to steal Google's search thunder, and I think they've got a chance. And you're right, things can get much better.
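
One rough way to have them say when they're guessing: check the per-token log probabilities of the answer. A minimal sketch in Python, assuming the model API exposes those log probabilities (the function name and the 0.3 cutoff are made up for illustration, not from any real API):

import math

def looks_like_a_guess(token_logprobs, min_avg_prob=0.3):
    # Flag an answer as a probable guess when the geometric-mean
    # token probability falls below min_avg_prob.
    if not token_logprobs:
        return False
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob) < min_avg_prob

# Toy numbers: a confidently generated title vs. a shaky one.
confident = [-0.1, -0.2, -0.05]   # geometric-mean probability ~0.89
shaky = [-2.3, -1.9, -2.8]        # geometric-mean probability ~0.10
print(looks_like_a_guess(confident))  # False
print(looks_like_a_guess(shaky))      # True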
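
And the AI + search engine idea in miniature (retrieval-augmented generation): fetch fresh documents first, then have the model answer from them rather than from its frozen training data. web_search() and ask_llm() below are hypothetical placeholders, not any real API:

def web_search(query, k=3):
    # Placeholder: return the top-k text snippets from a search engine.
    raise NotImplementedError("wire up a real search API here")

def ask_llm(prompt):
    # Placeholder: send the prompt to a language model, return its reply.
    raise NotImplementedError("wire up a real model API here")

def answer_with_search(question):
    # Ground the model in retrieved text so it doesn't have to memorize
    # everything and can stay up to date.
    snippets = web_search(question)
    context = "\n".join(snippets)
    prompt = (
        "Answer using ONLY the sources below. If the sources don't "
        "contain the answer, say you don't know.\n\n"
        "Sources:\n" + context + "\n\nQuestion: " + question
    )
    return ask_llm(prompt)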
 
Do they know they are bullshitting?
I don't think so. It doesn't feel like it.
I've seen an interview with an Indian scientist (math or CS, I guess; I'd love to state his name, but I don't remember). He explained how LLMs are inherently unable to check themselves. Dunno what it's worth, but it seemed plausible to me.
 