The Communication Cube


AI these days is using computers for things they are not very good at. Training a neural network is brute force - let's face it. Incidentally, these are things we thought would be easy for computers a long time ago, like communicating in natural language, information retrieval, generating images, flying helicopters, coding, etc. Just revisit any old science fiction and you'll notice all these droids / tricorders / AIs seem like stupid humans with large knowledge bases.

Understanding natural language via LLMs is very impressive and a leap forward. Where do we go from here? Voice interfaces are an obvious one that I think will absolutely blow up, if only because a tiny microphone and speaker are very inexpensive compared to a keyboard or touchscreen. I also believe there's a limit to the benefits of adding more parameters to models.

Recently there was news that AI advances were stalling, especially in coding. The excuse given was not enough quality code to train them on. That's stupid. I would expect to be able to give an AI a single programming book on a new language and have it immediately be able to code in it.

 
AI these days is using computers for things they are not very good at. Training a neural network is brute force - let's face it. Incidentally, these are things we thought would be easy for computers a long time ago, like communicating in natural language, information retrieval, generating images, flying helicopters, coding, etc. Just revisit any old science fiction and you'll notice all these droids / tricorders / AIs seem like stupid humans with large knowledge bases.
Not sure we all thought it was easy.
But we didn't think we'd cheat at the Turing test: humans have lost so much in skills that computers have an easier time outperforming humans.
Understanding natural language via LLMs is very impressive and a leap forward. Where do we go from here? Voice interfaces are an obvious one that I think will absolutely blow up, if only because a tiny microphone and speaker are very inexpensive compared to a keyboard or touchscreen. I believe there's a limit to the benefits of adding more parameters to models.
I don't know. The mic might be cheap, but the model, and the electricity to train it (and to a lesser extent, run it), are not cheap.
But the most expensive thing is the errors introduced by using natural language. Natural language is ambiguous, and that's an advantage for what we've used it for.
But in order to control machines you need clear controls that do well-defined things in a model the user can understand, not guessing games and best efforts.
You don't have the same expectations of a machine you bought (or loaned or leased, nowadays, because it's all services) and use as of a human you collaborate with.
They have different constraints, different rights and different capacities. Natural language does not need to be equally useful for both.

And we haven't achieved machine natural language understanding. There has been big progress, some of it even useful, but it's not understanding, it's more like pattern matching.
We're easy to fool. I remember decades ago how impressive software using simple Markov chains could seem. Now it's a more nuanced and huge model, so it's more exuberant, but it's not understanding.
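To make that comparison concrete, here is a minimal sketch (my own illustration in Python, not from the original post; the corpus and names are made up) of the kind of first-order Markov chain text generator that seemed so impressive back then. It only records which word has been seen to follow which, then samples from those counts: fluent-looking output, no understanding anywhere.
Code:
import random
from collections import defaultdict

def build_chain(text):
    # Record, for every word, the words observed to follow it (a first-order Markov chain).
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=15):
    # Emit words by repeatedly sampling a previously seen successor: pure pattern matching.
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug and the cat slept"
print(generate(build_chain(corpus), "the"))

An LLM is of course vastly more sophisticated than this toy, but the point of the comparison stands: both produce likely continuations of what they have seen.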
Recently there was news that AI advances were stalling, especially in coding. The excuse given was not enough quality code to train them on. That's stupid. I would expect to be able to give an AI a single programming book on a new language and have it immediately be able to code in it.

That's only possible if you have previously given it so many books on language, maths, computers and coding, that that last programming book is insignificant in comparison.
The programming book might be useful to you because of all you already know; to an empty model it's as useless as a programming book written in a natural language you don't speak would be to you.
Or a programming book in English teaching Qalb (if you don't speak Arabic), or a book on Whitespace, INTERCAL, Brainfuck, or Ook! for a normal programmer.

So it's still brute force.

And the problem is not only the inefficiency of brute force; it's that the problem definition is waved away, so that it's simply impossible to know whether a model works or doesn't. A benchmark is just a benchmark.
 
I am curious if AI will deliver super intelligence. Everything else is bullshit, pedestrian, automation. But would we be able to accept a super intelligence? The smartest kid in the class gets ridiculed and ostracized. Would we even be able to understand a super intelligent AI?
 
Intelligence doesn't exist. Let alone super intelligence. People talk of intelligence as if it were a skill of someone or a property of something.
But intelligence is more like beauty. One person may like someone and another person dislike them, or someone will enjoy a work of art that others detest.
When you say someone or something is intelligent it doesn't mean it has some high score on a fixed scale, or is able to do something extraordinary in absolute terms.
It just means you understand the value of his or her or its achievements. You can admire someone who solves a math problem you can't solve, because you know what it is about.
Or you can think an animal is intelligent if it solves a problem with some tool, because you also use tools to solve problems.

But you may not understand the intelligence of fungi or crabs or of a mimosa tree, and only a little about octopus intelligence or beehive intelligence, because their goals, their inputs, their umwelt, are so different from yours.
It's a little like Europeans thinking other continents' aborigines were less intelligent, just because they didn't have the same culture or technology. It wasn't lack of intelligence in the aborigines; it was lack of understanding by the Europeans (and greed needing excuses to steal).

So any intelligence you attribute to someone or something does not really speak of that something or someone. It speaks about you.

And if you think of love in general as the sport of trying to understand someone (deeper or shallower, narrower or more general, depending on how deep or general that kind of love is), then when you find someone intelligent it just means you took the effort to understand why they do what they do, and so, it just means you somehow love them a little (or a lot).

So what would super intelligence even be? Super seduction? Super marketing?

I find something much harder than intelligence: agency.

If one day a machine comes by that is super intelligent, whatever that means, it will still be someone's machine, and that someone will use it for their own goals. You may find Skynet evil and fight it, but I don't think it will want or need or love or hate anything. It would do whatever it is built or instructed to do by some human (or animal, or living being). Because even if you ever think a machine has intelligence, it won't have a life and it won't have a goal of its own.

Fear the machine's owner, not the machine, because there is where malice will be.
 
I'm sleepy, which usually impacts my thinking process. But I still wrote down my thoughts and didn't read them afterwards; enjoy ;)
Intelligence doesn't exist. Let alone super intelligence. People talk of intelligence as if it was an skill of someone or a property of something.
But intelligence is more like beauty. One person may like someone and another person dislike them, or someone will enjoy a work of art that others detest.
I think I disagree with the idea that intelligence doesn't exist and that it's just a value we attribute to something. I do think the over-attribution of value to intelligence itself causes problems.
In this case I use the traditional sense of 'intelligence', which would be the ability to learn and apply knowledge and to solve new problems, which can be measured with an IQ test.

An IQ test can be any random set of problems which you provide to a group of people and have them solve.
People who do well in such tests generally have a higher capacity to store and utilize knowledge and to solve problems they haven't encountered before. Usually a higher IQ score relates to a higher intelligence.
These kinds of tests could be applied to anything capable of solving problems. There can be obvious bias in which problems are selected for the group to be tested, which I think is where IQ testing might sometimes fail us.

Can computers have intelligence? I think so. Currently it looks like AI systems mimic intelligence but lack the ability to learn, which would hinder storing and utilizing knowledge and solving problems they haven't encountered before.
Computers might also lack the social part, especially if they don't look and act like a human. But neither do pets, and many humans do enjoy interacting with those.
So I'm not sure how much the lack of agency is linked to intelligence.

At work we sometimes discuss 'Quality': how well a product fulfills the needs of the user when they perform a certain task. But the needs of those users differ, so many characteristics need to match for this purpose. For example: does it look nice, is it durable, etc.
So the purpose of a product and the needs of a user need to match. And I think intelligence is somewhat like this, in that it's a characteristic of a person. And each person solves problems differently, so the problem needs to match the person's abilities. An intelligent person just usually scores better on more problems in general. And if the test is biased, they might just perform exceptionally well on the problems which are selected. But I guess the purpose is the bias: most people encounter similar challenges, so they need to at least do those things well.

As you mention, this Moflin (see video, mostly linked because it's somewhat cute looking for a 'computer') is made by someone for a reason and it lacks agency. It mimics intelligence by using AI. But it outsources many problems to its owner, which is smart.
Do I think this thing is intelligent? Nope. But if it had a way to transport itself and could navigate the environment freely to, for example, charge itself, I might reconsider.
You could state that this is Moflin's agency. And agency is one's independent capability or ability to act on one's will, like charging yourself before your battery dies.
But people are limited in their agency by many factors and we usually don't consider them less intelligent due to it.
An intelligent person can be jailed, for example, and being jailed doesn't make that person less intelligent. You would hope the intelligent part might solve the jailed part, but sometimes it's out of a person's control.
Intelligent people usually have more success in work and IQ is a good predictor for this.

 
Intelligence doesn't exist. Let alone super intelligence. People talk of intelligence as if it were a skill of someone or a property of something.
But intelligence is more like beauty. One person may like someone and another person dislike them, or someone will enjoy a work of art that others detest.
When you say someone or something is intelligent it doesn't mean it has some high score on a fixed scale, or is able to do something extraordinary in absolute terms.
It just means you understand the value of his or her or its achievements. You can admire someone who solves a math problem you can't solve, because you know what it is about.
Or you can think an animal is intelligent if it solves a problem with some tool, because you also use tools to solve problems.

But you may not understand the intelligence of fungi or crabs or of a mimosa tree, and only a little about octopus intelligence or beehive intelligence, because their goals, their inputs, their umwelt, are so different from yours.
It's a little like Europeans thinking other continents' aborigines were less intelligent, just because they didn't have the same culture or technology. It wasn't lack of intelligence in the aborigines; it was lack of understanding by the Europeans (and greed needing excuses to steal).

So any intelligence you attribute to someone or something does not really speak of that something or someone. It speaks about you.

And if you think of love in general as the sport of trying to understand someone (deeper or shallower, narrower or more general, depending on how deep or general that kind of love is), then when you find someone intelligent it just means you took the effort to understand why they do what they do, and so, it just means you somehow love them a little (or a lot).

So what would super intelligence even be? Super seduction? Super marketing?

I find something much harder than intelligence: agency.

If one day a machine comes by that is super intelligent, whatever that means, it will still be someone's machine, and that someone will use it for their own goals. You may find Skynet evil and fight it, but I don't think it will want or need or love or hate anything. It would do whatever it is built or instructed to do by some human (or animal, or living being). Because even if you ever think a machine has intelligence, it won't have a life and it won't have a goal of its own.

Fear the machine's owner, not the machine, because there is where malice will be.
I get what you are saying and appreciate that people consider those with the same point of view intelligent.

Maybe a better term is intellectual, which is a bit narrower. For instance, a super-intellectual AI beats humans at random debates the majority of the time because it's better read. It's conceivable. It's also conceivable that such an AI gets side-lined. Like no one wants to play one-on-one against Kareem Abdul Jabbar. It's just no fun.

Usually a higher IQ score relates to a higher intelligence.
Or more rote memorization of question templates.
 
Or more rote memorization of question templates.
Well, for an IQ test it shouldn't matter which questions are asked. It can be for any domain. But I guess it helps assessors run tests quickly if the questions are recognizable to many people.
Having standardized types of questions creates this bias problem, and you can actually train for an IQ test that way. But that problem is somewhat negated if everyone is allowed to study for it.
The fun part about an IQ test is that you can literally blindly pick any set of questions from existing tests. Have a group of people answer them and you'll see that intelligent people score highest all the time.
 
I'm sleepy, which usually impacts my thinking process. But I still wrote down my thoughts and didn't read them afterwards; enjoy ;)
Thanks, and sorry if I disturbed your sleep.
I think I disagree with the idea that intelligence doesn't exist and that it's just a value we attribute to something. I do think the over-attribution of value to intelligence itself causes problems.
In this case I use the traditional sense of 'intelligence', which would be the ability to learn and apply knowledge and to solve new problems, which can be measured with an IQ test.
I think IQ tests have some value, but it's little value.
IQ tests need constant adaptation to changes in human skills, and are limited to somehow measuring human intelligence. I don't think they apply to non-humans.
So it's somewhat silly to measure AI with an IQ test. And even humans are not uniformly motivated to perform in IQ tests.
For example, I think on average dogs have more emotional intelligence than humans. But they don't have hands, and they have little language capability, so you'll never be able to tell how much intelligence they have in other areas.
Now, cats, on the other hand: I'm not sure whether they have emotional intelligence, or they don't, or they don't want to have it, or they have it but don't want to show it, or why the hell should they show it to you here and now?


An IQ test can be any random set of problems which you provide to a group of people and have them solve.
People who do well in such tests generally have a higher capacity to store and utilize knowledge and to solve problems they haven't encountered before. Usually a higher IQ score relates to a higher intelligence.
These kinds of tests could be applied to anything capable of solving problems. There can be obvious bias in which problems are selected for the group to be tested, which I think is where IQ testing might sometimes fail us.
I think the bias is too large and too unavoidable; IQ tests are just a group of people systematizing their bias about what intelligence should be.
The advantage of IQ tests is that they allow you to compare different humans in a kind of objective or repeatable way, but whether the results relate to real intelligence or not is impossible to establish, because intelligence doesn't have a good definition.

So I'm not sure how much the lack of agency is linked to intelligence.
I'm sorry if I wasn't clear. I didn't mean that agency is linked to intelligence. I meant that intelligence doesn't exist, agency does exist, and I can't imagine a machine having agency.
In order to imagine a machine having intelligence I should first imagine some definition of intelligence, and I think I'd fail. So for me whether a machine is intelligent is more like a meaningless question.
At work we sometimes discuss 'Quality': how well a product fulfills the needs of the user when they perform a certain task. But the needs of those users differ, so many characteristics need to match for this purpose. For example: does it look nice, is it durable, etc.
So the purpose of a product and the needs of a user need to match. And I think intelligence is somewhat like this, in that it's a characteristic of a person. And each person solves problems differently, so the problem needs to match the person's abilities. An intelligent person just usually scores better on more problems in general. And if the test is biased, they might just perform exceptionally well on the problems which are selected. But I guess the purpose is the bias: most people encounter similar challenges, so they need to at least do those things well.
But how can humans be unbiased in judging non-human intelligence?

As you mention, this Moflin (see video, mostly linked because it's somewhat cute looking for a 'computer') is made by someone for a reason and it lacks agency. It mimics intelligence by using AI. But it outsources many problems to its owner, which is smart.
I don't understand Japanese, so I may have missed a lot. I just saw a materialized Tamagotchi?
Do I think this thing is intelligent? Nope. But if it had a way to transport itself and could navigate the environment freely to, for example, charge itself, I might reconsider.
Seeing how consumer electronics tend toward IoT and vendor control, preventing a device from charging is the only remaining power an owner has over the gadget they theoretically own.
But if you get a robot to steal electricity from anywhere to charge itself without its owner's permission or help, that wouldn't give it agency. It would still be some behaviour the designer or the programming user would have instilled.
Why should a machine care whether its battery is full or empty? If it's full, it'll do whatever it does; if it's empty, it won't. There's nothing intrinsic in a machine that makes it want one of the scenarios.

You could state that this is Moflin's agency. And agency is one's independent capability or ability to act on one's will, like charging yourself before your battery dies.
Why should a machine have a will? Why should it want to be charged? If it's programmed to charge itself it might try, but that's the programmer's/designer's will, not the machine's.
But people are limited in their agency by many factors and we usually don't consider them less intelligent due to it.

An intelligent person can be jailed, for example, and being jailed doesn't make that person less intelligent. You would hope the intelligent part might solve the jailed part, but sometimes it's out of a person's control.
Sure. Sorry if I didn't write clearly. Intelligence and agency are different things.
Intelligent people usually have more success in work and IQ is a good predictor for this.
I don't think so. Intelligent people may have problems understanding less intelligent people.
Success is very ill-defined, but if I guess what you mean, then intelligence helps less than being average and reckless.
Do you think Trump is more intelligent than Harris? Or less successful?
But intelligence is not defined, so you could define some measure of success and then define intelligence as success in work or high IQ or whatever you wanted.
Maybe a better term is intellectual, which is a bit narrower. For instance, a super-intellectual AI beats humans at random debates the majority of the time because it's better read. It's conceivable. It's also conceivable that such an AI gets side-lined. Like no one wants to play one-on-one against Kareem Abdul Jabbar. It's just no fun.
Maybe. But being better read will only help someone (human or machine) beat others at debate when the people judging share some common knowledge with the corpus that was read.
Being well read helps by drawing from other people's works and using them in the debate where they apply. If spectators don't know those works, they won't understand the references or be able to see you're beating your opponent, because the well-read participant won't be able to read or teach the whole works to the audience. To them you'll just be talking nonsense.
So a super-intellectual AI may beat humans at random debates according to its designer, but other humans may think it didn't beat those humans.
And an olive tree won't beat any human or AI in a debate, but it might live longer than any of them and not care a bit. It might even "beat" Kareem Abdul Jabbar if he were fool enough to attack it unarmed.
So, yes, side-lined, like: does a submarine swim better than Michael Phelps? Or than a dolphin? Does a submarine swim? Who cares?

Or more rote memorization of question templates.
Sure. You can do things to improve results on IQ tests that we would have a hard time arguing improved your intelligence. But you may not even want to have so high a result, or not always, or not as badly as someone else.
I mean IQ tests are better than a dice roll, but they still give a false sense of security.
 
But if you get a robot to steal electricity from anywhere to charge itself without its owner's permission or help, that wouldn't give it agency. It would still be some behaviour the designer or the programming user would have instilled.
Tell us a behavioural trait of yours that cannot be traced back - assuming an omniscient observer - to external influence!
Why should a machine care whether its battery is full or empty? If it's full, it'll do whatever it does; if it's empty, it won't.
Why should a human care whether its stomach is full or empty? If it's full, it'll do whatever it does; if it's empty, it won't.
 
Tell us a behavioural trait of yours that cannot be traced back - assuming an omniscient observer - to external influence!
If all behavioural traits were attributable to external influence, education would be an exact science. Some traits might be, some not, and there's no certainty about which or how.
The closest form of external influence suppressing natural traits and turning people into machines is military training (or some kinds of it), and even that doesn't achieve its goal.
Why should a human care whether its stomach is full or empty? If it's full, it'll do whatever it does; if it's empty, it won't.
Because the human dies without a chance to resurrect and the machine remains functional once recharged.
Because most humans want to live (although some do not, and some commit suicide by stopping eating, and I have no knowledge about it but it looks like a very difficult form of suicide, precisely because a human is not a machine).
People want things, and machines don't. That's what makes CEOs want to replace human workers with robots, even if robots cost more to produce than humans.
 
Now the race begins between "Winter is coming" and DHL: I got myself a pair of work boots with a winter sole and a turning wheel as a closing mechanism (like Marty McFly's sneakers ^^)..
 
The LLMs that are called "AI" aren't really intelligent. They don't understand, they don't reason, they don't learn. They just predict what word is most likely to come next. This is enough to mimic intelligence, similar to the counting horse. (And just like for the counting horse, it is still an impressive feat)

As for IQ tests, they're good at measuring a "kind" of intelligence, but many aspects of intelligence aren't covered by the tests.
 
According to Apple Weather, Winter is Coming on Thuesday, so they still have 3 days left, but according to the DHL app, it's still "Status Open"..
Even Olight already got picked up, and they had a lot of packages..

Usually DHL works just fine for me, but they have the big issue that they use subcontractors for stuff like pick-ups or post station distribution (or whatever it's called)..

I was once driving for a DHL subcontractor, was a bit too slow one day, so I lost the job, but I spent the money on the Pandora preorder, so without DHL I wouldn't be part of this community ^^
 
Where do you get those from, if not from external sources?

If you can't tell from which external source, or in which way, or what effect each input will have, it's hard to believe they come from external sources.
There is no shortage of examples of people growing in similar environments and developing different characters.
For me it's easier to believe that people do what they want, as far as they can, than to believe they are determined but the mechanism that determines them is beyond my comprehension.
Determinism isn't determinism if it doesn't determine how it works.
Call it Ockham's razor, atheism or whatever you want. Weather is much more predictable than psychology and we still fail.

But what does that mean?
It means what I want it to mean, of course! :)

That people decide what actions they take in ways that you can't predict.

It may be deeper than that, but I'm not sure I want to go deeper with you on this.

According to Apple Weather, Winter is Coming on Thuesday, so they still have 3 days left, but according to the DHL app, it's still "Status Open"..
Yeah, Thuesday is the typical day a delivery service would tell you as the delivery date.
You're left wondering whether they mean Thursday or Tuesday, and they eventually come Wednesday.
Typical.
 
it's hard to believe they come from external sources.
Your entire existence stems from external sources. So I'd rather say, it's hard to believe they come ultimately from internal sources.
There is no shortage of examples of people growing in similar environments and developing different characters.
There is a shortage of pairs of persons that have the same beginnings (same genes, same gene expressions) and exactly the same sequence of external stimuli.
 
One thing you have to say about AI coding is that it is very FAST. So it could be that AIs do all the coding and developers are reduced to code reviewers and testers.
 