Able to upgrade not just the speed but also the manufacturer of the CPU?


I am pretty sure it is something like that.

Hey, look, an SoC. See, I can be on topic. *searches for a previously unmentioned SoC to make a link before anyone notices this post*
 
I am pretty sure it is something like that.

Hey, look, an SoC. See, I can be on topic. *searches for a previously unmentioned SoC to make a link before anyone notices this post*
too late
 
I did find this, which I don't remember reading before.

What are the major problems associated with switching to 64-bit ARM? (Someone probably already listed this, so feel free to point me at that post.)
 
As far as I remember, the only problem would be that the OS (read: the Linux kernel) has to be 64-bit. That's it. Since only a minority of the Pyra-specific code would be ARMv7-A specific, it shouldn't be a problem.
At the application level, ARMv8-A is backwards compatible with previous versions.
 
As far as I know, the kernel can still be 32-bit - you just need something in the bootloader to tell it to run 32-bit opcodes, not 64-bit ones. The problem then is getting any benefit out of the chip's 64-bit performance - IIRC you can only switch between 32-bit and 64-bit state on some sort of context switch, and doing that might be hard when it comes to the kernel's multitasking, meaning you could only switch to 64-bit wholesale, once the kernel and all apps come in 64-bit builds. Hopefully I'm wrong about that though.
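
Tangentially: if you just want to see which state the kernel you're actually running booted in, a trivial check from userspace (nothing Pyra-specific, it only echoes what uname reports) is something like:

```python
# Minimal check of the kernel's reported architecture via the stdlib.
# Note: this just echoes what the kernel reports through uname, so a
# 32-bit userspace/chroot on a 64-bit kernel may still show an arm* string.
import platform

arch = platform.machine()
if arch == "aarch64":
    print("64-bit ARM kernel (AArch64)")
elif arch.startswith("arm"):
    print("32-bit ARM kernel ({})".format(arch))
else:
    print("Not an ARM kernel: {}".format(arch))
```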
 
Warning: my math and logic are probably wrong, so please go fetch your matches, stake, firewood and a town square. :oops:
Ready? Time for bullsheet theory.

I see "octacore", I think "most programmers won't optimize for this many cores".
That makes me think of a pretty stupid question, but I wonder anyway:

Take an n-core processor.
For the simplest task, it takes t cycles.
When the cores run in parallel, it still takes t cycles for such a task to get done on any one core.
And the CPU runs n tasks in t cycles as long as they're executed in parallel. Otherwise, a set of n tasks will run in n*t cycles, even if they don't need each other and could run independently.

Now introduce a t/n delay between each core, and a small instruction allocator running n times faster than the cores.
When it is not bypassed (i.e. when the code is not optimized for multi-core), the allocator sends each successive instruction to a different core, round-robin, offset by that t/n delay.
Thus, the cores are forced to run code in parallel, but it still takes an average of t cycles for a task to run.
A set of n independent tasks will then not run in n*t cycles, but in roughly t cycles. And if some tasks need the results of others, they simply don't run until those are completed, so it shouldn't take longer than without the allocator, but it may add up to t of extra delay (or maybe this could increase exponentially?).

So while a totally dependent program may take t cycles longer to run, a program that could be optimized but hasn't been would run up to n times faster, with up to t of added delay, which is still much closer to the running time of optimized software.
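
Here's a quick toy simulation of the idea, just to sanity-check that arithmetic (the model and the numbers n=8, t=4 are of course made up):

```python
# Toy model of the allocator: instruction i goes to core i % n, issue is
# staggered by t/n cycles, every instruction takes t cycles, and an
# instruction that depends on an earlier one cannot start before it finishes.

def run(deps, n=8, t=4):
    """deps[i] = index of the task that task i waits for, or None."""
    finish = []
    core_free = [0.0] * n                  # cycle at which each core is free
    for i, dep in enumerate(deps):
        issue = i * (t / n)                # allocator stagger
        ready = finish[dep] if dep is not None else 0.0
        start = max(issue, ready, core_free[i % n])
        finish.append(start + t)
        core_free[i % n] = finish[-1]
    return max(finish)

print(run([None] * 8))                # 8 independent tasks: 7.5 cycles, close to t
print(run([None] + list(range(7))))   # 8 chained tasks: 32.0 cycles, i.e. n*t
```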

Of course, that would mean deciding on the fly which instructions depend on one another, and having speedy hardware just for this.
But our OSes do a similar thing when scheduling parallel tasks, AFAIK. Here it would be done on a smaller scale and managed by the hardware.
And it would mean that, at its simplest, all cores run, and at the same speed. But add a fair bunch of complexity and it could dynamically manage disabled and slowed-down cores.
Also, bypassing it would mean using the CPU just like any other one.

So I just thought of that, am only giving it an afterthought as I write this, and am pretty sure it's either unreliable, inefficient or too costly for the possible improvement.
But can you tell me which one(s) and why?

At least, I hope I entertained you with my utter stupidity. Here's a PotatOS:
reconstructing_potato_glados_by_zareste-d3gb2ok.jpg
 
I see "octacore", I think "most programmers won't optimize for this many cores".

I see this SoC and I think it's another server effort that doesn't even have a built-in display controller (let alone a GPU).

Server tasks tend to be well suited to utilizing a lot of cores, which is why you see single Xeons with something like 22 cores these days.

So I just thought of that, am only giving it an afterthought as I write this, and am pretty sure it's either unreliable, inefficient or too costly for the possible improvement.

The thing with CPU cores is that they depend a lot on locality of reference for good performance. That's why they store a limited amount of data in registers, then in multiple levels of cache, with at least the first level dedicated to a single core. Because of this, the different cores are relatively far apart from each other and tend to take at least dozens of cycles to communicate. So if core 0 runs one instruction, then core 1 runs the next instruction and depends on the result from core 0, there will be a huge overhead in getting that result. If the cores were changed to be more tightly coupled with each other (so they could talk to each other faster), their critical paths would become significantly longer and therefore their individual speed would suffer tremendously.
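
Just to put ballpark numbers on that (the latencies below are made up for illustration, not measurements of any particular chip):

```python
# Rough cost of a chain of dependent instructions when every result has to
# hop between cores, versus staying inside one core.
# Illustrative, assumed latencies only:
local_forward = 1        # cycles to forward a result inside one core
cross_core_forward = 40  # ballpark cycles via a shared cache level
chain_length = 1000      # dependent instructions in a row

print(chain_length * local_forward)       # 1000 cycles on one core
print(chain_length * cross_core_forward)  # 40000 cycles ping-ponging between cores
```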

There is an area of CPU design research called speculative multithreading which attempts to split a single program among multiple hardware cores/threads, but it works at a much higher level than what you're describing. To work acceptably, the different threads must run huge blocks of code (maybe thousands of instructions) before synchronizing. The hardware still needs to be able to detect conflicts and either perform cross-core communication or roll back to a safe state when they happen. But the key point is that this must happen very rarely; the software and hardware need to perform extensive analysis to predict which divergence points have a high chance of being truly independent, and not perform them otherwise. This would probably involve a combination of software dynamic translation and some level of hardware profiling and prediction.
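
The rough shape of it can be captured in a toy model (very much a simplification for illustration; real designs do this in hardware, with actual concurrency and far finer bookkeeping): chunks run speculatively against a snapshot, log their reads and writes, and a chunk that read something an earlier chunk wrote gets rolled back and redone.

```python
# Toy model of speculative multithreading at chunk granularity.

def speculate(chunks, memory):
    # Phase 1: run every chunk against a private snapshot of memory,
    # logging what it read and wrote (pretend this happens in parallel).
    logs = []
    for chunk in chunks:
        snapshot = dict(memory)
        reads, writes = set(), set()
        chunk(snapshot, reads, writes)
        logs.append((snapshot, reads, writes))

    # Phase 2: commit in program order, rolling back any chunk that read
    # a location an earlier chunk wrote (it speculated on stale data).
    committed = set()
    for i, (snapshot, reads, writes) in enumerate(logs):
        if reads & committed:
            snapshot, reads, writes = dict(memory), set(), set()
            chunks[i](snapshot, reads, writes)   # redo against real state
        for addr in writes:
            memory[addr] = snapshot[addr]
        committed |= writes
    return memory

# Two "chunks": the second depends on the first, so it gets rolled back.
def chunk1(mem, reads, writes):
    mem["a"] = 1
    writes.add("a")

def chunk2(mem, reads, writes):
    reads.add("a")
    mem["b"] = mem["a"] + 1
    writes.add("b")

print(speculate([chunk1, chunk2], {"a": 0, "b": 0}))   # {'a': 1, 'b': 2}
```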

A company called Soft Machines claims they have a processor in development which uses this kind of technology; they call the architecture VISC. They've made very big statements about the performance and efficiency of their design. We'll see how much of it pans out.
 
This was very interesting. Thanks.
I suppose we may get quantum computers before we reach the ceiling of what we can achieve with our current technology. After all, Wozniak found a way to save one or two chips in the Apple ][... more than 30 years later.
 
This was very interesting. Thanks.
I suppose we may get quantum computers before we reach the ceiling of what we can achieve with our current technology. After all, Wozniak found a way to save one or two chips in the Apple ][... more than 30 years later.

That's really interesting about Woz, do you have a link to an article or something about that?

Hate to be a wet blanket, but I wouldn't hold out much hope for quantum computing. If it happens in any real capacity, it still might not be that generally useful. And if you thought learning how to program with threads was hard, get ready to totally change the way you view everything...
 
That's really interesting about Woz, do you have a link to an article or something about that?
I haven't checked Kev2442's link, but I remember him either telling someone still active with A2 stuff about it, or making a passing comment that one of the A2 people then asked him for extra info about. It may have even been in a video. (I am a fan of 8-bit Apple IIs, so I follow this sort of stuff and consume as much new information as I can find.)

I will post additional links if they seem worth it.
Ok, it was an Apple I person. The link in the article points at the discussion, but that is not where I originally read it. Either way, it made me smile. I wish I had that man's knowledge, skill, and passion.
 
The link in the article points at the discussion, but that is not where I originally read it.
Exactly the same for me. I love everything about older machines like computers and video games, especially from the 80s and 90s: the history, the inner workings, the optimizations and workarounds, and simply the shapes, colors, and feel. Each one really had a personality: different sound channels, color palettes, controllers...
Don't get me wrong: I also like the current machines and don't own any old system, but there's just something about the paths we took to get to where we are now. And of course I'm really interested in the road ahead.
 
As we move forward, a lot of the differences feel like they are gone, since the user can have a similar experience with most of what they get their hands on. That is what really appeals to me about the older machines (their distinctiveness). Well, one of the things, anyway. I use modern hardware for my daily drivers, but the old stuff is still out there for those who want to mess with it (and possibly push the limits of what they can do with it).
 
We all know that Intel announced a while back that they were ending future development of the Bay Trail/Cherry Trail Atom lines. However, ending those lines does not necessarily mean that they're leaving the tablet/handheld SoC space. I stumbled onto an article about the upcoming Kaby Lake CPUs.

Ignoring the Apple-specific commentary...
http://appleinsider.com/articles/16...by-lake-processors-delivered-to-manufacturers
"Kaby Lake will have five classes of processors, with two classes for mobile devices and tablets; one for laptops; and two spanning servers, high-power workstations, and desktops."

And in this article, it looks like there will be a 4.5W part:
http://wccftech.com/intel-14nm-kaby...16-256-mb-edram-hseries-91w-kseries-unveiled/

So - no future development as a dedicated mobile processor line, but the same segment will still be covered by a class of 'normal' processors, yes? We should know a lot more, and parts should exist in the wild, by early 2017 - which would be around when anything like this could even be contemplated.

Could be an interesting possibility.
 
but if they make a full CPU+GPU combo with RISC-V, that would be very tempting ;)
That little piece of silicon is just a small management unit (it is actually responsible for the whole "signed firmware" BS!); it has nothing to do with the actual calculations done by either. Its architecture shouldn't matter very much, and it wouldn't surprise me if they chose it simply to save money on licence fees and the like - using ARM, MIPS etc. is expensive, especially if you want to be able to modify the base architecture. They'll hardly use RISC-V for anything else.
 
Why wouldn't they use RISC-V for the entire CPU portion? They'd save even more licensing fees.
 