The Open Pandora 2 must use an AMD APU!


I guess you don't understand what a CPU register is, then. It's not "pretty much memory", and it certainly has no connection to RAM addressing. Registers are storage locations that are tightly integrated with the majority of the computational work a CPU performs.

x86 originally has only 8 general-purpose registers, and not all of them can be used for all tasks; most prominently, one is reserved as the stack pointer. When you don't have enough registers to hold all of the local computation being performed, you have to perform extra spills to move values between registers and memory, which is overhead. The number of registers you need to get work done without spilling, called register pressure, gets worse as instruction latencies and execution width increase, because you need to keep more operations in flight in parallel. Modern desktop processors get around this with register renaming in conjunction with out-of-order execution, which transparently moves values through non-visible registers to improve scheduling. Atom is in-order and doesn't have register renaming, so you really do need the extra registers architecturally. Seven is too few.

The extra registers make an especially big difference for recompilers, like the ones found in high-performance emulators.
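
To make the spilling concrete, here's a contrived C sketch (a made-up example, not from any real emulator): ten accumulators stay live across every loop iteration, which is fine with 15+ general-purpose registers but forces spills on x86-32 with its 7 usable ones.

```c
#include <stdint.h>

/* Contrived example: ten accumulators stay live across every iteration.
 * With ~15 usable general-purpose registers (x86-64, ARM) they can all
 * live in registers; with x86-32's 7 the compiler has to spill several
 * of them to the stack and reload them every iteration. */
uint32_t checksum(const uint32_t *data, int n)
{
    uint32_t a0 = 0, a1 = 0, a2 = 0, a3 = 0, a4 = 0;
    uint32_t a5 = 0, a6 = 0, a7 = 0, a8 = 0, a9 = 0;

    for (int i = 0; i + 10 <= n; i += 10) {
        a0 += data[i];     a1 += data[i + 1];
        a2 += data[i + 2]; a3 += data[i + 3];
        a4 += data[i + 4]; a5 += data[i + 5];
        a6 += data[i + 6]; a7 += data[i + 7];
        a8 += data[i + 8]; a9 += data[i + 9];
    }
    return a0 ^ a1 ^ a2 ^ a3 ^ a4 ^ a5 ^ a6 ^ a7 ^ a8 ^ a9;
}
```

And that's before counting the data pointer and loop counter, which also want registers of their own.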
 
Thank you for the information. These extra spills you speak of, what memory would they be transferred to and from, on-die cache or main memory? And how much of a slowdown would that be?
 
Spills generally go to the stack. The stack generally stays in cache.

It's a slowdown because you spend extra instructions that you wouldn't have otherwise (costing fetch, decode, issue, etc. width), and the processor has a limited amount of resources for performing loads and stores vs ALU operations. Usually it can do more of the latter than the former. For instance, Cortex-A8, Cortex-A9, and Atom can all do one load or one store per cycle, but two ALU ops. Sandy Bridge can do two loads, or one load and one store, but three ALU ops.
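
To put rough numbers on it, here's a back-of-envelope sketch: the per-cycle port counts are the ones quoted above, but the per-iteration op counts are made up purely for illustration.

```c
#include <stdio.h>

/* Rough throughput lower bound for a loop body:
 * at least max(memory ops / mem ports, ALU ops / ALU ports) cycles
 * per iteration.  Port counts match the Atom numbers quoted above
 * (1 load/store and 2 ALU ops per cycle); op counts are made up. */
static double min_cycles(int mem_ops, int alu_ops, int mem_ports, int alu_ports)
{
    double mem = (double)mem_ops / mem_ports;
    double alu = (double)alu_ops / alu_ports;
    return mem > alu ? mem : alu;
}

int main(void)
{
    /* Hypothetical loop body: 10 data loads + 10 adds. */
    printf("no spills:   %.1f cycles/iter\n", min_cycles(10, 10, 1, 2));
    /* Same body plus 4 spill stores and 4 reloads. */
    printf("with spills: %.1f cycles/iter\n", min_cycles(18, 10, 1, 2));
    return 0;
}
```

In this toy model the spills nearly double the time per iteration, purely because they compete for the single load/store slot while the ALUs sit half idle.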
 
IIRC there is also the 64-bit-wide data path, increasing the speed of some operations: http://en.m.wikipedia.org/wiki/64-bit_computing
That's true too, but this is probably not a big win for most applications outside of a few HPC domains. You don't usually benefit much from 64-bit integer arithmetic, and sometimes the SIMD ISA provides some of that anyway. If the machine lacks SIMD entirely you can use 64-bit integers to perform some pseudo-SIMD in a limited fashion, but that's not pertinent for Atom or current ARMs.
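
If anyone's wondering what that pseudo-SIMD looks like, here's a minimal C sketch (the technique is sometimes called SWAR): adding eight packed bytes inside one 64-bit integer, with a little masking so carries can't cross byte lanes.

```c
#include <stdint.h>
#include <stdio.h>

/* Pseudo-SIMD ("SWAR") sketch: add eight unsigned bytes packed into a
 * 64-bit word without letting carries cross byte boundaries. */
static uint64_t add_bytes(uint64_t a, uint64_t b)
{
    const uint64_t high = 0x8080808080808080ULL;
    /* Add the low 7 bits of every byte (can't overflow into the next
     * byte), then patch each byte's top bit back in separately. */
    uint64_t low_sum = (a & ~high) + (b & ~high);
    return low_sum ^ ((a ^ b) & high);
}

int main(void)
{
    uint64_t a = 0x0102030405060708ULL;
    uint64_t b = 0x1010101010101010ULL;
    /* Prints 1112131415161718: eight byte-wise additions in one go. */
    printf("%016llx\n", (unsigned long long)add_bytes(a, b));
    return 0;
}
```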
 
I recently held a course in "Programming Languages and Paradigms" for third-year systems development students. We started out with a lecture and a lab on assembly programming (it being about as purely imperative as you can get). A lot of these very accomplished C#/Java programmers got real humble, real quick, I can tell you :)

We abstract away too much of the computer these days.
 
But without those many layers of abstraction, not as many people would be using computers today...
 
Yes, of course. For the ordinary user, we need to abstract away all the hairy stuff. The problem is when we do that even for those who are supposed to know the machine.

This isn't aimed at you in particular, it's just a general observation: modern easy-to-use languages give us programmers who don't know what their code is actually doing.
 
But do we really need to know what is going on to be able to use it? Analogy: does a race car driver really know everything at the lowest level, like the physics and all the engineering?
 
No, you don't have to know about it in order to use software written for it; no one is making a claim like that, so why bring it up?

On the other hand, if we're going to talk about what hardware should be selected for an upcoming device, then it helps to know something about it.
 
That post was just me wondering if anyone actually needs to completely understand something from a deep technical point of view in order to understand the rules well enough to take advantage of it. That's why I brought up a car: not many people really know how they work, but a lot can still take those things to their limits. Exophase, just relax, I mean no harm...
 
Personally I'm not fond of x86 from an architecture point of view; it's just awful to program for. It's an extension of a 16-bit CPU (8086) instruction set that is itself an extension of an 8-bit CPU (8080/8085) instruction set. It has a bunch of instructions (BCD, for example) that nobody ever uses, neither compilers nor humans. It has various crazy 16-bit modes that are not useful for anything except early boot steps (recent Windows can no longer switch to 16-bit modes to make use of them). I don't know how much of that junk mobile x86s are still carrying around, but it's something ARM doesn't have to do. Of all the more popular instruction sets out there it's the most awful to me, and I'd prefer it to just die, but of course that is not going to happen with giant Intel behind it, sadly.
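
Just to show what I mean by junk, here's a small C sketch using the ancient packed-BCD DAA instruction via inline asm. It's 32-bit only (build with gcc -m32); these opcodes were simply removed from 64-bit mode, and no compiler will ever emit them for you anyway.

```c
#include <stdio.h>

/* Illustration only: packed-BCD addition using the legacy DAA
 * instruction.  Requires 32-bit x86 (gcc -m32); DAA does not exist
 * in 64-bit mode. */
static unsigned bcd_add(unsigned a, unsigned b)
{
    unsigned result = a;
    __asm__ ("addb %b1, %b0\n\t"   /* AL += low byte of b */
             "daa"                 /* adjust AL to packed BCD */
             : "+a" (result)
             : "q" (b)
             : "cc");
    return result & 0xff;
}

int main(void)
{
    /* BCD 27 + BCD 15 = BCD 42, i.e. prints 0x42. */
    printf("0x27 + 0x15 = 0x%02x (BCD)\n", bcd_add(0x27, 0x15));
    return 0;
}
```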
 
That post was just me wondering if anyone actually needs to completely understand something from a deep technical point of view in order to understand the rules well enough to take advantage of it. That's why I brought up a car: not many people really know how they work, but a lot can still take those things to their limits. Exophase, just relax, I mean no harm...
No, the driver (i.e. the user) doesn't have to know. However, the mechanic and the car designer had better know, or else things will go badly.
 
I want to agree with you about x86 here, but then I think of all the COBOL programmers out there who are commanding huge sums of money working on legacy systems.

16-bit should be in the dustbin of history by now, but really, I'd be surprised if there weren't people still using it for things they shouldn't be using it for. Although most of the time when I need to execute those programs, it's because the installer was 16-bit, and I could probably run it with QEMU.
 
It's good AMD is making a more serious effort to get into tablets, at least. It feels like they've barely been trying up until now...
 
Yeah buddy! A 100% GPU improvement bumps it up to the performance level of an 18 W APU.
Did you see the Tegra 4? Or the Snapdragon 800? Or the Exynos 5?
? What does that have to do with twice the GPU performance?

super yeah buddy!!!

https://www.youtube.com/embed/FruxOZ9Nfp0?feature=oembed
 