PPC vs x86 has nothing to do with the performance considerations here. Those are instruction set architectures, not CPU micro-architectures. None of the PPC cores in the current consoles come anywhere close to a high-end design like IBM's current POWER series, or the Mac G5 of old, or even the G4. It's like saying that Atom reflects high-end desktop performance because it's x86. In fact, the PPC cores in the PS3 and XBox360 are pretty similar to Atom in design (in-order, only dual-issue, not very wide, relatively long pipeline), while the clock-bumped Gecko derivative in the Wii is more sophisticated but still slow and old (more like G3-level technology with better SIMD).
The reason the PS3 and XBox360 use PPC at the center of their control logic isn't that x86 wasn't good enough; it's that they don't really need a lot of power here at all. Instead, the power requirements have shifted to the same place they always have for consoles: to graphics and (more recently) to vector coprocessors. For the bits that run the game loops and higher-level tasks, the concern is more about getting something cheap and simple to implement, and therefore we get simple PowerPC cores. It wouldn't have been much of a surprise if they were simple MIPS cores instead, although PowerPC is a nicer ISA than MIPS by far.
it takes about 10 seconds to check your facts
How many seconds does it take to second-guess everything you write? Sometimes you're just going to think you know something that you don't, and you're going to have to deal with being wrong about it. For instance, several things said in your post are wrong:
if the x86 in question is a 386 yes.
x86 has been outperforming PPC for quite some time now; it's the reason Apple switched to x86. you could not get a PPC in a laptop that would perform decently without burning a hole through the casing or draining the batteries in 10 minutes.
I'll ignore the ridiculous hyperbole behind "if it's a 386." Yes, Intel x86 was outperforming and providing better perf/Watt than the cores Apple was managing to get at the time, but IBM is still (to this day) producing high-end PPC that can easily give high-end x86 a good run; read up on:
http://en.wikipedia.org/wiki/POWER7
And note in the top500:
http://www.top500.org/list/2010/06/100
All of the "BlueGene" systems, which are PPC-based.
another problem with RISC processors & 64-bit today is that memory bandwidth is the main bottleneck; it's the reason that ARM created the Thumb instruction set.
current processors can't get their instructions fast enough, nor can they fit enough in the cache to work efficiently.
x86 and ARM Thumb have the advantage of smaller instructions, therefore fitting more inside the same-sized cache and reading less from memory for the same amount of work done.
The main reason ARM created the Thumb instruction set is that ARM7 SoCs were being fitted with narrow 16-bit external interfaces, and a 32-bit-wide instruction set was killing performance when instructions were fetched directly from external memory. With the move to on-chip caches with full 32-bit fetch in ARM9, the weak Thumb ISA went all but unused. Thumb-2 persists today for saving flash space in the embedded sector on Cortex-M series CPUs. In the Cortex-A series it again goes all but unused, and in benchmarks it shows practically the same performance despite having a lower icache footprint.
Icaches continue to grow in size, and meanwhile, for heavily threaded loads, a lot of threads end up sharing the same instruction streams. The fact is that code density isn't a huge issue anymore. Also, I've yet to see a 64-bit ISA that features larger code than its 32-bit predecessor (they all still use 32-bit instructions, typically). Another consideration is that although x86 had reasonable code density in the early 80s, these days the ISA design is very poorly optimized for the use cases that matter, like SIMD, conditional execution, 64-bit, extended addressing modes, etc., so it has lost quite a lot of whatever advantage it had in this space.
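If anyone actually wants numbers instead of hand-waving, the code-density comparison is a five-minute experiment: build the same C file for ARM and for Thumb-2 and compare the text sizes. Rough sketch below - it assumes an arm-none-eabi GCC cross-toolchain is installed, the file and function names are made up, and the exact numbers will obviously depend on the compiler and the code you feed it:

/* density.c -- any representative code will do; this is just a stand-in.
 *
 * Build it both ways and compare the .text sizes, e.g.:
 *   arm-none-eabi-gcc -O2 -marm   -c density.c -o density-arm.o
 *   arm-none-eabi-gcc -O2 -mthumb -c density.c -o density-thumb.o
 *   arm-none-eabi-size density-arm.o density-thumb.o
 *
 * The Thumb-2 object typically comes out noticeably smaller, but as argued
 * above, run-time performance on a Cortex-A class core is practically the same.
 */
unsigned checksum(const unsigned char *p, unsigned len)
{
    unsigned sum = 0;
    while (len--)
        sum = (sum << 5) + sum + *p++;   /* simple djb2-style hash loop */
    return sum;
}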
Stephane Hockenhull said:
RISC performed well when memory ran at the same speed as, or half, the processor speed; now memory is more than 10 times slower.
if you want a PPC system to perform decently, you need an insanely large data bus and a large cache, which is very expensive.
What needs very large caches to perform well are CPU designs with huge in-flight windows; see Pentium 4. Memory isn't more than 10 times slower; that's a misconception. In terms of bandwidth it's still hovering within a few times of CPU speed (when looking at 32-bit transactions per clock). The latency is way up there, but architectures do more and more to hide latency, including good hierarchical cache design and aggressive prefetch.
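Here's a rough way to convince yourself of the bandwidth-vs-latency distinction: time a linear streaming pass (which the prefetchers handle easily) against a dependent pointer chase over the same amount of data (where every load has to wait out the full miss latency). Just a sketch - the buffer size is an arbitrary "bigger than any cache" choice, and rand() is only there to scramble the chase order:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 24)   /* 16M elements, far larger than any cache */

int main(void)
{
    /* Streaming pass: sequential addresses, easily prefetched,
     * limited mostly by memory bandwidth. */
    unsigned *buf = malloc(N * sizeof *buf);
    if (!buf) return 1;
    for (unsigned i = 0; i < N; i++)
        buf[i] = i;

    clock_t t0 = clock();
    unsigned long sum = 0;
    for (unsigned i = 0; i < N; i++)
        sum += buf[i];
    clock_t t1 = clock();

    /* Pointer chase: each load's address depends on the previous load,
     * so prefetching can't help and every miss costs full latency.
     * Sattolo-style shuffle (j < i) gives one big cycle over all elements. */
    unsigned *next = malloc(N * sizeof *next);
    if (!next) return 1;
    for (unsigned i = 0; i < N; i++)
        next[i] = i;
    for (unsigned i = N - 1; i > 0; i--) {
        unsigned j = (unsigned)rand() % i;
        unsigned tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    clock_t t2 = clock();
    unsigned p = 0;
    for (unsigned i = 0; i < N; i++)
        p = next[p];
    clock_t t3 = clock();

    printf("streaming: %.2fs (sum %lu)\n", (double)(t1 - t0) / CLOCKS_PER_SEC, sum);
    printf("chasing:   %.2fs (end %u)\n",  (double)(t3 - t2) / CLOCKS_PER_SEC, p);
    free(buf);
    free(next);
    return 0;
}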
But basically you're saying that PPC is at an extreme disadvantage for having 32-bit instructions, right? Or is it because you think it takes a lot more instructions to get the same work done by virtue of being RISC? Because that too isn't really the case for typical compiler output.
Stephane Hockenhull said:
today's x86 processors are not even CISC processors anymore; they're specialized RISC cores interpreting the x86 instruction set with a minimalistic dynamic recompiler. all that complexity still pays off because of the size of the x86 instruction stream vs an equivalent 32-bit RISC. and 64-bit RISC is much worse.
First of all, the only x86 designs that had "dynamic recompiler" elements were Pentium 4 with its trace cache and Nehalem with its loop buffers - an important part of being a dynarec is that the results of the recompilation are somehow persistent. Changing the representation temporarily while instructions are in flight in the pipeline does not constitute recompilation, and probably most CPUs do this to some extent.
Second, whether a CPU is RISC or CISC has nothing to do with its internal representation of opcodes; it is strictly defined by its ISA. I see the claim that "x86 is RISC now" pushed all the time, and this is not merely a semantic issue, because the nature of the ISA heavily limits your computational expression at the front-end (despite the belief that CISC is more expressive than RISC, it's actually more registers and 3-address arithmetic that buy you expressiveness, not to mention lots of other things that have made it into more modern ISAs). Although the disadvantages are less pressing in an OoO design.
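To make the expressiveness point concrete, here's roughly what a compiler has to do with something as trivial as a = b + c on a 2-operand machine versus a 3-operand one. The register names in the comments are generic stand-ins, not actual codegen for this exact function:

/* Illustrative only: how a compiler typically maps `a = b + c`
 * onto a 2-operand ISA versus a 3-operand one. */
int add3(int b, int c)
{
    int a = b + c;
    /* classic 32-bit x86 (destructive, 2-operand arithmetic):
     *     mov  reg1, b       ; extra copy, because add overwrites its destination
     *     add  reg1, c
     * PowerPC (non-destructive, 3-operand arithmetic):
     *     add  rD, rA, rB    ; b and c stay live in their registers for reuse
     */
    return a;
}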
Third, you make it sound like the industry has stuck with x86 because of its code density, and has been willing to jump through lots of hoops translating it internally because of this advantage. That, of course, is nonsense; the industry has stuck with x86 for legacy/backwards compatibility, and that's it. Intel themselves have tried to move away from x86 in the past, and no one actually believes that code density outweighs the disadvantages of the ISA.
Again, 64-bit and 32-bit RISC almost always have the same instruction width. You use a little bit more dcache when storing 64-bit pointers instead of 32-bit ones (x86-64 is hit with this too), but it's not a huge issue.
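The dcache cost is easy to see for yourself - it comes purely from pointer width, not instruction width. A small sketch (the node layout is just a made-up example):

#include <stdio.h>

/* Hypothetical pointer-heavy node, to show where the 64-bit dcache cost comes from. */
struct node {
    struct node *next;
    struct node *prev;
    int          value;
};

int main(void)
{
    /* On a typical ILP32 target: pointers are 4 bytes, sizeof(struct node) == 12.
     * On a typical LP64 target:  pointers are 8 bytes, sizeof(struct node) == 24
     * (16 bytes of pointers plus padding to 8-byte alignment). */
    printf("pointer: %zu bytes, node: %zu bytes\n",
           sizeof(void *), sizeof(struct node));
    return 0;
}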
Stephane Hockenhull said:
it's not a question of processor design; today it's only about memory bandwidth & cache size.
Ah, really? Let's have a case study:
Pentium-M: 1.6GHz, 400MHz FSB, 2x 32KB L1 cache, 512KB L2 cache (yes they exist, look for them), 64-bit external memory interface
Atom: 1.6GHz, 533MHz FSB, 24KB/32KB L1 cache, 512KB L2 cache, 64-bit external memory interface
Both have similar cache hierarchies, prefetching capabilities, and memory interfaces. Pentium-M has a little more L1 dcache; Atom is paired with faster RAM.
Which do you think outperforms the other by 2x in typical benchmarks? Pentium-M is much faster by virtue of being out-of-order and wider, i.e. processor design, not memory bandwidth and cache size.
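If you want to watch "processor design" show up on its own, here's a crude microbenchmark sketch: the same number of multiply-adds arranged as one long dependency chain versus four independent chains. Neither loop touches memory to speak of, so bandwidth and cache are out of the equation; a wide out-of-order core overlaps the independent chains and pulls well ahead, while a narrow in-order one barely moves. The iteration count and names are arbitrary:

#include <stdio.h>
#include <time.h>

#define N 200000000UL

/* One long dependency chain: each multiply-add waits on the previous result. */
static unsigned long serial_chain(void)
{
    unsigned long a = 1;
    for (unsigned long i = 0; i < N; i++)
        a = a * 3 + i;
    return a;
}

/* Same number of multiply-adds split into four independent chains,
 * which a wide/out-of-order core can execute in parallel. */
static unsigned long parallel_chains(void)
{
    unsigned long a = 1, b = 2, c = 3, d = 4;
    for (unsigned long i = 0; i < N; i += 4) {
        a = a * 3 + i;
        b = b * 3 + i + 1;
        c = c * 3 + i + 2;
        d = d * 3 + i + 3;
    }
    return a ^ b ^ c ^ d;
}

static double seconds(clock_t t0, clock_t t1)
{
    return (double)(t1 - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    clock_t t0 = clock();
    volatile unsigned long r1 = serial_chain();
    clock_t t1 = clock();
    volatile unsigned long r2 = parallel_chains();
    clock_t t2 = clock();

    printf("serial chain:       %.2fs (result %lu)\n", seconds(t0, t1), r1);
    printf("independent chains: %.2fs (result %lu)\n", seconds(t1, t2), r2);
    return 0;
}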