The game industry has gotten pretty much nowhere so far with parallelizing game engines, and most average users can't see any real benefit from multi-core processors yet, because they don't multitask as heavily as you or I likely do-- compounded by the fact that most software is single-threaded. The benefits of, say, a 4- or 8-core processor won't truly be seen by most users until CPU architecture allows one thread to be split into many for execution on general-purpose or specialized cores.
That's a problem of programming paradigm. People don't like dealing with multi-threading because it involves worrying about synchronization, but that doesn't mean it can't be done in a lot of places where it currently isn't.
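To make that concrete, here's a minimal sketch of the kind of work that splits cleanly across threads with almost no synchronization headache. The Entity type and update function are made up purely for illustration:

    // Sketch: split a per-entity update loop across N worker threads.
    // Each worker owns a disjoint range, so no locks are needed inside the loop.
    #include <cstddef>
    #include <functional>
    #include <thread>
    #include <vector>

    struct Entity { float x, y, vx, vy; };

    void updateRange(std::vector<Entity>& ents, std::size_t begin, std::size_t end, float dt) {
        for (std::size_t i = begin; i < end; ++i) {
            ents[i].x += ents[i].vx * dt;
            ents[i].y += ents[i].vy * dt;
        }
    }

    void updateAll(std::vector<Entity>& ents, float dt, unsigned cores) {
        std::vector<std::thread> workers;
        std::size_t chunk = ents.size() / cores;
        for (unsigned c = 0; c < cores; ++c) {
            std::size_t begin = c * chunk;
            std::size_t end   = (c + 1 == cores) ? ents.size() : begin + chunk;
            workers.emplace_back(updateRange, std::ref(ents), begin, end, dt);
        }
        for (auto& w : workers) w.join();   // the only synchronization: one join per frame
    }

Nothing exotic-- the only requirement is that the data can be partitioned, which plenty of game workloads can.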
The idea's been tossed around before, and it is rumored that both AMD and Intel are working on the concept. Once that happens, a processor's speed could scale linearly with the number of cores included, without concern for the software-- as long as heat and power issues are kept in check.
I honestly don't see how a single thread could literally be split into many in any conceivable fashion, but if it is possible, then that only further confirms that the lack of multi-threading is a programmer problem.
This hasn't happened yet, and the XB360 and PS3 also lack the ability to derive instructions for multiple cores from one thread. Most engines use only one of the XB360's cores right now, and it will stay that way for quite some time.
Again, that's a programmer problem, not a hardware one. Clearly the big companies are trying to force a paradigm shift, because multi-core is the only realistic way forward at the moment-- for classical computing, anyway.
Many developers will never bother trying to use all three, but let's face it, some code just does not parallelize effectively at the programmer or compiler level-- it has to be done at the hardware level or not at all.
Thread-level parallelism is a lot different from the kind of vector-level parallelism you're talking about, and I really don't think hardware can split things into threads in a way that a programmer can't. Actually, the idea sounds ridiculous, since programmers know much more about a program's typical data flow than hardware can tell at runtime.
Given the present situation with multi-core processors, one faster CPU core would probably have been more cost-effective, while providing higher performance per programmer hour invested.
Obviously this isn't about "performance per programmer hour"; just as with the PS2, both Microsoft and Sony are making programmers work significantly harder so they can achieve better results in the end.
But given that the vast majority of a game's budget is spent on graphics anyway, is it such a huge burden to shift some of that back to programming?
I would consider this a more logical course of action, at least in a game console. If we were talking about marketing the PS3 as a node for a supercomputing cluster, its design would make sense, but as a game console, it really doesn't.
And yet compare what farms with high computing capabilities, such as render farms, often actually do with what GAMES do...
Yes, the PS3's "Cell Processor" has considerable potential. The problem is that while this power, if properly tapped, would result in extremely high computational ability, properly tapping it in a game scenario is not realistic. I am well aware that vector/floating-point math comprises the majority of what a game engine does (I've worked with 3D engines firsthand; Quake, for example, has no support for any integer data types, period. It's almost 100% FP code),
Who ever said anything about floating point vs. integer? Floating point usage is a downright given.
but this is not the reason the Cell processor is poorly suited to a game. A vector processor like the Cell is structured as follows: a "master" processor passes instructions to multiple RAM-limited vector cores, generally to perform one operation over a MASSIVE data set. This allows very fast execution of repeated operations such as complex math, compression, encryption, code-breaking, etc., as I have said, because the vector cores do not need to receive new instructions frequently, do not access main RAM frequently, and can just churn out results of the same variety over and over with little to no intervention by the "master" processor.
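To illustrate the shape I'm describing (this is NOT actual Cell/SPE code-- just a rough sketch of the stream-in/compute/stream-out pattern, with the local buffer size and the kernel invented for the example):

    // Sketch: one "worker" repeatedly pulls a chunk of a huge array into a small
    // local buffer (standing in for an SPE's local store), applies the SAME
    // operation to every element, and pushes the results back out.
    #include <cstddef>
    #include <cstring>

    const std::size_t LOCAL_SIZE = 4096;                         // stand-in for a small local store

    static inline float kernel(float x) { return x * 1.059463f; }   // the one repeated operation

    void processStream(const float* in, float* out, std::size_t count) {
        float local[LOCAL_SIZE];
        for (std::size_t base = 0; base < count; base += LOCAL_SIZE) {
            std::size_t n = (count - base < LOCAL_SIZE) ? count - base : LOCAL_SIZE;
            std::memcpy(local, in + base, n * sizeof(float));    // "DMA in"
            for (std::size_t i = 0; i < n; ++i)
                local[i] = kernel(local[i]);                     // churn with no intervention
            std::memcpy(out + base, local, n * sizeof(float));   // "DMA out"
        }
    }

As long as the data set is big and the operation doesn't change, the worker barely needs to hear from the master at all.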
Okay, for one thing, the Cell processor's internal peak bandwidth is extremely high, much higher than any RAM subsystem's. For another, the SPEs are general-purpose processors that don't need to be given static control-flow commands from the "master."
In a game situation, the same task is not being performed again and again.
Certainly it is. Within a single frame, many of the same operations are performed over a huge set of data: transformation and lighting obviously, but even moving away from graphics rendering, physics and collision...
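For one simplified example of what I mean, here's the shape of a per-frame transform pass-- the exact same math applied to every vertex in a huge set (the types and names here are just for illustration):

    // Sketch: apply one 4x4 transform (row-major, last row assumed 0,0,0,1)
    // to every vertex in the frame's working set.
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Mat4 { float m[4][4]; };

    Vec3 transform(const Mat4& M, const Vec3& v) {
        return {
            M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3],
            M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3],
            M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]
        };
    }

    void transformAll(const Mat4& M, std::vector<Vec3>& verts) {
        for (auto& v : verts)      // identical math per element, tens of thousands of elements
            v = transform(M, v);
    }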
Every bit of game logic executed changes the situation to be processed and the math that must be done on the next frame.
And yet the working data set within a single frame is huge.
Every processor's results must be passed back to the Master and fit into a larger, constantly changing puzzle. This means the Master core must constantly be receiving interrupts and slowing its own work to listen to the vector units and pass them new instructions.
Like I said, the bandwidth is massive, and obviously they can communicate via memory and not interrupts.
Because the vector units are tied to their own memory, they cannot operate effectively independently of the Master core. And since the situation they deal with changes so constantly, they cannot come anywhere near peak efficiency when crunching real game logic, which will likely make them ineffective for real-time physics processing. A present-day physics processor isn't a vector processor; it's one very fast vector FPU with a wide, fast path to memory, and that's it-- just like a GPU, which is why nVidia and ATi are both working on using a GPU as a physics processor, or putting a second GPU core on the same die as one.
And yet the physics cards available now are very similar to what we have going on with the PS3: they're graphics-card shader units (parallel vector units, just like the SPEs) that pump data back and forth to a main CPU. The difference is that such physics cards have to go over the much slower PCI-E bus, which is quite a bit different from the scenario you described-- there, the Cell has the obvious bandwidth advantage.
I have established that you will not achieve peak efficiency, or anything close to it, in a real game scenario, but I must address your statement regarding time invested vs. results obtained-- yes, a skilled programmer can find a way to make the most of this situation, while a less skilled or lazy programmer will do a miserable job, or will skip the vector units altogether and code just for the Master core for equally abysmal performance with less hassle. The problem is, game development is usually rushed these days, with problems fixed in patches after the game is forced out the door in a frenzy. By the time one game is developed that is a veritable marvel of programming prowess, three games could have been built with less perfectionistic, less effective methods. Companies care less about ideal, beautiful code and perfect use of the hardware than they do about their bottom line.
Their bottom line is all about remaining competitive, and getting the most out of the hardware is the only way to do that. When companies like Sony and Microsoft push more-difficult-to-use (but more powerful) platforms, developers don't really have a choice but to utilize them well if they want to be competitive.
Such code is also highly specific to the PS3 architecture and will not port cleanly to the XBox 360 or Wii. Most studios would much rather keep easily portable single-threaded code that can be 'copy-and-pasted' into ports sold for the Wii, XB360 *and* PS3 than worry about optimizing for one or the other.
They might want that, but they won't get it if they want their games to be competitive, like I said.
That's just a revenue vs. time equation right there, and the idealistic programmer generally doesn't win in such situations.
It doesn't really matter; why should I care about what game companies can do to make money more efficiently if it means churning out inferior games?
Regarding Northwood, I never stated heat and power consumption impacted IPC.
No, you just listed those things with it, even though they were 100% irrelevant.
I stated that a longer pipeline necessitated more transistors, which produced more heat and used more power. I used Northwood as an example because it was not clock-for-clock competitive with Athlon XPs and 64s, or with Intel's 'Core'-series architectures, for a reason. It is often assumed that a 3.2 GHz PowerPC chip like those used in the PS3 or XB360 is necessarily competitive with a 3.2 GHz Pentium D or other modern dual-core processor, while in actuality the performance may be dramatically lower-- just as a 2.8 GHz Northwood P4 did not stand up to a 2.8 GHz Athlon 64. For the two to be on relatively equal ground, you'd be looking at more like a 4.1 GHz+ P4 to match the A64 at 2.8 GHz. For a more classical example, look at the Z80 vs. the 6502: the Game Boy had a Z80-based processor at nearly 4 MHz that performed on par with the NES' 1.79 MHz 6502, and sometimes ran slower, because the Z80 took nearly twice as many instructions to get the same job done as the 6502's instruction set did! It is not unrealistic to think that a similar 2:1 performance gulf could exist when the clock rates being compared are not ~2 and ~4 MHz but ~1.5 and ~3 GHz.
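To put rough numbers on that (these are just the figures already quoted above, treated as clock rate divided by relative instructions-or-cycles per job-- nothing measured):

    // Back-of-the-envelope "effective clock" from the figures in the post above.
    #include <cstdio>

    int main() {
        double z80_eff   = 4.0  / 2.0;   // ~4 MHz, ~2x the instructions per job  -> ~2.0 effective
        double m6502_eff = 1.79 / 1.0;   // ~1.79 MHz baseline                    -> ~1.79
        double p4_eff    = 4.1  / 1.46;  // ~46% more cycles per job (4.1 vs 2.8) -> ~2.8 effective
        double a64_eff   = 2.8  / 1.0;   //                                       -> ~2.8
        std::printf("Z80 %.2f vs 6502 %.2f | P4 %.2f vs A64 %.2f\n",
                    z80_eff, m6502_eff, p4_eff, a64_eff);
        return 0;
    }

Same idea at ~1.5 GHz vs ~3 GHz: double the clock buys you nothing if each unit of work costs twice the cycles.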
I understand what you're saying; so does everyone. But just because it's possible for a 3.2 GHz CPU to perform on the level of some other 1.4 GHz CPU, in THIS case I really don't see that as likely. Not a 1.4 GHz Willamette.
Oh, and things have changed a lot since the days of early CISC CPUs, in terms of how long it takes to do what.
As for your statements regarding the Saturn, the reason for its failure was not laziness on the part of programmers. There were true design faults in the implementation of the dual-processor configuration. The two CPUs could not utilize RAM at once, and had no cache whatsoever.
Okay, you're just flat-out wrong there; both CPUs had 4 KB of cache and could map RAM directly with it.
As a result, one processor was generally stalled while the other was working, unless RAM could be shuffled between the two chips when neither needed to access it-- which was relatively infrequent. The amount of time required to delegate tasks between the two correctly, as in the PS3 situation, didn't square well with a design timeframe and the bottom line. It was easier to just code for one CPU and ignore the other, or cancel the project altogether and jump ship for the PSX.
Right, so now we're talking about what's easier, as if that's very different from "laziness."
Other problems with the Saturn involved the fact that it contained not just the twin SH-2 processors, but also a 68000 processor for audio, a complex audio subsystem, and an EXTREMELY complex video subsystem involving two VDPs and a bizarre quad-based (not triangle-based) rendering system. That system was designed for 3D-accelerated display of sprites 'stretched and scaled' by manipulating polygons instead of 2D tile data, but it was then applied to more 3D games than 2D ones, resulting in unneeded complexity in model design and manipulation (nearly all 3D tools were, and still are, based on triangles, not quads). Sega really did have a lot more going for it with the dual-processor marketing than with the architecture of the machine in this case. I believe that philosophy made a comeback.
Yes, the Saturn was very DIFFICULT to develop for, and contained far more hardware than necessary. I'm not going to argue any of that, because it has nothing to do with what you originally said, which was that one faster CPU would have been more powerful than its two. And I said that unless that CPU was MUCH faster (which was not realistic at the time), that was not true.