Cortex A15 - 5x faster.


I have to admit that the x86 move always confounded me - I liked the PPC models, personally. :p I always figured it was down to the prevalence of clock-number myths, to be honest, but that's probably incorrect.


I'm pretty sure that Microsoft doesn't part-own Apple, though - I was under the impression that they held non-voting stock.
 
... wow, I hate to come off as especially mean, but you've said quite a few incorrect things here >_>

Apple cannot currently move to ARM or any other architecture before Windows does so, due to contractual situations... the only reason that Mac made the move to x86 is because they were dying and Microsoft had to bail them out (by buying >50% of Apple) in order to keep themselves from being a commercial monopoly...

That's a strange, strange view of reality you have there. Microsoft definitely doesn't have a controlling interest in Apple (all of those "I'm a Mac", "I'm a PC" commercials over the past several years since Mac went x86 should have tipped you off).


The reason why Apple moved Mac platforms to x86 is because their beloved PowerPC architecture was not keeping up with process ramps and was starting to stagnate in performance/Watt, and because Intel was rolling out x86 CPUs that were substantially better than the design trainwreck that was Pentium 4. Apple wasn't dying at this point, in fact Mac sales were improving and iPod was doing extremely well. Microsoft certainly played no role in any of this, and if anything with the move towards x86 Apple has done more to distance themselves from Windows since they're now competing in the desktop space more on the merits of their systems software than hardware.

Linux is obviously a competitor as well, but it's not a commercial competitor, so they don't come into play with America and big business...

In fact, Linux (and other large scale open source endeavors like the rest of GNU, particularly GCC, Apache, MySQL, so on and so forth) has a very significant commercial presence; this is more in terms of impact to companies than generated revenue, but there's a bit of that going around as well.

The secondary reason is that PPC is capable of outperforming x86 (as is seen in the PS3)... the fact that it has been legally capable of running Linux is NOT the only reason military establishments have been using them in server farms... in the realm of raw power, XBox 360 is leaps and bounds behind... by all regards, 360 is an x86 system that is extremely proprietary and locked down... but it's still x86 nonetheless.

There are some very powerful PowerPC systems out there that IBM is responsible for - PS3 and XBox360 are very certainly not examples of them. The PowerPC element in the PS3, the so-called "PPU", is utter weaksauce that can't hold a candle to a 5 year old x86 core. The XBox 360 is essentially 3 of those cores, also PowerPC, NOT an x86 system like you claim - that's the original XBox you're thinking of. These are "in-order" and "narrow" processors that couldn't be further removed from raw power; PS3 gains its computational edge from its SPU SIMD coprocessors.


I don't know if the military has been using PS3s or not, but if they have it's probably because they're relatively cheap ways to get vector processors because of how heavily Sony has subsidized their cost, at significant loss.

When all is said and done, the only way I can see ARM making real headway in the desktop market is if they make it non-embedded, or give it a way to easily be swapped for a newer version when released...

Right. First, desktops are by their very definition non-embedded; in fact something like the Pandora or even GP2X, Wiz, or Caanoo also qualifies as non-embedded. Embedded hardware means it is suitable for the role of context-specific deployment; what I mean by this is that you design both the hardware and the software of a system to perform a particular task, and not to run software of the user's desire. A mainframe, server, desktop, laptop, etc is not embedded. A washing machine, microwave, alarm clock, satellite control system, car fuel injection system, etc is embedded. Modern phones straddle the line depending on how "smart" they are.


Second, ARM doesn't make chips; they sell IP to companies that implement chips. For a variety of reasons, ARM makes sense as part of an SoC. As things move more towards the SoC direction there becomes less and less reason to socket one on a mainboard; e.g., considering phones and other handhelds, their costs are low enough that it makes more sense to just buy new ones. As desktops start to cost closer to $200 and under it'll make less sense to follow an upgrade path for them as well (and for most people this never really made sense anyway).


The fact is, desktop CPUs have slowly but surely been moving down the SoC path as well. First they integrated L2 and higher cache on die, then the memory controller, then the entire "north bridge" (see e.g. HyperTransport on AMD and Intel's equivalent, QuickPath Interconnect), now they're getting graphics cores, and soon ones that'll be worth taking seriously. As die size shrinks we'll see more and more functionality moved on-die. We're also seeing the replacement of conventional hard drives with SSDs, and removable optical media drives are being phased out entirely as they're replaced by USB drives, SD cards, and downloadable content. So in a sense, while I still don't think laptops will replace desktops, I think that we're going to see desktops become really tiny and really replaceable, and putting a chip on a big socket is going to be contrary to this aim.


So no, I don't think making replaceable ARM CPUs is a big priority, but if it is, it's a trivial detail because it just means that whoever implements them has to design for it. It's not ARM's problem.

that's way too common amongst desktops to be lost by switching to something better but with fewer mod capabilities... Average Joe will never really know the difference, but they aren't marketing for them simply because they won't know the difference, so there's no reason... they would be targeting builders, modders and custom creators as well as large-scale mfgs, but large-scale mfgs don't really make the bulk of revenue for this sort of thing...

Well, obviously I disagree. For all practical intents and purposes the modding community exists so long as there's something worth modding. But this is pretty moot because they're a fringe niche in the overall scheme of things. The bulk of those buying desktops don't even have the capability to mod on the scale of their orders, much less the desire. Almost no companies are building or modifying their own machines.

if they did, Intel and AMD wouldn't have so many variations of their processors on the shelves in retail stores.

Completely moot: these variations match the options OEMs give; so long as OEMs are content to solder what they want onto boards, it's a non-issue.


Which they generally will be, since they are for laptops and netbooks (okay, laptops have some modularity, so I guess this will be driven by market demands - but it's seriously only a minor issue)
 
A dual-core 2.5GHz ARM chip would be all most people would need in a laptop (I wish my laptop was that fast).

Hell, sure, 640KB should be enough, shouldn't it?


Intel will be fine with x86 as long as Microsoft and MOST desktop software publishers release software only for this architecture...


no matter how inefficient and outdated it is, especially since they are able to sustain some great computational power...
 
I thought that the Xbox 360 was also a PowerPC-based machine?
"The XCPU, named Xenon at Microsoft and "Waternoose" at IBM, is a custom triple-core 64-bit PowerPC-based design by IBM."


lolyes.


One of my friends said that IBM's PPC architecture makes up all 3 of the big current-gen game consoles. I don't think consoles are known for being powerful. They probably use PPC so the games will be harder to emulate or something. It would be easy to make cheap x86 consoles, or even just plug a few USB joysticks into a computer, but apparently people only dev for Nintendo, Sony, and Microsoft, who aren't building such unholy chimeras.


Edit: Somehow I missed a page. Too late to be on the internet.
 
One of my friends said that IBM's PPC architecture makes up all 3 of the big current-gen game consoles. I don't think consoles are known for being powerful. They probably use PPC so the games will be harder to emulate or something. It would be easy to make cheap x86 consoles, or even just plug a few USB joysticks into a computer, but apparently people only dev for Nintendo, Sony, and Microsoft, who aren't building such unholy chimeras.

During the design phase of these consoles, Intel was putting out P4s that would burn a hole in the plastic casing of the console, and AMD was nowhere near capable of delivering millions of Athlons on schedule. Intel had even hit the speed wall, with no solution in sight. Meanwhile, IBM was touting how their new PPC cores would smoke a P4, and how their manufacturing process would not hit the speed wall for another couple of GHz.


However, then Intel started to think straight again and upped the instructions per clock instead of just the clock. Now they have Core, and PPC is a mediocre CPU in comparison. Note that even a Core i7 doesn't run at as high a clock as the P4 did.
 
Interesting, indeed... I have no problem being corrected on said topics I touched on... I had not seen anywhere during my research of the XBox 360 that it was labeled a PPC machine. Of course, I also didn't know much about the other archs out there (MIPS, SPARC, etc.), so only knowing of PPC and x86, and NOT seeing it labeled as PPC, automatically led to the conclusion that it was x86. On that note, I'm surprised that original XBox emulation is still so bad, with it being an x86 system... if clock speeds mean anything, the Pandora should be able to handle it @ 733MHz, but x86->ARM could be an issue... I don't know enough about the ins and outs of all this to say for sure... :p I'm still learning. Of course, that won't stop me from voicing myself about what I "THINK" I know on the subject... :D It helps me figure out when I'm wrong :p
 
Interesting, indeed... I have no problem being corrected on said topics I touched on... I had not seen anywhere during my research of the XBox 360 that it was labeled a PPC machine.

http://en.wikipedia.org/wiki/Xbox_360


"CPU 3.2 GHz PowerPC Tri-Core Xenon"


it takes about 10 seconds to check your facts

The secondary reason is that PPC is capable of outperforming x86

If the x86 in question is a 386, yes.


x86 has been outperforming PPC for quite some time now; it's the reason Apple switched to x86. You could not get a PPC in a laptop that would perform decently without burning a hole through the casing or draining the batteries in 10 minutes.


Another problem with RISC processors & 64-bit today is that memory bandwidth is the main bottleneck; it's the reason that ARM created the Thumb instruction set.


Current processors can't fetch their instructions fast enough, nor can they fit enough in the cache to work efficiently.


x86 and ARM-Thumb have the advantage of smaller instructions, therefore fitting more inside the same-sized cache and reading less from memory for the same amount of work done.


RISC performed well when memory ran at anywhere from half the processor's speed to the full processor speed; now memory is more than 10 times slower.


If you want a PPC system to perform decently, you need an insanely large data bus and a large cache, which is very expensive.


Today's x86 processors are not even CISC processors anymore; they're specialized RISC cores interpreting the x86 instruction set with a minimalistic dynamic recompiler. All that complexity still pays off because of the size of the x86 instruction stream vs. an equivalent 32-bit RISC one, and 64-bit RISC is much worse.


It's not a question of processor design; today it's only about memory bandwidth & cache size.
 
PPC vs x86 has nothing to do with performance considerations here. They're instruction set architectures, not CPU micro-architectures. None of the PPCs in the current consoles represents anything close to a high-end design like IBM's current Power series, or the Mac G5 of old, or even the G4. It's like saying that Atom reflects high-end desktop performance because it's x86. In fact, the PPC cores in PS3 and XBox 360 are pretty similar to Atom in design (in-order, only dual issue, not very wide, relatively long pipeline), while the clock-bumped Gekko CPU in the Wii is more sophisticated but still slow and old (more like G3-level technology with better SIMD).


The reason PS3 and XBox 360 are using PPC at the center of their control logic isn't because x86 wasn't good enough; it's because they don't really need a lot of power here at all - instead the power requirements have shifted to the same place they always have for consoles: to graphics and (more recently) to vector coprocessors. For the bits that run the game loops and higher-level tasks, the concern is more about getting something cheap and simple to implement, and therefore we get simple PowerPC cores. It wouldn't have been much of a surprise if they were simple MIPS cores instead, although PowerPC is a nicer ISA than MIPS by far.

it takes about 10 seconds to check your facts

How many seconds does it take to second-guess everything you write? Sometimes you're just going to think you know something that you don't and are going to have to deal with being wrong about it. For instance, several things said in your post are wrong:

If the x86 in question is a 386, yes.


x86 has been outperforming PPC for quite some time now; it's the reason Apple switched to x86. You could not get a PPC in a laptop that would perform decently without burning a hole through the casing or draining the batteries in 10 minutes.

I'll ignore the ridiculous hyperbole behind "if it's a 386." Yes, Intel x86 was outperforming and providing better perf/Watt than the cores Apple was managing to get at the time, but IBM is still (to this day) producing high-end PPC that can easily give high-end x86 a good run; read up on:


http://en.wikipedia.org/wiki/POWER7


And note in the top500:


http://www.top500.org/list/2010/06/100


All of the "BlueGene" systems which are PPC based.

Another problem with RISC processors & 64-bit today is that memory bandwidth is the main bottleneck; it's the reason that ARM created the Thumb instruction set.
Current processors can't fetch their instructions fast enough, nor can they fit enough in the cache to work efficiently.


x86 and ARM-Thumb have the advantage of smaller instructions, therefore fitting more inside the same-sized cache and reading less from memory for the same amount of work done.

The main reason ARM created the Thumb instruction set is because ARM7 SoCs were being fitted with narrow 16-bit external interfaces, and a 32-bit-wide instruction set was killing performance with instructions fetched directly from external memory. With the move to 32-bit caches in ARM9, the weak Thumb ISA went all but unused. Thumb-2 continues to persist today for saving flash space in the embedded sector on Cortex-M series CPUs. In the Cortex-A series it again goes all but unused, and in benchmarks shows practically the same performance despite having a lower icache footprint.


icaches continue to grow in size, and meanwhile for large threaded loads a lot of threads eventually end up sharing the same instruction streams. The fact is that code density isn't a huge issue anymore, and also, I've yet to see a 64-bit ISA that features larger code than a 32-bit predecessor (they all still use 32-bit instructions, typically). Another consideration is that although x86 had reasonable code density in the early 80s, these days the ISA design is very poorly optimized for use cases that matter, like SIMD, conditional execution, 64-bit, extended address modes, etc so it has lost quite a lot of any advantage it had in this space.
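If you want to put a number on the density difference yourself, it's a quick experiment - a sketch, assuming you have an arm-none-eabi-gcc cross toolchain handy (the function is an arbitrary stand-in, compile whatever you like):

Code:
/* density.c - build the same source as ARM and as Thumb, then compare:
 *   arm-none-eabi-gcc -O2 -marm   -c density.c -o arm.o
 *   arm-none-eabi-gcc -O2 -mthumb -c density.c -o thumb.o
 *   arm-none-eabi-size arm.o thumb.o
 * The Thumb .text section usually comes out noticeably smaller, but as
 * noted above the runtime difference on a Cortex-A is typically tiny. */
int checksum(const unsigned char *buf, int len)
{
    int sum = 0;
    for (int i = 0; i < len; i++)
        sum = (sum << 1) ^ buf[i];  /* arbitrary work, just to emit code */
    return sum;
}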

Stephane Hockenhull said:
RISC performed well when memory ran at anywhere from half the processor's speed to the full processor speed; now memory is more than 10 times slower.
If you want a PPC system to perform decently, you need an insanely large data bus and a large cache, which is very expensive.

What needs very large caches to perform well are CPU designs with huge in-flight windows, see Pentium 4. Memory isn't more than 10 times slower, that's a misconception - in terms of bandwidth it's still hovering within a few times of CPU speed (when looking at 32-bit transactions per clock). The latency is way up there, but architectures do more and more to hide latency, including good hierarchical cache design and aggressive prefetch.
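To make the prefetch side of that concrete, here's a minimal sketch using GCC's __builtin_prefetch (the 16-element lookahead is an arbitrary assumption you'd tune to the cache line size and access pattern):

Code:
#include <stddef.h>

/* Sum an array while hinting the core to start fetching data a couple
 * of cache lines ahead of the element currently being read. */
long sum_with_prefetch(const long *a, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&a[i + 16], 0, 1);  /* read, low temporal locality */
        sum += a[i];
    }
    return sum;
}

On a linear scan like this the hardware prefetcher usually does the job by itself; explicit prefetching earns its keep on irregular patterns like pointer chasing.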


But basically you're saying that PPC is at an extreme disadvantage for having 32-bit instructions, right? Or because you think that it requires a lot more instructions to get the same sort of stuff done for being RISC? Because that too isn't really the case for usual compiler output.

Stephane Hockenhull said:
Today's x86 processors are not even CISC processors anymore; they're specialized RISC cores interpreting the x86 instruction set with a minimalistic dynamic recompiler. All that complexity still pays off because of the size of the x86 instruction stream vs. an equivalent 32-bit RISC one, and 64-bit RISC is much worse.

First of all, the only x86 designs that had "dynamic recompiler" elements were Pentium 4 with its trace cache and Nehalem with its loop buffers - an important part of being a dynarec is that the results of the recompilation are somehow persistent. Changing the representation temporarily in flight through the pipeline does not constitute recompilation, and probably most CPUs do this to some extent.


Second, whether or not a CPU is RISC or CISC has nothing to do with its internal representation of opcodes and is strictly defined by its ISA - I see this claim that "x86 is RISC now" pushed all the time, and this is not merely a semantic issue, because the nature of the ISA heavily limits your computational expression at the front-end (despite the belief that CISC is more expressive than RISC, actually more registers and 3-address arithmetic are more expressive, not to mention lots of other things that have made it into more modern ISAs) - although the disadvantages are less pressing in an OoO design.
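To make the 3-address point concrete, consider a simple non-destructive add; the assembly in the comments is the typical shape of compiler output, not a literal listing from any particular compiler:

Code:
/* c = a + b, with a and b still needed afterwards.
 *
 * ARM (3-address): the destination is a separate register,
 * so neither source is clobbered:
 *     add  r2, r0, r1
 *
 * 32-bit x86 (2-address): the destination doubles as a source,
 * so a still-live operand has to be copied out of the way first:
 *     mov  edx, eax
 *     add  edx, ecx
 */
int add_keep(int a, int b)
{
    return a + b;
}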


Third, you make it sound like the industry has stuck with x86 because of its code density, and has been willing to go through lots of hoops translating it internally because of this advantage. That of course is nonsense, the industry has stuck with x86 for legacy/backwards compatibility and that's it. Intel themselves have tried to move away from x86 in the past, and no one actually believes that code density outweighs the disadvantages of the ISA.


Again, 64-bit and 32-bit RISC almost always have the same instruction width. You use a little bit more dcache when storing 64-bit pointers instead of 32-bit ones (x86-64 is hit with this too), but it's not a huge issue.
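That pointer-size cost is easy to quantify; a quick sketch (sizes assume the usual ILP32 and LP64 ABIs):

Code:
#include <stdio.h>

/* A linked-list node: one pointer plus a small integer payload. */
struct node {
    struct node *next;  /* 4 bytes under ILP32, 8 under LP64 */
    int value;          /* 4 bytes either way */
};

int main(void)
{
    /* Typically 8 bytes per node on a 32-bit ABI vs 16 on a 64-bit one
     * (padding rounds the struct up to pointer alignment): double the
     * dcache footprint for the same list, with no icache cost at all. */
    printf("sizeof(struct node) = %zu\n", sizeof(struct node));
    return 0;
}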

Stephane Hockenhull said:
It's not a question of processor design; today it's only about memory bandwidth & cache size.

Ah, really? Let's have a case study:


Pentium-M: 1.6GHz, 400MHz FSB, 2x 32KB L1 cache, 512KB L2 cache (yes they exist, look for them), 64-bit external memory interface


Atom: 1.6GHz, 533MHz FSB, 24KB/32KB L1 cache, 512KB L2 cache, 64-bit external memory interface


Both have similar cache hierarchies, prefetching capabilities, memory interfaces.. Pentium-M has a little more L1 dcache, Atom is paired with faster RAM.


Which do you think outperforms the other by 2x in typical benchmarks? Pentium-M is much faster by virtue of being out of order and wider, ie processor design, not memory bandwidth and cache size.
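If you want to see the width/OoO effect isolated from memory and cache on your own machine, a crude sketch: both loops below do the same number of multiply-adds, but one is a single serial dependency chain while the other is split across four independent accumulators that a wide out-of-order core can overlap (the iteration count is arbitrary; unsigned arithmetic avoids signed-overflow issues):

Code:
#include <stdio.h>
#include <time.h>

#define N 200000000UL

/* Serial chain: every multiply-add needs the previous result first. */
static unsigned long dependent(unsigned long n)
{
    unsigned long x = 1;
    for (unsigned long i = 0; i < n; i++)
        x = x * 3 + 1;
    return x;
}

/* The same amount of work split into four independent chains. */
static unsigned long independent(unsigned long n)
{
    unsigned long a = 1, b = 1, c = 1, d = 1;
    for (unsigned long i = 0; i < n; i += 4) {
        a = a * 3 + 1;  b = b * 3 + 1;
        c = c * 3 + 1;  d = d * 3 + 1;
    }
    return a + b + c + d;
}

int main(void)
{
    clock_t t0 = clock();
    unsigned long r1 = dependent(N);
    clock_t t1 = clock();
    unsigned long r2 = independent(N);
    clock_t t2 = clock();
    printf("dependent:   %.2fs (%lu)\n", (double)(t1 - t0) / CLOCKS_PER_SEC, r1);
    printf("independent: %.2fs (%lu)\n", (double)(t2 - t1) / CLOCKS_PER_SEC, r2);
    return 0;
}

A wide OoO core like the Pentium-M runs the second loop several times faster than the first; a narrow in-order core gains far less - processor design, not memory.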
 
One other thing >_>

It would be easy to make cheap x86 consoles, or even just plug a few USB joysticks into a computer, but apparently people only dev for Nintendo, Sony, and Microsoft, who aren't building such unholy chimeras.

Microsoft tried making such a "cheap" x86/PC-hardware-based console with the XBox; the net result was ending up some $8 billion in the hole after having to all but give them away to remain competitive. Analysts are wondering exactly how many new XBoxes there will have to be before MS recovers its losses on the original one.


Of course, things have changed a little since then, sort of. When Bobcat comes out things will be especially different.
 
While interesting, ARM's moves need to be recognized in the context of the greater RISC industry. I think IBM will be behind this, as it continues their deny-oxygen strategy versus Intel, and ARM has a way to go before it intrudes on IBM territory.


The intrinsic advantage of RISC is you don't have all those extra transistors acting as an emulation layer like you have with x86 types. That means less power usage, in addition to more direct access, meaning you should be able to optimize the code better if you put in the effort to do so. Hence ARM is able to deny Intel market share in the mobile device market.


The intrinsic advantage of x86 types is you don't need the equivalent of qemu actively translating ISAs, or to recompile software, in order for existing software designed for the x86 ISA to run. Otherwise it's really as much a liability as anything else. With GlobalFoundries' new developments, Intel is going to have the RISC types chewing on it like never before, and AMD indirectly reaping the profits while shaking their hold on the general-use PC market with Fusion products.


Really though, for business this give-everybody-a-PC model is simply a waste of resources. Most of the time they should be using nothing more than a thin client: there's no need for anything more to execute the duties of their job, and it means fewer headaches for the network admin on top of the considerable money savings.

It would be easy to make cheap x86 consoles
You're proposing what? AMD Llano processors? Intel is about the farthest thing from cost-efficient possible.

I've been saying it for a few years now, if Intel don't move to ARM they are going to become irrelevant
Intel has EPIC, aka Itanium, if x86 goes the way of the dodo, which complicates that issue.
 
Of course Intel isn't going to move to ARM. They deliberately moved away from ARM when they sold XScale to Marvell. Now Marvell is making several millions on ARM sales we largely know nothing about.


Itanium has long since stopped being an option for much of anything; too many issues to be viable or relevant. The server market is also moving towards more, leaner cores, and away from the single ultra-behemoth core that Itanium is.


There's just way too little reason for x86 to go sour in the desktop/laptop sector purely on the grounds of being x86, the disadvantages it has are too small. Even for mobile the gap is narrowing.
 
/me future prediction:


TheBigBeigeBox will forever be an Intel machine with an MS sticker on the back. By 2015 only 1 in 4 people will have one of these; you only buy them if you know you need them (a utility). The IBM clone will reach its 100th birthday before dying out (never change a winning team).


Most people will do their thing on *good enough*/*mobile* machines (a fashion statement); ISA/OS compatibility will be such a mess that 90% of users only use web-hosted software.


An epic battle between ARM and low-power x86 is guaranteed. It's going to be a cold war where neither side will win or lose.


Microsoft will have a serious dilemma: they either need to break compatibility by supporting ARM on Windows, or bet everything on Windows Phone 7 and start again without a backlog of apps to support, keeping Windows as the preferred IBM-clone OS.
 
Itanium has long since stopped being an option for much of anything; too many issues to be viable or relevant. The server market is also moving towards more, leaner cores, and away from the single ultra-behemoth core that Itanium is.
You're trying to paint an overly nasty picture of a project that is still ongoing, making sales, and demonstrating high performance capability.

Seriously, from a market perspective I'm not sure how you can defend that as an issue with EPIC, as you claim trends without accounting for the underlying market realities driving them. It is true that, just like IBM business machines, it has a scarcity of programmers related to its uniqueness, which, combined with it not being the easiest thing to properly program for, has helped to marginalize it.


But there's more to that story, and what Intel was hoping to accomplish with that still ongoing project, no?

There's just way too little reason for x86 to go sour in the desktop/laptop sector purely on the grounds of being x86, the disadvantages it has are too small.
It's entirely a question of momentum, and with the eventual Diamond/Graphene revolution there will be another window. Intel invested a lot of resources into the EPIC ISA to try to get off x86 before; I wouldn't put it past them to make another go when shifting materials opens up the bandwidth so much. IBM of course is also looking at such materials, and they have no reason not to give PowerPC another round.

Further, you have the viral problem of increasingly powerful smartphones and cheap, long-battery-life ARM laptops, which you seem to seriously underestimate. People like their iPhones, and the iPad is a runaway success. Neither uses x86. Heck, with GF being backed by IBM, who will undoubtedly use their services and thus hug Intel's process shrinks, and Intel agitating Apple with this oust-Nvidia nonsense, I wouldn't count on their loyalty to Intel.

Even for mobile the gap is narrowing.
I disagree. Thanks to Apple and otherwise, ARM's brand of the RISC ISA is gaining momentum in that sector. Customers are becoming accustomed to the related App Stores, with closed source programs compiled for ARM's brand of ISA. Changing to x86 in that sector is about as plausible as RISC taking over PC applications overnight and will only increasingly be so.
 
You're trying to paint an overly nasty picture of a project that is still ongoing, making sales, and demonstrating high performance capability.

Seriously, from a market perspective I'm not sure how you can defend that as an issue with EPIC, as you claim trends without accounting for the underlying market realities driving them. It is true that, just like IBM business machines, it has a scarcity of programmers related to its uniqueness, which, combined with it not being the easiest thing to properly program for, has helped to marginalize it.


But there's more to that story, and what Intel was hoping to accomplish with that still ongoing project, no?

I haven't claimed anything about the reasons why Itanium has been relatively unsuccessful, just that it has been relatively unsuccessful and will continue to be. The claim I did make is that the market will be moving towards more, simpler cores rather than behemoth cores like Itanium, especially when being so large comes at the vast expense of clock speed - no processor in the world will have enough IPC to be especially competitive per core at 1.73 GHz, which is where Itanium tops out in 2010. This is also "only" a 4-core part at 185W TDP... this isn't painting a good picture at all. Possibly this is all driven by it being a 65nm part, but for a company that has always been at the forefront of aggressive process innovation - and has transitioned many of its x86 products to 32nm in 2010 - being at 65nm in 2010 is pretty damning. And it says a lot about what Intel's current position with Itanium is.


Intel not having axed it doesn't negate anything I said. We'll see how Poulson turns out in 2012, but we can't say much about something that far off.


I'm not really sure what your interest in Itanium is, but any expectation that it could replace a dead x86 is incredibly naive.... how exactly do you transition an architecture whose existence depends on being sold in machines for hundreds of thousands of dollars to a commodity desktop item?


Check out this quote from Intel:


'“For pure performance, you might go with Xeon processors, but the mission critical customers Itanium targets are most interested in reliability, serviceability and availability features across the operating system, processors and other aspects of their enterprise computing infrastructure. Processor performance is only one aspect of what interests them,” said Mr. Ward.'


Yeah, that really sounds like an architecture they can transition into the desktop. They're selling a software/hardware infrastructure platform and some generated sense of security here, that's pretty much it. The actual Itanium architecture part of that picture is astoundingly small.

It's entirely a question of momentum, and with the eventual Diamond/Graphene revolution there will be another window. Intel invested a lot of resources into the EPIC ISA to try to get off x86 before; I wouldn't put it past them to make another go when shifting materials opens up the bandwidth so much. IBM of course is also looking at such materials, and they have no reason not to give PowerPC another round.

Heh, you're making arguments based on revolutions that you say will happen. Right. I think that for process breakthroughs Itanium will be the last to benefit, when everything else has been on top of them much more diligently.


Investing a lot of resources into something doesn't mean that it was a good idea. VLIW is nice for a lot of applications (see C6x) but it isn't a perfect fit for a lot of problems - and trying to maintain backwards compatibility between arch revisions is suicide (and holds back development).

Further, you have the viral problem of increasingly powerful smartphones and cheap, long-battery-life ARM laptops, which you seem to seriously underestimate. People like their iPhones, and the iPad is a runaway success. Neither uses x86. Heck, with GF being backed by IBM, who will undoubtedly use their services and thus hug Intel's process shrinks, and Intel agitating Apple with this oust-Nvidia nonsense, I wouldn't count on their loyalty to Intel.

People like their TVs and toasters too, that doesn't mean they're threatening desktops and laptops. I don't know why you need iOS crap to see that ARM is popular in mobile, ARM has always been popular in mobile. All throughout this thread I've said that ARM will continue to dominate mobile, will rule tablets, may cut into netbooks (where Windows and by extension x86 has proven important, btw), and may cut into servers. You WON'T see people impressed by the battery life it saves on laptops (that is, not netbooks - things with real > 13" screens and relatively normal-sized hard drives and graphical capabilities that aren't relatively atrocious and what have you) because a watt here and there DOESN'T save you in battery life when a huge screen is guzzling several times that. That, and people will still want competitive single-core performance in laptops, and people will still want to run the same apps on both. People will continue to use laptops as desktop replacements. This doesn't mean netbooks won't continue to eat into laptop sales, but let's get our terminology straight.


Intel is agitating someone by how they're treating nVidia? Intel is ousting nVidia? I think nVidia is ousting nVidia. Their days are looking numbered. Soon enough it won't matter how anyone treats them. Maybe if Tegra 2 had managed to get off the ground by now I'd be saying something else, but it's starting to look bleak, and soon it'll be too late.

I disagree. Thanks to Apple and otherwise, ARM's brand of the RISC ISA is gaining momentum in that sector. Customers are becoming accustomed to the related App Stores, with closed source programs compiled for ARM's brand of ISA. Changing to x86 in that sector is about as plausible as RISC taking over PC applications overnight and will only increasingly be so.

This has nothing to do with what I said, or at the very least nothing to do with what I meant which you may have misinterpreted: the power savings advantage that ARM enjoys over x86 is narrowing, and diminishes as you hit netbooks. Go look at the TDP for 800MHz Moorestown if you don't believe me. Otherwise, please read everything else I've written, so you don't waste any more time arguing with me on something I don't even think.


BTW, I think it's pretty silly in this day and age to actually refer to things as "RISC" and "CISC." A few differentiating features persist but for different RISC platforms these days there are an awful lot of dissimilarities.
 
You're trying to paint an overly nasty picture of a project that is still ongoing, making sales, and demonstrating high performance capability.


Seriously, from a market perspective I'm not sure how you can defend that as an issue with EPIC, as you claim trends without accounting for the underlying market realities driving them. It is true that, just like IBM business machines, it has a scarcity of programmers related to its uniqueness, which, combined with it not being the easiest thing to properly program for, has helped to marginalize it.


But there's more to that story, and what Intel was hoping to accomplish with that still ongoing project, no?
Pff, have you been in a server room lately?


Itanium is a wonderful CPU.... on paper :p Its last generation of CPUs didn't give me an edge over the previous generation performance-wise on my databases (2 jobs ago)...


For my current job, I'm architecting the migration to Oracle 11g for a pretty large organisation. I'm recommending a switch to Intel clusters: the infrastructure costs are divided by 3, after all....


Last month, IBM put out a new server line which is about 7 times cheaper than anything else they sell in this series; still not cheap enough to beat x86 CPU cost....


At my previous job, we migrated the whole fleet from Tru64 servers to.... Intel


Guess what I'll be doing in my next contract??? Migrating from HP-UX to Linux....


Every non-x86 CPU is getting out of the server rooms: damn too expensive to run. (CPU price is not the only cost factor; licences are too: SUSE on z/VM, $12,000 per CPU; SUSE on x86, $475 per server...)
 
I haven't claimed anything about the reasons why Itanium has been relatively unsuccessful


Exactly, I criticized you for doing so.


The claim I did make is that the market will be moving towards more, simpler cores rather than behemoth cores like Itanium, especially when being so large comes at the vast expense of clock speed - no processor in the world will have enough IPC to be especially competitive per core at 1.73 GHz, which is where Itanium tops out in 2010. This is also "only" a 4-core part at 185W TDP... this isn't painting a good picture at all. Possibly this is all driven by it being a 65nm part, but for a company that has always been at the forefront of aggressive process innovation - and has transitioned many of its x86 products to 32nm in 2010 - being at 65nm in 2010 is pretty damning. And it says a lot about what Intel's current position with Itanium is.


And so your position is that of Taleb's Turkey then? It certainly seems no more sophisticated given how you present it.


The point raised was a theoretical, given existing speculation in this thread of an x86 phase-out. The principle expressed was that Intel just has x86; this is false even if one does not humor simply ripping the x86 front-end parts out and restructuring things. I love how you're trying to turn it into some kind of sycophant advocacy with no basis beyond your apparent need to attack others.


I'm not really sure what your interest in Itanium is, but any expectation that it could replace a dead x86 is incredibly naive.... how exactly do you transition an architecture whose existence depends on being sold in machines for hundreds of thousands of dollars to a commodity desktop item?


The reality is that the original intent by Intel/HP was that it would do so. Hence the whole business with making a version of Windows for it and all that. Not to mention server parts are already transitioned to desktop equivalents, so I'm not sure why you think it's a radical concept to do so.


Heh, you're making arguments based on revolutions that you say will happen.


I say? The current diamond advocate is a joint Japanese and British endeavor, which will be aided by improving synthetic diamond technology. The biggest point in its favor is diamond's high strength and supreme thermal conductivity. IBM, Intel, and DARPA are all looking at graphene quite seriously.


And again, emphasizing a theoretical is treated as if it were an assertion of fact by you.


People like their TVs and toasters too, that doesn't mean they're threatening desktops and laptops.


People will continue to use laptops as desktop replacements. This doesn't mean netbooks won't continue to eat into laptop sales


I don't know why you need iOS crap to see that ARM is popular in mobile, ARM has always been popular in mobile. All throughout this thread I've said that ARM will continue to dominate mobile, will rule tablets


There's just way too little reason for x86 to go sour in the desktop/laptop sector purely on the grounds of being x86, the disadvantages it has are too small. Even for mobile the gap is narrowing.


You really don't seem to be able to make up your mind about where you're going with this. Frankly, your defense seems to simply be an intellectual wall to seeing cascade events, because you prefer the Taleb's Turkey approach.


(where Windows and by extension x86 has proven important, btw)


What's the iPad but a netbook in tablet form? And if your pretentious attitude were so defensible, why are major manufacturers like Toshiba bringing smartbooks to the market?


What you mean to say is where Linux flopped, which shouldn't be a great surprise. Unfamiliar stuff, with a lack of proper advertising and a failure to emphasize that OpenOffice can save in MS Office file formats, leading to a lack of customer support - is that surprising? Particularly when resume submission to job sites requires standard MS Office file formats?


because a watt here and there DOESN'T save you in battery life when a huge screen is guzzling several times that.


Really now? A modern LED-backlit desktop screen of 21.5" consumes 22 Watts if not taking advantage of power-saving features. A laptop processor's TDP tends to be north of that instead of south, last I checked, and LED technology is still being improved.


Intel is agitating someone by how they're treating nVidia? Intel is ousting nVidia?


Really now? I bring up Apple, and instead of acknowledging that, your response is this pointless and condescending attempt at begging the question. Yes, Intel with their recent CPUs modified them to prevent nVidia chipsets from working with them. AMD has already taken over the graphics contract as a result. Apple doesn't like other people screwing with their monopoly game, and as a result, as I said, I do not see them being loyal to Intel.


Maybe if Tegra 2 had managed to get off the ground by now I'd be saying something else, but it's starting to look bleak, and soon it'll be too late.


You are a vindictive one, aren't you? I'd think you'd be all over them, given you seem to believe lots and lots of super small cores is the way of the future, aka their philosophy.


This has nothing to do with what I said, or at the very least nothing to do with what I meant which you may have misinterpreted: the power savings advantage that ARM enjoys over x86 is narrowing, and diminishes as you hit netbooks. Go look at the TDP for 800MHz Moorestown if you don't believe me.


Keep backpedaling. Whoops, you tripped over an already-raised argument: it's at an intrinsic disadvantage, all things being equal. You don't refer to the "mobile" market if you have nothing to say in relation to it.


BTW, I think it's pretty silly in this day and age to actually refer to things as "RISC" and "CISC."


Funny how you're the only one doing so. You will find no use of the term CISC in my posts beyond this very sentence.
 
For my current job, I'm architecting the migration to Oracle 11g for a pretty large organisation. I'm recommending a switch to Intel clusters: the infrastructure costs are divided by 3, after all....
Last month, IBM put out a new server line which is about 7 times cheaper than anything else they sell in this series; still not cheap enough to beat x86 CPU cost....
Why Xeons instead of Opterons given your emphasis on price?

And really you seem to be simply advocating the move to server farms, which do have their limits.
 
Someone seriously needs to fix the double post -> exceed quote block -> quote bug. :/

Exactly, I criticized you for doing so.

I don't think it really matters why they're unsuccessful, in determining their market viability and relevance today (which is what I was commenting towards); you could make the claim that their design is fundamentally great and external market factors have condemned them (I wouldn't make those claims), but even if those factors did go away there is undeniable damage that has been done.

And so your position is that of Taleb's Turkey then? It certainly seems no more sophisticated given how you present it.

Don't drop an unreferenced analogy and tell me "I'm no better", make an actual statement regarding me. I'm not going to look up a reference just to understand an insult you're levying on me.

The point raised was a theoretical, given existing speculation in this thread of an x86 phase-out. The principle expressed was that Intel just has x86; this is false even if one does not humor simply ripping the x86 front-end parts out and restructuring things. I love how you're trying to turn it into some kind of sycophant advocacy with no basis beyond your apparent need to attack others.

This is not attacking you, this is attacking your faith in Itanium - for what it's worth, where I stand, your posts surely seem at least as hostile.


No one claimed Intel only has x86. It's just that any reasons the market would have for turning on x86 would surely apply several fold to Itanium.

The reality is that the original intent by Intel/HP was that it would do so. Hence the whole business with making a version of Windows for it and all that. Not to mention server parts are already transitioned to desktop equivalents, so I'm not sure why you think it's a radical concept to do so.

The original intent of Intel and HP is over 15 years old and quite difficult to reconcile with current reality (or reality a decade ago, at that). It seems to me that things have been working in reverse, with desktop parts transitioning to the server space rather than the other way around. AMD either refining existing desktop designs for the server market or choosing to deploy there first (ie, Bulldozer) a design which they otherwise have every intention of pushing in the desktop space is a very, very different thing from what you're claiming. Wake me when Power7 and SPARC start showing up on the desktop too, instead of having their market share eaten more and more by ye olde x86.

I say? The current diamond advocate is a joint Japanese and British endeavor, which will be aided by improving synthetic diamond technology. The biggest point in its favor is diamond's high strength and supreme thermal conductivity. IBM, Intel, and DARPA are all looking at graphene quite seriously.

Of course the technology will happen; whether the "revolution" comes about is another question, and so are any implications it actually has for your arguments...

You really don't seem to be able to make up your mind about where you're going with this. Frankly, your defense seems to simply be an intellectual wall to seeing cascade events, because you prefer the Taleb's Turkey approach.

More of that from you. What exactly can't I make up my mind on here? Because I don't dispute ARM's dominance in the mobile space but do dispute a theoretical takedown of the desktop space?? You see a contradiction in this?


(someone else post something now so I can post the rest without it merging and breaking :( )
 
Most people will do their thing on *good enough*/*mobile* machines (a fashion statement); ISA/OS compatibility will be such a mess that 90% of users only use web-hosted software.
An interesting future prediction, but I'm honestly not sure on this part. :p Am I correct in reading the "web-hosted software" part as referring to word processors and other such stuff (such as other business software and the like, perhaps?) which would usually be on a person's machine instead?


If so, I don't think most people really trust this sort of thing - at least in my experience. I can't account for anyone else, but I personally know nobody who does (and I certainly wouldn't myself).


Is there really a significant proportion of the population who would trust their information to web-based software (of the sort that would otherwise be on one's computer), that would make the "90%" part of the speculation possible? :blink:
 
Thanks Prometheus XD


Part two to the response to Jebe:

What's the iPad but a netbook in tablet form? And if your pretentious attitude were so defensible, why are major manufacturers like Toshiba bringing smartbooks to the market?

Now you're calling me pretentious... How many personal insults do you need to level against me to prove I'm the one making a personal argument?


Netbooks aren't tablets. For one thing, tablets cost twice as much. People have a very different idea of how you use these two devices and what you use on them. So no, I don't agree that iPad is a netbook, I think form factor is everything here.


Of course there will be ARM netbooks, I certainly wouldn't deny this, least of all not after I said repeatedly that ARM does have a good chance of taking netbook share. But major manufacturers bringing smartbooks doesn't mean that that market has been handed to ARM - far more major manufacturers are sticking to x86 (and Windows) on the netbook front. And with more attractive options like Bobcat on the horizon much nearer than Eagle I don't see ARM challenging this.

What you mean to say is where Linux flopped, which shouldn't be a great surprise. Unfamiliar stuff, with a lack of proper advertising and a failure to emphasize that OpenOffice can save in MS Office file formats, leading to a lack of customer support - is that surprising? Particularly when resume submission to job sites requires standard MS Office file formats?

Yes, what I meant to say is Linux flopped on the netbook space. Likewise, Android would flop on the netbook space for all the same reasons. ARM doesn't run Windows and Windows applications.

Really now? A modern LED-backlit desktop screen of 21.5" consumes 22 Watts if not taking advantage of power-saving features. A laptop processor's TDP tends to be north of that instead of south, last I checked, and LED technology is still being improved.

And Moorestown is available in TDP < 1W, Bobcat 9W (that's with pretty decent GPU activity going on)... we were talking about x86 vs ARM as a factor in the laptop space, not "low power x86" vs "high power x86."

Really now? I bring up Apple, and instead of acknowledging that, your response is this pointless and condescending attempt at begging the question. Yes, Intel with their recent CPUs modified them to prevent nVidia chipsets from working with them. AMD has already taken over the graphics contract as a result. Apple doesn't like other people screwing with their monopoly game, and as a result, as I said, I do not see them being loyal to Intel.

I understand what Intel has levied against nVidia, and so long as it doesn't actually affect Apple directly I don't see them caring. I don't see them becoming paranoid or concerned over like treatment, and I definitely don't see them acting in nVidia's interests. Whether or not they do move away from Intel is neither here or there if they don't move away from x86 (the topic at hand), and I don't see that as an option for them.

You are a vindictive one, aren't you? I'd think you'd be all over them, given you seem to believe lots and lots of super small cores is the way of the future, aka their philosophy.

First of all, I don't think "lots of super small cores is the way of the future", I think that the direction towards smaller/more is what we'll be seeing in the server market. Second, what I think of Tegra 2, what I think of nVidia's ability to deploy the product, and what I think of vendor support of the product are three separate matters. Please don't confuse them.


Vindictive, against nVidia. Honestly. I'm losing track of all the things you're calling me.

Keep backpedaling. Whoops, you tripped over an already-raised argument: it's at an intrinsic disadvantage, all things being equal. You don't refer to the "mobile" market if you have nothing to say in relation to it.

Uh huh.. I said diminishes "as you hit netbooks." The expected battery capacities for this market are different from mobile space, and yes I differentiate the two - FURTHERMORE, my argument against x86 succeeding in the mobile space (which I haven't actually spoken that much to) has to do with more than aggregate power consumption of Atom vs ARM, or even the potential perf/Watt; ARM has a lot of the legacy/compatibility/familiarity advantages on that front that Intel would like to think x86 does.

Funny how you're the only one doing so. You will find no use of the term CISC in my posts beyond this very sentence.

You've used the term "RISC" several times in contrast to x86, so you're clearly referring to something RISC vs something non-RISC. Whether or not you actually formally call that "CISC" is only really of semantic interest. You're defining RISC as a unified concept counter to x86, and I'm refuting that.
 