@exophase
When he said "(thanks AMD)", I think he was being sarcastic about AMD giving x86 an extended lease on life.
:mellow:
Well, no!
AMD pushed so hard with their K6 and K7, plus x86_64, that Intel was forced to respond: the P4 failure, the retreat back to the P3 lineage for the Core architecture, etc., etc.
All good for consumers, thanks to AMD.
WinCE is a joke.
WP7? Where are the WP7 users? I've _never_ seen one in real life.
Wow, you seem to ignore Wintel history: they made the market. They forced users to buy their products, OS and office suite. Being a best seller, especially in such a dictatorial fashion, doesn't mean the product is better. In fact, Microsoft's OSes are probably the worst ever created.
AMD made today's PC. If Intel were alone, your next PC would be a P4.
ARM kicks butt, whether it's 32, 64, or 128 bits ^^, but we'll see soon.
Again, we'll see soon.
Err, wrong statement: PC sales are doing well. As consumers are forced to buy W7 with a PC... W7 sales are good B)
Making a complex OS doesn't mean making a good OS. I don't care about a so-called complex OS.
And think about this: porting Windows to another architecture... Okay, in ten years, maybe... But in ten years, nobody will remember Microsoft, just as with DEC now.
Linux has supported ARM for a very long time: it's no contest. I don't need to argue; it's on autopilot.
Where the heck did you read that I hate Kinect?
Almost everyone but you knows WP7 is a failure... Even Microsoft knows it...
Again, where did you read that I judge merit by stock activity?
Microsoft is sinking: "Get the facts" B)
So, do ARM Cortex-A15 and ARM Cortex-M4 have the same exact instruction set? Or would programs have to either target one or the other? (Or both, I guess.)
It'd be nice if the Linux scheduler could bounce programs between CPU cores based on how much time they wanted. This hypothetical future Pandora could then start programs on the low-power M4s and only transfer them to the high-power cores when they became CPU-bound. Obviously that won't work if the cores have different instruction sets.
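On a system where the cores do all share one instruction set, that policy can even be approximated from userspace with CPU affinity calls rather than scheduler changes. A minimal sketch, assuming Linux, a made-up core layout (cores 0-1 slow, 2-3 fast), and permission to retune the target process:

```python
import os
import time

SLOW_CORES = {0, 1}  # assumed low-power cores (hypothetical layout)
FAST_CORES = {2, 3}  # assumed high-performance cores
CLK_TCK = os.sysconf("SC_CLK_TCK")  # clock ticks per second

def cpu_seconds(pid):
    """Total user+system CPU time consumed by pid, in seconds."""
    with open(f"/proc/{pid}/stat") as f:
        # Fields after the ")" that closes the command name:
        # state ppid ... utime (index 11) stime (index 12)
        fields = f.read().rsplit(")", 1)[1].split()
    return (int(fields[11]) + int(fields[12])) / CLK_TCK

def supervise(pid, threshold=0.9, interval=1.0):
    """Pin pid to the slow cores, then promote it to the fast cores
    once it uses more than `threshold` of a core in one interval."""
    os.sched_setaffinity(pid, SLOW_CORES)
    last = cpu_seconds(pid)
    while True:
        time.sleep(interval)
        now = cpu_seconds(pid)
        if (now - last) / interval > threshold:  # looks CPU-bound
            os.sched_setaffinity(pid, FAST_CORES)
            return
        last = now
```

A real version would also demote tasks that go idle again, but the core idea is just periodic sampling plus sched_setaffinity.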
"Wow, you seem to ignore Wintel history: they made the market. [...]"

Microsoft's operating systems have routinely been more secure and capable than the Macintosh's, if prone to memory issues. That the combination of Linus's Law, corporate investment, and OSS Linux users modifying things with GNU to make them work the way they want has resulted in the dominant supercomputer/server/embedded OS is a statement on the strengths of that approach more than on Microsoft's failure.
"Yeah, right, it's AMD's CPUs - which as of now are vastly inferior in the high end - that have forced Intel to innovate, so we should be thanking AMD and not Intel. And yet, even though AMD has not been competitive since Core 2 came out, we still saw Nehalem and Sandy Bridge, we'll still see Ivy Bridge, and we'll still see node shrinks every ~2 years, well ahead of everyone else in the industry... but yeah, you keep believing that AMD is the only reason Intel left P4. Here's a thought: if Intel stops making better processors, they can't sell the end user better processors. It's really amazing that anyone can be so arbitrarily supportive of AMD and so arbitrarily against Intel when both companies are making x86 processors."

Vastly inferior?
An i3-530 dual core with Smart Cache rates about 3100 in 3DMark06, while an Athlon II X2 260 under my testing gets around 2900 with the reference clock moved from 200 to 225 MHz, and thus 3.6 GHz. These are the relevant parts, as most things don't use more than two cores, unlike the synthetic benchmarks. An i7-920 gets around 5000, making it inferior per core given how 3DMark06 works, which isn't unexpected. In general we're talking about a roughly 10% difference in synthetic benchmarks for twice the cost or more for the Intel option. That's with better silicon thanks to SOI and high-k metal gate, a process advantage, and the gimmick that is Intel's SMT technology with its associated 10-15% performance gain in the rare things which can take advantage of under-strength fake cores instead of telling it to F-off. Given that the better silicon and process should allow things to be physically closer and thus faster, it's actually rather uninspiring.
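(A quick sanity check on those numbers; the x16 multiplier below is an inference, since an Athlon II X2 260 runs 3.2 GHz stock off a 200 MHz reference clock:)

```python
# Sanity-checking the figures above. The x16 CPU multiplier is an
# assumption: 3.2 GHz stock = 200 MHz x 16, so 225 MHz gives 3.6 GHz.
ref_clock_mhz, multiplier = 225, 16
print(f"Overclock: {ref_clock_mhz * multiplier / 1000:.1f} GHz")  # 3.6 GHz

i3_530, x2_260 = 3100, 2900  # 3DMark06 CPU scores quoted above
print(f"Score gap: {(i3_530 - x2_260) / x2_260:.1%}")  # ~6.9%, "roughly 10%"
```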
If you're talking supercomputers, Tianhe-1A is up front, but most of its FLOPS come from nVidia Tesla GPGPU accelerators, and it's only doing 2.5 petaFLOPS to the Opteron-driven Jaguar's 1.75, which should pass Tianhe-1A's current rating in its next changeover.
If we're talking in general, AMD has depopulated their clock circuitry (reducing waste heat dramatically and thus allowing higher clocks), integrated power gating, and switched to SOI high-k metal gate silicon, starting with Llano and Bulldozer in April, when the revised Intel Sandy Bridge chipsets are projected to hit the market.
If the complete overhaul, with its new approach to handling multiple threads and its integration of accelerators, works out as planned, Intel is in no better position to respond than they were when they lost the gigahertz or 64-bit race. A complete overhaul early in the multicore era has helped to carry them, but the margin it bought them is slim now, and it isn't going to help them against Bobcat and Llano in the -book form factors that represent a surprisingly significant part of their sales.
I've thought that for a while too: when a BeagleBoard can do all the basic computing tasks most users would want with a few watts of power, and an Atom uses multiple times that while being only slightly faster, something is up.
....
This is why the Pandora2 needs a nice digital output; I'd use it as my main PC at home if it could output 1080p (the res of my 24" monitor).
"3DMark06 isn't a CPU benchmark, and your comparison is between two low end CPUs. This isn't a matter of perf/price, where you could probably construe an argument (i.e., how AMD has survived); it's a question of who has the more efficient architecture."

3DMark06 is considered a CPU benchmark last I checked, particularly as almost every reputable review site uses it that way. The relevant tests render a scene entirely on the CPU in a way that realizes the full benefit of multicore, and the suite reports a specific CPU score independent of the others, which is what I'm using. Hence four digits instead of five.

Your objection that the i3 is low end is nonsensical, as i3s have Smart Cache, Hyper-Threading, and all the other performance features, and rate higher per core than their larger brethren.

They're limited in being dual core, not in feature set. That gives the lie to your claim of wanting to discuss architecture, as architecture has nothing to do with core count. Likewise, SOI high-k metal gates differentiated the processes at 45nm, and their various benefits should allow circuits to sit closer together; that has nothing to do with micro-architecture and everything to do with silicon processing.

That you don't recognize the dual-versus-quad issue and the associated lag in programmer acceptance doesn't help, particularly when you beg the question by treating it as some insane novel concept instead of a well established area of discussion independent of manufacturer.

"'Most things not using more than 2 cores' is itself a massive strike against AMD, because core count (6 vs 4) is the only advantage they have in the desktop space. In terms of IPC they're typically way behind, and they don't have some kind of peak frequency advantage like P4 did. Single threaded performance suffers greatly. It's not usually 10% per clock, it's closer to 30+%, and in real world situations more than benchmarks (indeed, you were the one who used a benchmark). By the way, IPC has nothing to do with process advantage in this case... Intel had much better IPC when both were still on 45nm."

So SOI high-k now equals bulk silicon for current leakage and the related issues affecting how close together circuits can be? You're being a shill to fail to acknowledge that and to act as if 45nm was equivalent silicon. Your conclusion also deviates from even Anandtech, who tend to attribute Intel's single-thread advantage more to Turbo than to anything else.

"Hyper-Threading isn't a gimmick - it often gives quite a bit more than the 10-15% you claim (particularly on Atom, but that's another ball of wax) - but even if that were the extent of its improvements, it's well worth the transistor real estate, which is fairly minimal. I don't know why you're calling them 'fake cores', as Intel has never called them cores."

Which is why a lot of end users turn it off, right? Given that your higher estimates for Hyper-Threading correspond to supposedly higher performance numbers in general, I think we can leave it at that.

"No, I'm not talking about heavily throughput oriented tasks; that's a very different market. I was talking about Intel vs AMD in the desktop space and who is forcing whom to innovate, nothing else."

In which case you're nothing but a shill, as you've demonstrated no innovation by Intel, nor even why they're the better choice for most applications relative to the investment, as any reasonable analysis should always include. Particularly given Intel's love of socket demarcation and high prices, and hence higher system upgrade costs. Tools are tools; status-symbol bragging rights have little place here, and the more expensive brand would always win on that anyway, since quality is not a factor there. This isn't the 1990s, when CPUs were so limited that programmers forced people to upgrade to play new games. Indeed, Crytek's new CryEngine 3 actually has lower system requirements than the original Crysis engine from 2007.

Innovation is the development of new ideas, and AMD, despite being an ant next to Intel, has remained the x86 innovator. A multicore design overhaul in 2006 that finally gave Intel advocates something to point at to justify the premium (and, far too often, fewer motherboard features) is not innovation. Accelerated Processing Units, which take a page from the development of personal supercomputers to bring more power to the home user, and another approach to the multithreading problem beyond AMD's SMP or IBM's SMT, are. Acting like Bulldozer is just SMP, as you are, is beyond daft.
"I just can't see Microsoft coming back from this. Their phones have been huge failures. The man in the street is looking at iPads or BlackBerrys."

I don't see Microsoft ceasing to exist, even after the current move by many world governments to Linux and OSS, although they are probably going to shrink quite a bit. They became what they are in large part because they were good at iterative improvement against an established competitor, à la their taking of WordPerfect's crown, and the guy who managed the Office 2007 and Windows 7 projects clearly has the makings to help them keep it up. There's also the lack of a unified API in the Linux world, the resistance to open source, and the associated kernel-related driver issues.

"When OMAP4/5 (etc.) laptops start to hit with 10-20 hour battery time the tide will turn even harder. And I highly suspect Apple will lead the way with those laptops; I bet more people ask if it can run Angry Birds than ask if it can run Windows."

Based on the reception of the Bobcat APUs, I think the demand is ultimately more for a lightweight device with reasonable power than for supreme battery life. Apple is kind of already leading the way on that with their tablets, and Intel fussing with nVidia on the chipset side probably isn't helping matters.
"Okay, I didn't know you were using the CPU score."

In which case you don't know shit about the benchmark. A console in the first rating is 6-8k; my inexpensive box is 13k.

"Injunction..? You should probably look that word up. If the lowest end of Intel's mainstream desktop x86 line rates higher than the higher end then there's a problem."

You really are an ignoramus about the desktop space, aren't you? The TDP of dual cores has never been half that of quad cores, hence the higher performance per core. It's not exactly complicated, and it's mentioned in any reasonable assessment of the two.

"And yet everything I said about the K8 descendants falling behind Intel over the last few years stands. Athlon64 and Athlon X2 were several years ago now. Are you really that offended that Intel managed to move ahead of AMD? Something that AMD themselves are pretty blatantly admitting by ditching the architecture, the same way Intel blatantly admitted that Netburst was a dead end."

Projecting much? You're the one making a supreme issue of it, going after people, and derailing a thread so you can stand on a soapbox and try to act big while having nothing to back you up. And really, what's with this tirade about K8 and its descendants as if they were a mistake? They set the current standard; this whole ditching-them dialogue reads like an Intel advocate's tantrum of pent-up frustration over Intel's ineffectual response prior to the Core overhaul. I suppose the Pentium Pro was a mistake because it was eventually replaced?

"APUs are great, but they have nothing to do with CPU innovation, or at least not the type I'm talking about (and Intel released the first real desktop class IGP integrated on a CPU anyway... it may well have been AMD's idea, but they're taking too long to get it to market, or do you deny that Llano is taking longer to come out than it was supposed to?). Just the same, this wasn't supposed to be about Bulldozer, Brazos, or even Llano; it's about Intel's ability to keep making better CPUs during an era when AMD stopped being the kind of threat they once were."

A certain lawsuit had a not-so-minor influence on what window they shot for, but by all means keep telling yourself that had no influence and that the evil usurper AMD could never possibly compete with Intel again. When you're down to disputing a definition because you wanted to use a buzzword devoid of its meaning, you clearly don't have much. Innovation is deliberately distinguished from iterative development; mere iterative overhauls never have and never will qualify as innovative.

"And if you think AMD was making Intel anxious with their desktop offerings over the last few years then I don't really know what to tell you."

Again with the pointless emotional nonsense.

"Likewise, AMD lost most of their server market for a reason."

The 2010 IDC report I assume you're referencing (given that the early 2010 reports say the opposite) doesn't indicate losing "most of their server share." All it indicates is that their sales of new servers were slightly down for a quarter, pending the release of new products. Given how you characterize, and fail to reference, this: thank you for lying.

"And I never once said Bulldozer is 'just SMT' or anything even minutely resembling that, and I am at a complete loss as to how you've interpreted such a thing. I've barely said anything about Bulldozer at all, in fact, aside from that it's more throughput/core optimized than latency optimized. Whose posts are you reading, dude?"

So now SMP, aka symmetric multiprocessing, is SMT, aka simultaneous multithreading? You're all mixed up, Jack.
"When OMAP4/5 (etc.) laptops start to hit with 10-20 hour battery time the tide will turn even harder."

OMAP5 laptops with 15+ hour battery life won't happen, at least not in something the market will call a "laptop." The larger you scale in size, the less of an advantage the SoC has, and the more your screen, SSD, and fast, wide multi-channel memory take up of the overall power budget. They'll still be differentiated, but not as much.
Right now that is 35 watt-hours, but it's awfully gap-filled. If they could just make the entire base a battery, you could probably pull off 70 watt-hours, and definitely be in the 12-15 hour range on use time.
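The arithmetic behind that estimate is simple: runtime is capacity in watt-hours divided by average draw in watts, so the 12-15 hour claim implies roughly a 5 W average system draw (derived from the post's own numbers, not a measurement):

```python
capacity_wh = 70  # proposed all-battery base, from the estimate above
for hours in (12, 15):
    # runtime (h) = capacity (Wh) / draw (W), so draw = capacity / runtime
    print(f"{hours} h of use implies {capacity_wh / hours:.1f} W average draw")
```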
I'm not technically inclined enough to address specifics, but I've seen the benchmarks, and I can offer a consumer's point of view. I think it's fairly incontrovertible that Intel's CPUs are superior to AMD's in terms of performance per clock, at least at the high end. I'm not sure how AMD lost the massive advantage they had a few years ago (before Intel came up with the Core architecture), but I'd go with the i7 950 over the latest six-core AMD CPUs, especially since the 950 is quite a bit cheaper at Microcenter. And on the off chance that anyone else is reading, given how many technically inclined people do read these forums, does anyone have any insight to chime in with? These one-on-one bullying matches get a little lonely.