Squidge
Certified Guru
How very observant of you.

icurafu said:
You already said MW was wrong with 550MHz, so I would guess 500MHz just to be even.
Stealth Bagel said:
I'd love to see where I said anything about Xscale being "god's gift to the mobile world", I just mentioned it as an aside.. so yes, you have been starting an argument with me over nothing. And you know, telling people to calm down is pretty condescending, especially when you're the one picking at things...

Exophase said:
You're arguing with me when all I said is that I don't know why Intel abandoned XScale (the CPU...) if they were so interested in mobile.

Calm down, I'm not arguing with you.... I just said Xscale isn't god's gift to the mobile world and other ARM-based architectures seem to have the upper hand right now. Don't be so defensive. Are you looking for an argument?
Exophase said:
Stealth Bagel said:
I'd love to see where I said anything about Xscale being "god's gift to the mobile world", I just mentioned it as an aside.. so yes, you have been starting an argument with me over nothing. And you know, telling people to calm down is pretty condescending, especially when you're the one picking at things...

Exophase said:
You're arguing with me when all I said is that I don't know why Intel abandoned XScale (the CPU...) if they were so interested in mobile.

Calm down, I'm not arguing with you.... I just said Xscale isn't god's gift to the mobile world and other ARM-based architectures seem to have the upper hand right now. Don't be so defensive. Are you looking for an argument?

If I may jump in between you two, I've always thought XScale didn't quite take off because the PDA market never really took off? ARM at least has smaller devices (phones/MP3 players), which may have led to its longevity.
I'd really like to point out that I don't actually care about XScale (which, by the way, is an ARM CPU) and was in fact just wondering why Intel decided to abandon ARM in favor of x86 for the handheld platform :\

Pickle said:
If I may jump in between you two, I've always thought XScale didn't quite take off because the PDA market never really took off? ARM at least has smaller devices (phones/MP3 players), which may have led to its longevity.
Karel Jansens said:
OS/2 kicked [NT's] butt on a PPro!

christo930 said:
From what I can recall (and granted I am getting older) the Pentium Pro was a disaster for Intel because Intel assumed the world would be using pure 32-bit software by the time it was released, but the reality was that Windows 9x had a TON of 16-bit legacy code in it that would make the Pentium Pro run slower than a P1 at the same clock speed when running Windows 9x. Win NT, on the other hand, was faster when running on a P-Pro. So users running Windows NT (which were very few at the time) would benefit from the P-Pro while everyone else would go slower.
I did a lot of support for OS/2 2 and 3 (but not 4, I only ever saw it once) and it was great for companies that had Token Ring networks, IBM mainframes and IBM's document imaging technology. (I forget what the hell it was called, but it allowed a user to have two monitors on the PC end of it and pull up an image, usually a scanned document, on a full-screen paper-white monitor that had a mainframe address (if I recall correctly just a session of the main address), and that system worked awesome, assuming you had a PS/2 running OS/2.) Comm Manager worked great and integrated great, but was a bitch to troubleshoot. It would just stop working for no reason sometimes. And the built-in REXX stuff allowed software pushes long before Novell or MS; you could even push a re-image.

Desktop support calls were way down with OS/2; it worked much better than Windows. One of the companies I worked at that used OS/2 was PECO Energy in Philadelphia, and their desktop support calls tripled when they moved from OS/2 to WFW, then quickly after that to Windows 95, and they lost the ability to pull up those documents on the full-page screens. Their solution was to keep the call center using OS/2 (so they could pull up bills and other documents on the second screen) and give the business units WFW and Win95. IBM did eventually release software to display the documents on the PC's main monitor, but you needed a 21" monitor to use it (that is, to see the image and the MF sessions at the same time).

But as good as it was in certain respects, OS/2 was really slow. We used to call it slow-s/2. I really think it was piss-poor disk caching that made it so slow. If they had fixed that they probably could have taken over the market. To make a change and reboot the 486-66 PS/2 (with a fast SCSI hard disk, 32-bit MCA, and 16MB of RAM, a lot at the time) took 15 minutes. It took multiple minutes to launch just about any piece of software. Comm Manager itself took about 5 minutes to launch with an A session, B session and a P session (a virtual MF printer so you could print MF reports to your local printer).

IBM also had kick-ass printers. They blew away HP's. They were faster, had PostScript, better and more paper handling... I can picture them in my head, but forget the model #'s. We had 3: an entry level, a medium and a faster one. All were networkable if I remember correctly, but only with Token Ring. Then there was LAN Manager, which was the OS/2 file/print server. And any OS/2 machine could be a Lotus Notes server (Windows could too, but OS/2 was much better: same machine, 3x as many clients). Even into the 2000s Vanguard (the mutual fund people) were using OS/2 in their data center for Notes Server. You wouldn't believe what they did with OS/2 (at the desktop).

Back then, if you knew OS/2 you couldn't go more than a few days unemployed. Now they just think you're old.
Chris
christo930 said:
From what I can recall (and granted I am getting older) the Pentium Pro was a disaster for Intel because Intel assumed the world would be using pure 32-bit software by the time it was released, but the reality was that Windows 9x had a TON of 16-bit legacy code in it that would make the Pentium Pro run slower than a P1 at the same clock speed when running Windows 9x. Win NT, on the other hand, was faster when running on a P-Pro. So users running Windows NT (which were very few at the time) would benefit from the P-Pro while everyone else would go slower.
OS/2 kicked [NT's] butt on a PPro!
Actually, Token Ring hurt OS/2 a lot as well (by hurting IBM). Laptops were the real problem. If you have an office building with multiple floors, and some have 4Mbps rings and some have 16Mbps rings, a guy with a laptop can bring the whole floor down if the port doesn't wrap fast enough. I've seen it happen: dozens and dozens of people lose their work because some outside sales guy forgot to change his ring speed. Not to mention, half the mainframe addresses get locked and need to be reset, and so dozens of people call the helpdesk to get their MF reset (or as they like to say, "to get my green screen back"). It also meant that when a department moved, someone had to visit the desktop to change the ring speed before the machine was plugged into the network, to avoid taking the ring down.

But even with this problem, it was still better than Windows. Compaq or some other VAR would come in and say how much better and cheaper everything would be, and then support costs would go through the roof. Thank god I didn't work the helpdesk, just desktop and network admin.
Could you please either stop pushing this argument that I never had any interest in or stop telling me to calm down, be less sensitive, etc. Preferably both :huh:

Stealth Bagel said:
You just sounded like you were really extolling the virtues of XScale there, no you never said 'god's gift to the handheld market', I was using hyperbole.... you're being much too sensitive. Honestly XScale has SO many disadvantages in its current generations compared to competing 'pure' ARM devices, like that S3C chipset series I mentioned from Samsung, that I think it's time Intel started fresh. I mean, doesn't XScale remind you of something: an architecture with very high clock scalability, but poor performance compared to lower-clocked processors, high power use and high heat output? Northwood/Prescott (the NetBurst architecture)... Awful chips. When did Intel really get it right? When they built something new and very unusual from their Pentium M architecture, producing Core 2. I'm sure they could make something better if they put their minds to it. Maybe not better than a company like ARM that focuses exclusively on making high-end mobile parts that operate with a TDP of < 0.1W, but who knows.
Isn't it awesome when people tell you to calm down when you're not even agitated? If there's one thing that annoys me, it's that. And THEN I get agitated. ^_^

Exophase said:
Could you please either stop pushing this argument that I never had any interest in or stop telling me to calm down, be less sensitive, etc. Preferably both :huh:
Exophase said:
I'd really like to point out that I don't actually care about XScale (which, by the way, is an ARM CPU) and was in fact just wondering why Intel decided to abandon ARM in favor of x86 for the handheld platform :\

Pickle said:
If I may jump in between you two, I've always thought XScale didn't quite take off because the PDA market never really took off? ARM at least has smaller devices (phones/MP3 players), which may have led to its longevity.
If I were to guess, I'd say Intel has noticed that there's some very significant research going on in the battery capacity field involving nano-tech; there have been two 10x battery life stories on Slashdot in the last month or so (or at least, one was 10x - today - and another was some other huge number; either 10 or 100x).
Given x86 has such a huge software library around for it (let alone that even non-geeks recognise "x86" as meaning "can probably run Windows software"), the main (only?) reason not to use it is that it's horribly energy inefficient. But if they can get it down to low enough power consumption that the next generation of batteries can power the devices for 5+ hours, even if those available now can't, then they should be able to put a dent in ARM's customer base.
That is, of course, just speculation. But it is definitely notable that battery capacity is increasing VERY fast at the moment, at least on the research front, and that high power usage is only an issue if there is insufficient power to run the chip for the lengths of time needed for good mobile applications...
Well, the question is whether this is really needed. Why should I need more than 5h of battery life? There aren't many situations where I don't have a power outlet for 5h, and when I am going on a train for 10h, I can't imagine playing for the full 10h...

PokeParadox said:
Bah... the increased battery capacity would make the more efficient chips run for longer though...
That battery technology is several years away. In the meantime, Intel is focusing on a new x86-based architecture that is quite a bit simpler than the mainline CPUs they develop. While it is clearly done to save power, it will still use a lot more power than the best ARM CPUs, and Ars Technica speculates (and I agree with them) that it'll also offer less performance. So there are two major reasons not to push for ultra-mobile x86, and for mobile platforms compatibility means less and less. In fact, it basically gives you two things: proper Windows (which actually sucks on mobiles) and games (that also usually suck on mobiles due to poor controls). Furthermore, the library of mobile-suited software is and has been growing in ARM's favor since the ISA has dominated mobile platforms for years.

Tobriand said:
If I were to guess, I'd say Intel has noticed that there's some very significant research going on in the battery capacity field involving nano-tech; there have been two 10x battery life stories on Slashdot in the last month or so (or at least, one was 10x - today - and another was some other huge number; either 10 or 100x).
Given x86 has such a huge software library around for it (let alone that even non-geeks recognise "x86" as meaning "can probably run Windows software"), the main (only?) reason not to use it is that it's horribly energy inefficient. But if they can get it down to low enough power consumption that the next generation of batteries can power the devices for 5+ hours, even if those available now can't, then they should be able to put a dent in ARM's customer base.
That is, of course, just speculation. But it is definitely notable that battery capacity is increasing VERY fast at the moment, at least on the research front, and that high power usage is only an issue if there is insufficient power to run the chip for the lengths of time needed for good mobile applications...
Magnulus said:
Isn't it awesome when people tell you to calm down when you're not even agitated? If there's one thing that annoys me, it's that. And THEN I get agitated. ^_^

Exophase said:
Could you please either stop pushing this argument that I never had any interest in or stop telling me to calm down, be less sensitive, etc. Preferably both :huh:
Yes
EDIT: Oops, I read "several" years as "seven" years.
Could you post the GP2X ones too please?

Squidge said:
Notes: It's still not optimised yet for the OMAP, only for the GP2X. FPS counts are taken each second. After ten seconds, the average FPS for the last 10 seconds is output.
Opinions?
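As a rough illustration (not Squidge's actual benchmark code, just a hypothetical C sketch), counting FPS the way those notes describe could look something like this: tally frames each second, print the per-second count, and report the average over each 10-second window.

[code]
#include <stdio.h>
#include <time.h>

static void render_frame(void) { /* stand-in for the emulator drawing one frame */ }

int main(void)
{
    int frames_this_second = 0;   /* frames rendered in the current second */
    int frames_in_window = 0;     /* frames rendered in the current 10s window */
    int seconds_in_window = 0;
    time_t last_second = time(NULL);

    for (int window = 0; window < 2; ) {       /* demo: run two 10-second windows */
        render_frame();
        frames_this_second++;

        if (time(NULL) != last_second) {       /* one second has elapsed */
            last_second = time(NULL);
            printf("fps: %d\n", frames_this_second);
            frames_in_window += frames_this_second;
            frames_this_second = 0;

            if (++seconds_in_window == 10) {   /* end of a 10-second window */
                printf("average fps (last 10s): %.1f\n", frames_in_window / 10.0);
                frames_in_window = 0;
                seconds_in_window = 0;
                window++;
            }
        }
    }
    return 0;
}
[/code]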
GBA has a good order of magnitude more CPU power than anything else that's emulated on GP2X, with the exception of PS1 and Jaguar, both of which are probably not that far off due to being gimped in the fast memory department.

Stealth Bagel said:
like SNES and GBA which are graphically very intensive without packing a lot of CPU power.... there's no reason SNES emulation couldn't run at 60 FPS with transparencies and other SNES PPU fanciness on the GP2X's 200 MHz main processor if it wasn't for the damn software graphics.
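For a sense of why the "damn software graphics" matter: every SNES-style transparency means per-pixel read-modify-write arithmetic on the CPU. Here's a hypothetical sketch (in C, not taken from any actual emulator) of a 50/50 blend of two RGB565 pixels, applied across a scanline.

[code]
#include <stdint.h>

/* Hypothetical illustration of the per-pixel work a software renderer does
 * for SNES-style transparency. Not real emulator code.
 * 0xF7DE masks off the lowest bit of the 5-bit red, 6-bit green and 5-bit
 * blue fields, so halving and adding stays within each colour channel. */
static inline uint16_t blend_rgb565_half(uint16_t a, uint16_t b)
{
    return (uint16_t)(((a & 0xF7DE) >> 1) + ((b & 0xF7DE) >> 1));
}

/* A 320-pixel scanline blended this way costs hundreds of ALU ops and memory
 * accesses per line, every frame, which adds up quickly on a 200MHz CPU. */
void blend_scanline(uint16_t *dst, const uint16_t *src, int width)
{
    for (int i = 0; i < width; i++)
        dst[i] = blend_rgb565_half(dst[i], src[i]);
}
[/code]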