SoC: Back and forth!


Wouldn't waiting for the 805 take too long for the product?
 Yes and no.

Yes: If there's no development board available right now and the chip won't be compatible with any existing one.

No: If there's a development board available OR the chip is mostly feature-compatible (similar to the DM3730 and OMAP3), then we could use the Snapdragon 800 devboard to design the PCB and put the 805 in for the real prototype run.

That's something we don't know yet; we need to wait for Qualcomm for more information.

The problem is that the GPU is way weaker than the Snapdragon's, around 30%. I am not sure what we want to do (emulation bottlenecks are usually the CPU, not the GPU).
I think that depends on the system.

Take a look at DreamCast, PSP and N64 for example:

Main bottleneck here is the GPU, not the CPU.

The same issue probably applies to GameCube emulation (Dolphin) and maybe the PS2 (if that would even be possible).

More recent systems mostly use 3D graphics, which will mostly be handled with wrappers and the GPU.

(unless you want to emulate even more recent systems, but for those the CPU power would be way too low anyway).

For games like Doom 3, etc., it's also mostly the GPU you need.
 
Not sure about the driver version; I'm using the kernel/Xorg version combination that does not result in Xorg being permanently pegged at 100% CPU usage.

It's especially common to crash in the PCB-editor when you zoom in/out.
I'm on a 3.13.2 kernel with the 2.99.909 Intel driver and have just tried whether I could crash it by zooming in the PCB editor. The PCB I tried it with did not have a huge number of components on it, but I could zoom in and out all the time with no problem whatsoever. No problem with CPU load either.

I think that if going the x86 route were utter nonsense, ED and the team would have discarded the idea right away, either for technical or business reasons. If it's still under consideration, that's certainly because they are not narrow-minded and try to take a pragmatic approach. As ED said, now it's all about testing and seeing which option is best. The only constructive thing that can be done now is to bring even more technical arguments, so an educated choice can be made.
Exactly. I couldn't have said it better.
 
There is a lot of unconstructive spam here, which is overdone IMO.

No offence, but there is a definite hatred of x86 here that seems more emotional than logical.

I'm sure Ed won't pick x86 anyway.

The OMAP5 will be fine, but given the chance there is no harm in considering the Snapdragon or other possible candidates.

Perhaps one of these other SoCs will yield some unique benefits.

Let's wait and see and keep an open mind.
We don't hate x86, we hate losing notaz :(
 
It's funny how some people label all the ideas they don't like as "spam".
No matter whether you take the "opinions you dislike" definition or the usual definition of spam, this thread is full of it. And because no one is summarizing the information (say, on the wiki for example), the discussion doesn't go anywhere, even when ignoring the posts of certain people.
 
What I'm a bit sceptical about is that they received that development kit directly from Intel.

We know that Intel has manipulated benchmark software before, so we can't say for sure that all of this is true.

(as mentioned: No real comparison has been released so far).

Also, this is a bit of an issue:

"The big unknowns are things like video decode power efficiency, perf and quality of their ISP and idle power efficiency vs. Qualcomm."

Power efficiency is one thing to care about.

It could be that it uses less power at max load, but how about power usage at min load?

High idle power would reduce standby time and also battery life for lower-end applications (like NES/SNES emulators or MP3 players).

Most of the time the user will probably NOT need full CPU power.
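
To put rough numbers on that, here's a toy duty-cycle calculation. All values are made up, just to show why the idle figure dominates battery life for light workloads:

```c
/* Why idle draw matters more than peak draw for light workloads.
 * All numbers are placeholders, not measurements of any real SoC. */
#include <stdio.h>

int main(void)
{
    const double battery_wh = 15.5;  /* roughly a Pandora-sized battery */

    /* Hypothetical chip: efficient at full load, mediocre at idle. */
    const double idle_w = 0.8, load_w = 3.0;

    /* An SNES emulator or MP3 player might keep the CPU busy only
     * 20% of the time, so average draw is dominated by idle power. */
    const double busy = 0.20;
    const double avg_w = busy * load_w + (1.0 - busy) * idle_w;

    printf("average draw: %.2f W -> %.1f h runtime\n",
           avg_w, battery_wh / avg_w);
    return 0;
}
```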

There are a lot of unknowns on the Intel side, which is a risky thing.
 
To really remove those unknowns, I guess ED would need to buy both a Snapdragon and a Z3770 dev board and actually compare them. It might be a good idea for ED to just ask Intel "Is there a Z3770 dev board which fits these and these requirements?", and if they say no, then we have one point less to discuss.
 
There are a lot of unknowns on the Intel side, which is a risky thing.
Why don't you try to get a development kit from Intel yourself?

About the power consumption: since there are real devices with that chip (like that Dell tablet mentioned before), it should at least be possible to get unmanipulated numbers for power consumption and battery runtime. Dell claimed a runtime of about 10 hours for their tablet when running some web browser test. The battery they were using was a little larger than the Pandora battery, but not by much. The display was larger too, so overall it should be fairly similar. It shouldn't be too hard to verify that claim.
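
A quick sanity check of that claim could look like this. The tablet battery capacity here is an assumption, not a spec-sheet value; only the Pandora figure (4200 mAh at 3.7 V nominal) is the known one:

```c
/* Back-of-the-envelope check of a battery runtime claim.
 * Tablet capacity is assumed; verify against the spec sheet. */
#include <stdio.h>

int main(void)
{
    const double tablet_wh = 18.0;      /* assumed tablet battery capacity */
    const double claimed_h = 10.0;      /* Dell's claimed browsing runtime */
    const double pandora_wh = 4.2 * 3.7; /* ~15.5 Wh Pandora battery */

    /* Implied average system draw during the test. */
    const double avg_watts = tablet_wh / claimed_h;

    printf("implied draw: %.2f W\n", avg_watts);
    printf("runtime on a %.1f Wh battery: %.1f h\n",
           pandora_wh, pandora_wh / avg_watts);
    return 0;
}
```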
 
I for one am glad he is sticking with TI and the OMAP5.

If I am correct, this should mean all the current Pandora packages will also run on the Pyra. Backwards compatibility is important with so many apps already made, especially since many of their makers have moved on and won't be making updated versions.
 
Take a look at DreamCast, PSP and N64 for example:
For the N64, the main problem is that its "GPU" is a coprocessor that takes a lot of time when you emulate it via shaders. The PSP GPU problem is true. The bottleneck on the Dreamcast is the CPU, AFAIK; at least ptitSeb or Raziel said so. It is a 200 MHz CPU, and some of its instructions need 7 or 8 instructions on the Pandora CPU.

I am not sure how the GPU vs. CPU balance scales for other emulators. As far as I know, Dolphin's main problem is the CPU emulation; the GPU in the GameCube is a simple 12-year-old Radeon chip, 7 years old in the Wii. Modern hardware can handle the GX code quite efficiently; there are high-res texture packs for Dolphin that show how good the games could look if a better GPU had been used in the Wii.
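
To illustrate the instruction-expansion point: even a minimal interpreter spends host instructions on fetching, decoding and dispatching before doing any real work. A toy sketch with a made-up mini-ISA (not actual SH4/Dreamcast emulation):

```c
/* Toy interpreter for a hypothetical 2-register mini-ISA, showing
 * why one guest instruction costs several host instructions:
 * fetch, decode and dispatch all burn host cycles before the
 * actual operation runs. */
#include <stdint.h>
#include <stdio.h>

enum { OP_ADD, OP_SUB, OP_HALT };

int main(void)
{
    uint8_t program[] = { OP_ADD, OP_ADD, OP_SUB, OP_HALT };
    int32_t r0 = 10, r1 = 3;
    size_t pc = 0;

    for (;;) {
        uint8_t op = program[pc++];   /* fetch: host load + increment */
        switch (op) {                 /* decode/dispatch: compare + branch */
        case OP_ADD: r0 += r1; break; /* the one "real" guest operation */
        case OP_SUB: r0 -= r1; break;
        case OP_HALT:
            printf("r0 = %d\n", r0);
            return 0;
        }
    }
}
```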
 
Yes and no.

Yes: If there's no development board available right now and the chip won't be compatible with any existing one.

No: If there's a development board available OR the chip is mostly feature-compatible (similar to the DM3730 and OMAP3), then we could use the Snapdragon 800 devboard to design the PCB and put the 805 in for the real prototype run.

That's something we don't know yet; we need to wait for Qualcomm for more information.
I see. Was just about to ask if you could just use the 800 for now.
There are a lot of unknowns on the Intel side, which is a risky thing.
 
I think both CPU and GPU are pretty fast on OMAP, Bay Trail and Snapdragon.

(Compared to Pandora)

For me Perf/Watt is more important.

(Careful, random example following. Not reality)

If Processor A is 10% faster at max speed but consumes significantly more energy than Processor B for the same work, I would use the slower one.
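
The same made-up example in code, just to show the maths (these are placeholder numbers, not real benchmark data):

```c
/* Perf/Watt comparison with made-up numbers, mirroring the
 * hypothetical example above. */
#include <stdio.h>

int main(void)
{
    /* Processor A: 10% faster, but hungrier (all values assumed). */
    const double perf_a = 110.0, watts_a = 4.0;
    const double perf_b = 100.0, watts_b = 2.5;

    printf("A: %.1f work/W\n", perf_a / watts_a); /* 27.5 */
    printf("B: %.1f work/W\n", perf_b / watts_b); /* 40.0 */
    /* B does less work per second but much more work per joule,
     * which is what matters for battery life at a fixed workload. */
    return 0;
}
```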
 
I don't, because it does not matter what the Android guys did for which reasons.
Ok, you're missing the point. Doesn't matter, I'll cut straight to it.

The reason the Pandora is softfp is literally because that's the way it was done: OpenEmbedded (the system used to compile the kernel and most of the packages) built everything softfp by default, all existing binaries including executables and libraries were already softfp (and occasionally plain soft-float), and everything up until a few years ago was softfp, because ARMv5 (which mostly didn't have an FPU) was still quite prominent. I'm pretty sure even GCC defaulted to softfp, but it was so long ago that my memory could be wrong, so I'm not going to state it as fact; I'm pretty sure it was, and I'll try to find the details later.
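
For anyone unfamiliar with the difference, a minimal sketch, assuming a GCC ARM toolchain. The C source is the same either way; only the calling convention changes:

```c
/* Illustration of the ARM float-ABI difference discussed above.
 * Build the same file two ways:
 *
 *   gcc -mfloat-abi=softfp -mfpu=neon scale.c   # float args in integer regs
 *   gcc -mfloat-abi=hard   -mfpu=neon scale.c   # float args in VFP regs
 *
 * The two ABIs are link-incompatible, which is why a whole distro
 * (including binary-only blobs like a GPU driver) must pick one. */
#include <stdio.h>

float scale(float x, float factor)
{
    /* With softfp, x and factor arrive in r0/r1 and are moved into
     * VFP registers; with hard, they arrive in s0/s1 directly. */
    return x * factor;
}

int main(void)
{
    printf("%f\n", scale(2.0f, 1.5f));
    return 0;
}
```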

2014 is a completely different world from 2007; criticising the team for following the market at the time is unfair.
 
The Snapdragon 800 Processor features Asynchronous Quad CPU cores with speeds up to 2.15GHz per core
Source: http://shop.intrinsyc.com/products/snapdragon-800-series-apq8074-based-dragonboard-development-kit-1

Isn't it supposed to be 2.3 GHz?

Is Qualcomm also cheating?

The more research I do, the less I believe what any numbers say.

EDIT:

I didn't see any heatsink on the dev board.

Does this mean it actually doesn't need one?

Would be an advantage over the OMAP then.

Testing is required.
 
I think both CPU and GPU are pretty fast on OMAP, Bay Trail and Snapdragon. (Compared to Pandora) For me Perf/Watt is more important.
I agree, which is why you should carefully compare those SoCs regarding those matters.

2014 is a completely different world from 2007; criticising the team for following the market at the time is unfair.
I am not criticising the team for it, and I am well aware that in 2007 ARM CPUs without an FPU were much more common; the Pandora's CPU isn't one of those, though. I was merely asking why that decision had been made. As it turned out, it was actually because of the graphics driver. So it doesn't even matter whether OE used softfp as the default. All I was saying is that it would have been possible to use hardfloat on the Pandora, even back then, if it weren't for that driver issue.

Choosing not to use hardfloat is one thing; not being able to is another.
 
I am just a middleman here. Exophase mentioned Allwinner in the other thread, and he wonders why Ed has not mentioned it :)

Let's discuss... :)
 
^I thought he said he didn't want to deal with a Chinese company.
 