Getting closer...


I would love to see the Pyra use the Kirin 960 just like the HiKey 960. That is one of the few SoCs I've seen that excels at running the current Dolphin for ARM builds.

Well, maybe not excels... but seems at least playable to me...
 
@levi I didn't mean running any emulator this way, but a specialized one that can and does rely on being the only process executing (at any given time it is the one running, across all cores). Well, that thought wasn't well formed when I wrote it earlier. Does Linux provide a way to guarantee that a process, or even a group of processes, runs and stops simultaneously on the number of cores it wants (in terms of context switches)? Or do I first have to ask whether CPUs/SoCs provide any basis for this at all?
 
Looking really good!
I got my first car before the Pyra, so now I'm considering a road trip to Ingolstadt when the time finally comes. @EvilDragon, are you open to trading in person? :)

'Bout the i.MX8M: sorry if I TL;DR'd the discussion and missed points already made, but I suspect what matters most for games and emulators is constant performance, and better thermal efficiency would certainly help with that, right?
By the way, do GPU-intensive processes overheat the OMAP?
 
@levi I didn't mean running any emulator this way, but a specialized one that can and does rely on being the only process executing (at any given time it is the one running, across all cores). Well, that thought wasn't well formed when I wrote it earlier. Does Linux provide a way to guarantee that a process, or even a group of processes, runs and stops simultaneously on the number of cores it wants (in terms of context switches)? Or do I first have to ask whether CPUs/SoCs provide any basis for this at all?
Yes, you can use 'taskset' (or various other tools) to create a task running with an affinity to a specific core, but that won't stop the OS interrupting you if it really needs to, so I'm not sure that gets you what you want.
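
For what it's worth, here's a minimal sketch of that kind of pinning from Python (assuming Linux and Python 3; the real-time part needs root, and the kernel can still preempt you for interrupts, so this gets you "mostly alone on the core", not bare-metal exclusivity):

import os

os.sched_setaffinity(0, {0})   # restrict this process (pid 0 = self) to CPU 0
try:
    # optionally move to a real-time scheduling class; requires privileges
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
except PermissionError:
    pass                       # fall back to the normal scheduler if not allowed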

Yes, I thought you meant a specialised emulator that could run on bare metal. You could do that, but it would depend on the specific locations in the memory map of your controller, your framebuffer and your audio buffers, which makes it very specific to particular hardware. Those values are, I think, the same between different units, but they can and do change with a new revision of the hardware, so your audience would be very limited, which makes the point of doing it more limited too.

But yeah, writing code on the old 8-bit machines often had you interfacing with the keyboard and video memory-mapped locations directly, and manufacturers had to retain those locations over different revisions of hardware, sometimes leading to some very odd arrangements. And even then the OSes would have been running periodic processes to service the sound buffers and to poll the keyboard inputs. All CPUs beyond the very simplistic ones from the 1960s run timed interrupts that allow the OS to install routines to do general housekeeping of the peripherals. Without those running, you have to do a hell of a lot of threading work to ensure the sound system never runs out of sample data and that button presses actually get detected. It's not really something many people have done since the 1970s.
 
I suspect what matters most for games and emulators is constant performance, and better thermal efficiency would certainly help with that, right?
By the way, do GPU-intensive processes overheat the OMAP?

that is an interesting point. since the DMIPS/core performance scales per MHz, if we can only keep the OMAP5 at 1.0 GHz with somewhere around 3.5 DMIPS/core/MHz, compared to the A53's 2.3 DMIPS/core/MHz at 1.5 GHz, we'd get roughly 3450 DMIPS/core for the i.mx 8M vs. 3500 DMIPS/core for the OMAP5. so they'd be roughly comparable at that stage, but the OMAP5 still wins out unless the i.mx 8M can be overclocked, which would increase power consumption (though not sure how that would compare to what the OMAP5 uses).

but the burst speed probably does help (going up to 1.7 GHz for the OMAP5 gives almost 6000 DMIPS/core, whereas the i.mx 8M probably doesn't get past 4000 DMIPS/core).

but it certainly wouldn't feel like an upgrade, except in the battery life (hopefully).

rough numbers of course...
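
spelled out, the rough per-core arithmetic behind those numbers (ratings as quoted in this thread, not measurements):

def dmips_per_core(dmips_per_mhz, mhz):
    return dmips_per_mhz * mhz

print(dmips_per_core(3.5, 1000))   # OMAP5 A15 held at 1.0 GHz    -> ~3500
print(dmips_per_core(3.5, 1700))   # OMAP5 A15 bursting to 1.7 GHz -> ~5950
print(dmips_per_core(2.3, 1500))   # i.mx 8M A53 at 1.5 GHz        -> ~3450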
 
Being tied to old kernels due to orphaned graphics blobs has never been fun.
This seems not to be carved in stone.
I have recently made the old omap3 blobs work on kernel 4.15-rc9, except for some clock enable/disable issues and providing proper framebuffer addresses (it just needs more effort to locate the bug).
It looks as if there is no tight connection between kernel versions and graphics blob versions.
The reason (at least for omap + pvr/sgx) is a good architecture with clear separation of tasks:
* the kernel driver just provides the infrastructure to access the GPU, but plays no very active role once access is established via shared memory and DMA.
* the blobs have their own protocols for the user-space libs to communicate with the GPU microkernel and shaders, independently of the kernel version.
Well, this changes if the kernel wants to solve things that previously have been solved in user-space.
 
Does the Pyra even have (gaming-ready) 3D acceleration yet (or soon)? I mean, it's not an unimportant point (for me) and I would definitely not buy a device that has no usable HW acceleration. Or at least not if it takes months after release to make use of it. Unused HW features are so depressing, believe me, I had a Pandora right from the beginning...
Competition is just too strong with the GPD Win 2 on that front. And at their current Chinese speed I would not be surprised if the GPD Win 3 is out before the Pyra. ;) But I see it positively, no worries, competition is good. :)
 
Concerning possible follow-up SoCs: if you can have open source GPU drivers, you want to pick an SoC that has them. PowerVR GPUs have good performance, but they're completely closed off and I don't expect that to change. The only three mobile GPUs I know of that have an open source stack are Vivante, Adreno and VideoCore. Vivante is mainly found in i.MX, so that still brings us to the i.MX 8 QuadLite, which would be a sidegrade in performance but give better battery life and drivers. Second would be a Snapdragon SoC, since many are supported by the Freedreno driver and Qualcomm actually supports work on the driver. The other option is a Broadcom SoC. They've reportedly been working on a VideoCore V to replace the ageing VideoCore IV (used by the RPi), and they have good open source support. There is reportedly already a VC5 chip out there, but it's very elusive and doesn't even show up on Broadcom's website.

Seems like i.MX or Snapdragon are the best choices. If there isn't a good choice now, we can always wait.
 
The only three mobile GPUs I know of that have an open source stack are Vivante, Adreno and VideoCore.
You are forgetting Intel here. They have been using their own iGPUs for some time now (the dark days of PowerVR in Intel SoCs are over), with open source drivers.
 
IMO, it's probably a little early to look for an upgrade SoC. By the time the Pyra comes out, plus a further 6 months of settling in, you might want to consider an upgrade board then. And by then, who knows what the options will be.
 
When my processors get to be 5 years old, I at least consider upgrading. Then again, considering SoC options now doesn't mean you'll get teased into buying an upgrade for Xmas this year, does it?
I know it's Linux and that one ain't that demanding, but I could easily get excited about an upgrade if it said full HW support for x265 on the box. ;)
 
Concerning possible follow-up SoCs: if you can have open source GPU drivers, you want to pick an SoC that has them. PowerVR GPUs have good performance, but they're completely closed off and I don't expect that to change. The only three mobile GPUs I know of that have an open source stack are Vivante, Adreno and VideoCore. Vivante is mainly found in i.MX, so that still brings us to the i.MX 8 QuadLite, which would be a sidegrade in performance but give better battery life and drivers. Second would be a Snapdragon SoC, since many are supported by the Freedreno driver and Qualcomm actually supports work on the driver. The other option is a Broadcom SoC. They've reportedly been working on a VideoCore V to replace the ageing VideoCore IV (used by the RPi), and they have good open source support. There is reportedly already a VC5 chip out there, but it's very elusive and doesn't even show up on Broadcom's website.

Seems like i.MX or Snapdragon are the best choices. If there isn't a good choice now, we can always wait.

I personally do not need an open source GPU driver; they are usually much slower than the real ones (correct me if I'm wrong), and I want to have the best possible performance. I'm sure the Pyra will also run great with a closed source GPU driver, and the average user has no real advantage in using an open source alternative.
 
I personally do not need an open source GPU driver; they are usually much slower than the real ones (correct me if I'm wrong)
If the manufacturer were to provide good information on the hardware, there is no reason why open source drivers couldn't reach the same performance proprietary drivers do.
Methods that have to rely on poking the HW somewhat blindly and listening to how it squeaks are inefficient enough that this is why open source drivers are known to be slower. Of course, the man-hours have to be put in, too.
 
If I remember right, Qualcomm turned ED down.

I'm not in the mood to look it up right now, but I think Qualcomm bought NXP (and therefore i.MX* SoCs now belong to Qualcomm)?

I'm all for free drivers, so I welcome i.MX (and have good experience with the i.MX6, writing this on one of them), but I'm not sure I'm ready to upgrade so soon after the first version.
On the off chance that they want to skip the OMAP5 and ship the i.MX8 as the first product I'd be happy, but I think that makes no sense; they now have to concentrate on getting the current OMAP5 version as ready as they can, and there'll be time to look at upgrades later. Myself, I can't justify throwing electronics to waste every 2 or 3 years, so if the upgrade design starts immediately after shipping the Pyra I may skip one upgrade, whatever is in it.

I know it's not at all easy and beside the point at this stage, but I'd love for the wifi to require no blobs. Besides there being no options on the market, legislators are getting dumber, so free wifi may be outlawed at any point. So I'll probably use a USB wifi dongle and miss any improvement in RF on the mainboard.

Good luck team and take care of yourselves.
 
I personally do not need an open source GPU driver; they are usually much slower than the real ones (correct me if I'm wrong), and I want to have the best possible performance. I'm sure the Pyra will also run great with a closed source GPU driver, and the average user has no real advantage in using an open source alternative.
The Etnaviv driver is faster than Vivante's proprietary driver as far as I know. Freedreno is quite competitive. AMD's open source drivers used to be terrible in performance, but they put some good elbow grease into them; they've overtaken the performance of their proprietary driver on Linux and have come close to performance parity with their Windows driver in just a few years. The R600 driver for pre-GCN AMD GPUs has even taken many of their older GPUs up to OpenGL 4.5, while TeraScale 3 (right before GCN) on their proprietary driver only went up to OpenGL 4.2 (even on Windows).

In short, it's a matter of effort and support. Open source drivers usually lead to more features down the line, especially if a company is known to quickly drop its driver support. Performance is important, but if you can have it open source, have it open instead of not.
 
My wishful thinking is still in for a Ryzen-based SoC to appear out of nowhere... that would be nice at least.
Ahem, this also sounds a bit far-fetched, but Nvidia was refused since they wanted the hardware designed and produced by their own companies... what if they did that with just the CPU board? Still unlikely to happen, so wishful thinking again.

Anyways, I wouldn't bother with another CPU board unless it can really be called upgrade.
 
My dream CPU board will be able to run Dolphin.

That needs:
- 64-bit ARM (the 32-bit JIT was removed as it was a maintenance and performance nightmare)
- Wide frontend (the JIT hates narrow/in-order cores like the A53 and A55)
- Decently fast dual-core CPU (an A57 at ~2 GHz sustained can just about play some games. An A72 at 2.5 GHz should be enough. An A73 has no advantage over an A72. No idea about the A75, but it looks promising from the specs. The Samsung M3 specs look awesome, but it's not even released yet, and will probably cost $$$ if you can even get Samsung to talk to you...)
-- No need for more than 2 cores; Dolphin /really/ kills one core with the JIT. A second core can help with the GPU thread and audio DSP, but often the JIT thread is the bottleneck anyway
-- This needs to be sustained: many last-gen devices (like the SD820) could run many things... for a few minutes :p
- GLES3-capable GPU (GPU power isn't quite as important, as driver quality and overhead are normally a bigger deal than "raw power")
-- Vulkan is a plus, but with most current drivers it's pretty much useless on mobile, as half the features don't work or have breaking bugs.

I know this is a bit of a reach, as this level of performance only entered high-end devices in the last couple of generations, but hopefully in a year or two it'll filter down into more reasonable cost/power envelopes. Hopefully not too long; I was playing around on an MT8176 that ticks most of these boxes ~2 years ago...
 
I'm fine with slightly lower single-core performance as long as I get twice the battery life ;).

I assume if it's using half the power it's also producing half the heat, which would be nice. I'd consider a significantly more efficient, if slightly less performant, SOC to be an "upgrade."

(DISCLAIMER: I know shockingly little about these things and am probably completely wrong.)
 