The Future of Pyra's CPU


I think only HDMI would be better in the upper part; all the others, as they are not super fast (the SD card interface used in the Pyra is not that fast), could be combined over USB 3. There could even be room for an M.2 drive in the lower part, and that could speed up storage even over a USB 3 link, compared to SD cards.
If there were a USB3 hub in the lower part, then all the interfaces (SD card readers, audio connections, modem, keyboard, external USB ports) would each need a specific USB converter chip connected to that hub.
With the current design, all of these are connected directly to the interfaces of the OMAP5 (or a future SoC; such SoCs provide multiple dedicated interfaces for exactly these peripherals).
So using USB3 to interface everything to the SoC adds chips and cost. And it needs space in the lower part for all of this.
Yes, we cannot change the past. These design decisions were also based on the availability of components around 2014; I don't know whether a cheap and easy-to-use USB3 hub chip was available at all, and USB-C wasn't even a standard yet.
But if you want to design a Pandora 3 or a Pyra++, the fundamental architecture can be reconsidered. But then it is a completely different product/device and not just a new CPU board for the existing Pyra.
 
But if you want to design a Pandora 3 or a Pyra++, the fundamental architecture can be reconsidered. But then it is a completely different product/device and not just a new CPU board for the existing Pyra.
I am thinking about my own personal computer design :) but it won't be a Pyra++, rather another type of machine. Thanks for your suggestion, though.
 
One thing that I like about the C64 is that it assumes knowledge of the computer because there are few abstractions. I like how you can just POKE and PEEK directly into memory. I recently even bought a C64 manual at a thrift store.
Correct, the C64 even "encourages" you to touch the hardware from BASIC: because the C64 shipped with the old BASIC v2.0, which has no commands for its advanced graphics and sound capabilities, you had to touch the hardware with POKE and PEEK from BASIC.
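
Just to make that concrete, here is a minimal C sketch of the same kind of direct hardware access (assuming a C64-targeting cross compiler such as cc65; the macro name is mine). It does what POKE 53280,0 and PEEK(53280) do from BASIC, touching the VIC-II border colour register at $D020 (53280 decimal):
Code:
#include <stdint.h>

/* VIC-II border colour register, $D020 = 53280 decimal.
 * Only the low 4 bits hold the colour; the upper bits read back as 1s. */
#define VIC_BORDER (*(volatile uint8_t *)0xD020u)

int main(void)
{
    uint8_t old = VIC_BORDER & 0x0F;  /* PEEK(53280) AND 15 */

    VIC_BORDER = 0;                   /* POKE 53280,0 -> black border */
    /* ...do something while the border is black... */
    VIC_BORDER = old;                 /* put the previous colour back */

    return 0;
}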

The Commodore 128, being a heavily upgraded C64, had a much more advanced BASIC v7.0 in C128 mode, with all kinds of commands for graphics and sound; with a single BASIC command (MOVSPR number, angle#velocity) you can set a hardware sprite moving smoothly and steadily around the screen. But for me the best part of the C128 was its very good integrated machine language monitor (you enter it with the MONITOR command from BASIC), so you can program directly in assembly, inspect/modify memory and ML programs, etc. And since it is a simple assembler (without labels), whenever you enter an ML instruction you immediately see how it is assembled/translated to machine code, and you learn from that.
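
For contrast, this is roughly what a single MOVSPR hides. A rough C sketch (assuming a cc65-style compiler and the stock C64 memory map; the macro names are mine, and loading the actual sprite shape into block 13 at $0340 is left out) that enables sprite 0 and slides it across the screen by writing the VIC-II registers directly:
Code:
#include <stdint.h>

#define REG(addr) (*(volatile uint8_t *)(addr))

#define SPR0_X     REG(0xD000u)   /* sprite 0 X position (low 8 bits) */
#define SPR0_Y     REG(0xD001u)   /* sprite 0 Y position */
#define SPR_X_MSB  REG(0xD010u)   /* 9th X-position bit per sprite */
#define SPR_ENABLE REG(0xD015u)   /* one enable bit per sprite */
#define SPR0_COLOR REG(0xD027u)   /* sprite 0 colour */
#define SPR0_PTR   REG(0x07F8u)   /* sprite 0 data pointer (screen at $0400) */

int main(void)
{
    uint8_t x;

    SPR0_PTR    = 13;     /* shape data at 13 * 64 = $0340 (cassette buffer) */
    SPR0_COLOR  = 1;      /* white */
    SPR0_Y      = 100;
    SPR_X_MSB  &= 0xFE;   /* stay within the first 256 X positions */
    SPR_ENABLE |= 0x01;   /* turn sprite 0 on */

    for (;;) {            /* crude movement loop: BASIC 7.0 does this for you */
        for (x = 0; x < 255; ++x) {
            SPR0_X = x;
        }
    }
}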

Today, on modern machines, we are too far from the hardware. And the worst part is that a lot of young people have never touched the lower levels and never will, so for them it is like magic.

About C64/C128 manuals: get the "C64 Programmer's Reference Guide" (or the C128 version if you are using a real C128 or an emulated one in VICE), the bible of the C64, because the standard user manual is too basic. If you can't find it in print in shops or on the second-hand market, you can download it: https://archive.org/details/c64-programmer-ref

P.S.: I also own "Das Grosse Commodore 64 Buch" by Hecht Martin, in German. It seems impressive (at least in print it is very thick :D ) and it looks like the real C64 bible, but I haven't yet begun to learn/read German (I think reading something interesting and enjoyable helps a lot when learning a foreign language).
https://archive.org/details/Das_GroBe_Commodore_64_Buch_1989_Data_Becker_De
 
Yes, the x86 ISA and the ones that followed add all the new instructions to the main ISA, using the extra bits to add commands. That complicates microprocessor design because it doesn't know beforehand how many bits to fetch; it could be 8 for the 8086 ISA, 16 for the 386, 32 for the 486 or 64 for AMD64. It probably fetches the full 64 bits and, if it turns out it didn't need to, rolls back the program counter and only interprets the bits it needs, but that's a complicated operation to do on every instruction fetch.
You are partially wrong and partially correct :)

ARM, being a RISC ISA, has all machine instructions the same length (32 bits in the normal mode, or 16 bits in the secondary Thumb mode). x86, being a very old (in origin) CISC ISA, goes the opposite way: instruction length varies from 1 to 4 bytes (i.e. up to 32 bits). And this makes decoding and executing more expensive and less efficient.

In the x86 world, an old 8086 can fetch instructions that are 1 byte long, others 2 bytes long... others 4 bytes long (32 bits), and those instructions may operate on bytes or words (2 bytes). On a 32-bit x86 (IA-32) CPU it is the same, but instructions can be even longer, and they can operate on a byte, word or doubleword (32 bits). And in x86-64¹ even more...
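
To make that spread concrete, here is a small stand-alone C sketch (with a handful of encodings I picked from the original 8086 instruction set purely for illustration) that prints a few 16-bit instructions together with their machine-code bytes and lengths:
Code:
#include <stdio.h>

/* A few 8086 instructions with their encodings, showing how the length
 * already varies between 1 and 4 bytes in the original 16-bit ISA. */
struct insn {
    const char   *mnemonic;
    unsigned char bytes[4];
    unsigned      length;
};

static const struct insn examples[] = {
    { "inc ax",                { 0x40 },                   1 },
    { "mov ax, bx",            { 0x89, 0xD8 },             2 },
    { "mov ax, 0x1234",        { 0xB8, 0x34, 0x12 },       3 },
    { "mov word [bx], 0x1234", { 0xC7, 0x07, 0x34, 0x12 }, 4 },
};

int main(void)
{
    unsigned i, j;

    for (i = 0; i < sizeof examples / sizeof examples[0]; ++i) {
        printf("%-24s", examples[i].mnemonic);
        for (j = 0; j < examples[i].length; ++j)
            printf(" %02X", examples[i].bytes[j]);
        printf("   (%u byte%s)\n", examples[i].length,
               examples[i].length == 1 ? "" : "s");
    }
    return 0;
}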

But even without the 32- and 64-bit extensions, with only the original 8086 design, it was already a nightmare: instruction length and format vary a lot (1, 2, 3 and 4 bytes long). But it saved memory at a time when memory was expensive and relatively small. Intel was even going to abandon x86 because they knew memory would get cheaper (in fact, Moore's law came from Gordon Moore, co-founder of Intel), but its big next architecture, the object-oriented iAPX 432, was a total commercial and performance failure. IBM chose x86 for its IBM PC (the original IBM PC used the 8088, a "downgraded 8086" with only an 8-bit external data bus), saving Intel and creating the need to evolve x86 with more and more eyesores bolted onto an already ugly ISA.

I hate the x86 ISA, because I have programmed for it (in assembly).

¹: Note that x86-64 isn't called IA-64 the way x86-32 is called IA-32 (Intel Architecture-32). That is because IA-64 (Intel Architecture-64) is Intel Itanium: Intel was thinking that with 64 bits we would forget x86 and move on to a new EPIC ISA. The history is that Itanium was not that good, and was late, and AMD took x86 to 64 bits, so Intel had to follow AMD and copied x86-64 from them. So yes, Intel wanted to leave x86 behind twice in its history, and failed both times :D

ARM instead, since the Thumb ISA was invented, added a flag to the status register that tells the chip which mode to be in, so that the chip can just run in that mode until the next time supervisor mode is entered. I'd argue that it's more efficient to do everything in the biggest mode you can be in. Usually that's the case because it gives you access to bigger registers, but at least in the slightly artificial case where you have a small number and you want to add 2 to it, it's most efficient to do that in Thumb mode if you only need 16 bits for your data.
When you don't need the best performance, Thumb can be better because it saves memory (both in storage and during execution).
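
As a size illustration (hand-encoded from the ARMv4T documentation, not taken from a real build, and with the byte arrays stored little-endian), the same "add 2 to a register" operation is 4 bytes in the classic ARM encoding and 2 bytes in Thumb:
Code:
#include <stdio.h>

/* ADD r0, r0, #2 in the 32-bit ARM encoding (word 0xE2800002). */
static const unsigned char arm_add[]   = { 0x02, 0x00, 0x80, 0xE2 };
/* ADDS r0, #2 in the 16-bit Thumb encoding (halfword 0x3002). */
static const unsigned char thumb_add[] = { 0x02, 0x30 };

int main(void)
{
    printf("ARM   ADD r0, r0, #2 : %u bytes\n", (unsigned)sizeof arm_add);
    printf("Thumb ADDS r0, #2    : %u bytes\n", (unsigned)sizeof thumb_add);
    return 0;
}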
 
ARM, being a RISC ISA, has all machine instructions the same length (32 bits in the normal mode, or 16 bits in the secondary Thumb mode). x86, being a very old (in origin) CISC ISA, goes the opposite way: instruction length varies from 1 to 4 bytes (i.e. up to 32 bits). And this makes decoding and executing more expensive and less efficient.

In the x86 world, an old 8086 can fetch instructions that are 1 byte long, others 2 bytes long... others 4 bytes long (32 bits), and those instructions may operate on bytes or words (2 bytes). On a 32-bit x86 (IA-32) CPU it is the same, but instructions can be even longer, and they can operate on a byte, word or doubleword (32 bits). And in x86-64¹ even more...
x86 instructions can be up to 15 bytes long, and instructions longer than 4 bytes are very common:
Code:
FF A3 88 03 00 00     jmp [ebx+0x388]
0F B6 83 D8 2B 00 00  movzx eax, byte ptr [ebx+0x2BD8]
8D B3 D8 0D 00 00     lea esi, [ebx+0xDD8]
81 C3 A7 B9 10 00     add ebx, 0x10B9A7
65 A1 14 00 00 00     mov eax, gs:[0x00000014]
some random examples from 7z...
 
x86 instructions can be up to 15 bytes long, and instructions longer than 4 bytes are very common:
Code:
FF A3 88 03 00 00     jmp [ebx+0x388]
0F B6 83 D8 2B 00 00  movzx eax, byte ptr [ebx+0x2BD8]
8D B3 D8 0D 00 00     lea esi, [ebx+0xDD8]
81 C3 A7 B9 10 00     add ebx, 0x10B9A7
65 A1 14 00 00 00     mov eax, gs:[0x00000014]
some random examples from 7z...

Yes, of course in x86-32 or x86-64 (the 64-bit x86 ISA) they can be as long as 15 bytes. In the original 8086 (16-bit x86), the beginning and the original x86 ISA I was talking about, instruction lengths only go from 1 to 4 bytes, so even in the original 16-bit x86 ISA they vary a lot (typical of many CISC CPUs), while on a RISC CPU they all have the same length.

I wrote: «On a 32-bit x86 (IA-32) CPU it is the same, but instructions can be even longer».
 
Fascinating history! You guys should write a book on this, even if only for posterity (though I’m sure it would be useful to future generations, as it is to current ones).
 
Sophie Wilson, a Fellow of the Royal Society amongst other credits, talked to a RISC OS user group meeting over Zoom during these Covid times, and the recording popped up on YouTube yesterday. There's some mention of RISC OS topics and older Acorn bits of kit, but most of it should be of general interest if you're into CPU innards. It explains some of the complexities of producing finer-resolution lithography that I wasn't aware of before.

 
Hi all,

Do we have any news regarding the upgradable CPU?

I for one wouldn't mind an SoC with a Mali GPU; at least that one has open source drivers:



RISC-V would also be cool, but that would break backward compatibility with the current OMAP5 and the OMAP3 that originally came with the Pandora; not sure that's what we want.

Cheers, Magic Sam
 