Does the Pyra have something similar to IME (Intel Management Engine)?


If you mean the Intel Management Engine, then no, the Pyra does not have that, being based on an ARM chip from TI rather than an Intel chip. It does have TrustZone, which is a separate address space and CPU circuitry that is not meant to leak anything to the main OS, but I guess something early in the boot would have to set that up, and most of that code is open source and cross-platform, therefore unlikely to depend on anything as chip-specific as TrustZone.

If you mean an Input Method Editor, then the Pyra has lots of support for European languages built into its keyboard, plus a compose key as a perhaps more complete alternative. And if you want to type something unusual like Chinese, you can always look up the Unicode number, then in Linux press Ctrl+Shift+U, type the hex number and hit space to make the character appear. That's not a practical way to type out Chinese texts, but if you need the odd symbol it can be useful. As far as I understand it, most Chinese input methods actually have you type pinyin and then choose the right character, which would require some more software.
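For example, to find the hex number for a character you want, a quick Python one-liner does it (the character here is just an example):

    # Print the code point to type after Ctrl+Shift+U (Linux/IBus Unicode input).
    ch = "中"                  # whatever character you want to produce
    print(f"U+{ord(ch):04X}")  # prints U+4E2D, so: Ctrl+Shift+U, 4e2d, space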
 
I looked into it, and luckily TrustZone does not seem to be as dangerous as IME. From what I understand, IME executes unverifiable code, which the OS cannot even detect, at the highest permission level. It could hide a backdoor that can be used to do anything with the computer. TrustZone, on the other hand, seems to depend on code that the user can verify for its operation.

Lately I'm getting a bit worried about IME because it seems pretty unsafe. AMD apparently has something similar that's just as dangerous: it uses TrustZone on a separate processing unit to run unverifiable code. For these reasons I think it's good that the Pyra doesn't use processors from Intel or AMD that contain these dangerous features.
 
Last time I went digging, I was only able to find very hand-wavy stuff about TrustZone from ARM. But as far as I was able to get an angle on it, it only runs code some user of the system has put there first; it's not running code from the CPU manufacturer, with all of the resultant hard-coded holes that entails.

And yes, AMD has a thing called 'Secure Technology', which is actually an ARM core baked into the silicon, running a closed-source loop of some code or other.
 
IME and TrustZone are so-called "Secure Elements". (EDIT: looks like IME goes completely off the rails with the concept.)

The idea is, for example, that there is malware that can take control of your system and intercept pretty much anything your system does.
Once that happens, there's no way to check whether the system is compromised, since the malware can intercept that check and make it report that everything is fine.

IME, TrustZone and the like offer a separate computing space that has very limited functionality (it mostly does encryption, signing and things like that) but is supposed to be much more secure.
So the malware that took over the system is unable to take over the secure element. The system can then ask the secure element to check if the system is compromised and the malware will not be able to intercept it.
(Of course it's a bit more complicated than that, but that's the general idea)
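A toy model of that check in Python, just to show the shape of the protocol; all the names are made up, and a real secure element would use asymmetric keys and hardware-measured boot hashes:

    import hashlib
    import hmac
    import os

    # Pretend this key is burned into the secure element at the factory
    # and is unreadable from the main OS (illustrative only).
    DEVICE_KEY = b"factory-provisioned-secret"

    def secure_element_quote(boot_hash: bytes, nonce: bytes) -> bytes:
        # Runs inside the isolated world: sign what was measured at boot.
        return hmac.new(DEVICE_KEY, boot_hash + nonce, hashlib.sha256).digest()

    def verifier_check(boot_hash: bytes, quote: bytes, nonce: bytes) -> bool:
        # A verifier holding the same key (or the public half, in a real
        # asymmetric scheme) re-computes the quote for its fresh nonce.
        expected = hmac.new(DEVICE_KEY, boot_hash + nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, quote)

    nonce = os.urandom(16)  # fresh per check, so old quotes can't be replayed
    good = hashlib.sha256(b"known-good system image").digest()
    assert verifier_check(good, secure_element_quote(good, nonce), nonce)

Malware in the main OS can lie about its own state, but without DEVICE_KEY it cannot forge the quote.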

It leads to the problem that the secure element becomes a single point of failure: all security is moved there, so if it fails, everything fails. But most of the time it's still better than having no secure element at all.
Then there's the problem that much of the software that runs in the secure element is closed-source. So you've got something in your device that has complete control over your system's security, and you don't know what's in there. If the manufacturer or a government wants to install a backdoor, it's the perfect place to do it.

These mechanisms weren't designed for backdoors. There's a clear need for them and they've shown themselves to be quite useful.
Sure, it's a great place to put a backdoor, but if it's not there it can be anywhere else in the supply chain. A chip with secure software can be compromised by a small hardware change introduced in the factory. The shipping process can be intercepted by an actor who swaps a secure device for an insecure one, etc. There are many actors in the supply chain, and many points where it can be exploited.

So at some point you have to decide to either trust the actors or not use any device (or use them while considering them to be compromised).
 
@ElPoco Let's say someone's running Gentoo and individually patches a lot of the software they're using. How is IME supposed to determine whether the system is legit? (That example is a surrogate for "there are a lot of Linux distributions out there, BSDs, Minix, Plan9, ReactOS, Haiku, etc.".)
 
There's no way to tell a user-modified system image from one modified by a malicious third party.
So if you run your own patched OS (or, generally speaking, an OS from a supplier who can't sign their images with a certificate that's in the secure element, or in a bootloader that is itself validated by the secure element; it's usually a chain of trust where each boot stage validates the next one), the secure element won't be able to validate your OS.
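A minimal sketch of that chain in Python; the images are made up, and a real chain verifies cryptographic signatures stage by stage rather than the bare hashes this toy uses:

    import hashlib

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # Illustrative stage images.
    bootloader = b"...u-boot image..."
    kernel     = b"...stock kernel image..."

    # Hashes baked in when the vendor sealed the chain.
    expected = [sha256(bootloader), sha256(kernel)]

    kernel = b"...user-patched kernel..."  # the Gentoo scenario from above

    for image, want in zip([bootloader, kernel], expected):
        if sha256(image) != want:
            print("unverified image: warn the user or refuse to boot")
            break
    else:
        print("chain of trust intact")

Patching the kernel changes its hash, so the last link fails, which is exactly why the secure element can't vouch for a self-built image.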

I don't know how it's handled on desktops. On Android, you get a message at boot time telling you that you're running a modified system.
 
So at some point you have to decide to either trust the actors or not use any device (or use them while considering them to be compromised).
So how could ARM hide a backdoor in their CPUs? TrustZone by itself doesn't seem suitable for that. I still want to place as little trust as possible in as few actors as possible.
 
ARM don't make CPUs. They just design the specs. I guess they could try to sneak in some backdoor-enabling designs.
Then there's the company that makes the actual CPU, which could also have a backdoor in the hardware, or in the software that comes with it (the drivers and all the lower-level software that links the hardware to the OS).
Then there's the company that builds the board that will house the CPU, and all the companies that build the components that are added to that board. Each of them could have some backdoor, either at the hardware level or the software level.
Then there's the company that brings it all together in a device. (And all the companies that supply third-party software to them).

Then there's any point in the chain where any of the parts is sent to another place, where someone could intercept the hardware and tweak it. (This is also true to some extent for the software)

Oh, and the backdoor might come from anyone. It can be designed purposefully by the company (under orders or not), but it can also be an employee who wants to be able to access his girlfriend's device, or who was asked to do it by a criminal organization or a nation-state. It can be "hacked in" by someone from a nation-state or criminal organization who got access to the specification files or to the source code of some of the software...

But let's be honest, many times backdoors are unintentional: a "shortcut" someone put in during development and forgot to remove, poor security design, or just a mistake that left an exploitable hole.
 
ARM don't make CPUs. They just design the specs. I guess they could try to sneak in some backdoor-enabling designs.
Then there's the company that makes the actual CPU, which could also have a backdoor in the hardware, or in the software that comes with it (the drivers and all the lower-level software that links the hardware to the OS).
Then there's the company that builds the board that will house the CPU, and all the companies that build the components that are added to that board. Each of them could have some backdoor, either at the hardware level or the software level.
Then there's the company that brings it all together in a device. (And all the companies that supply third-party software to them).

Then there's any point in the chain where any of the parts is sent to another place, where someone could intercept the hardware and tweak it. (This is also true to some extent for the software)
Would this be as easy to implement and hide as malicious code in the IME would be? Or asked another way: If Intel was malicious and wanted a secret backdoor in our computers, would they use the IME or one of the methods you mentioned?
 
Hiding a secret backdoor is so easy that people do it all the time without realizing it. I'm not an expert on CPU backdoors (or even on backdoors in general), but I don't think Intel would need to put one in the IME. I'd even say it would be a bad idea to put a backdoor in the part that's supposed to be secure and ensure the security of the whole computer, a part that hackers are more likely to target.

As for how I'd put in a backdoor if I were Intel: once again I'm no backdoor expert, but I guess it'd depend on the purpose of the backdoor. Is it meant to target specific users, or to let me control all computers? What kind of access do I need? Etc.
 
So basically the state of security in computing is so poor that we might as well grant unverifiable binary blobs full access to our computers without any restriction or supervision at all. That's as if burglary became so easy that people stopped locking their doors because they couldn't stop the burglars anyway.
 
Security is, generally speaking, getting better. These Secure Element things, even if they come with their own set of problems, are generally a move in the right direction (especially for the "basic" end-user).
But security is an arms race between attacker and defender, and currently the attacker has the advantage. We have some pretty strong defenses, like encryption that is theoretically unbreakable, but we also have some big problems, like software bugs being common due to a culture that favors time-to-market too heavily over security. There again, though, there are moves in the right direction. I'm currently working in the medical field, and there are many restrictions and regulations that force you to take safety and security seriously.

But security is also relative. It depends on the threat you're defending against. Keeping your partner from reading your messages is different from keeping criminal organizations from stealing your credit card number and is also different from keeping the NSA from tracking you (which is also different if you're on their watchlist or not).

An exploit that gives you full access to someone's phone costs at least a million dollars. This means they exist, but they're not that common. To take the burglar analogy, it means that phone security is good enough that a burglar can't get in as long as you lock the door properly (set up a password) and don't let anyone in (don't install shady applications). But just like in real life, anyone with enough time, money and/or determination will always be able to find a way in.

However, I'd say that the devices aren't the weakest link. There are many services whose backups, full of personal data, sit in some Amazon S3 bucket that's publicly accessible because the person who set it up didn't realize it was public by default. And there are many services that will gladly sell "anonymized" advertising data, which gives you enough to attach a name to most of the accounts and pull some juicy bits of information about them.
 
Even TI burn a little software into their silicon, although its main job is as a bootloader: it's the first thing that runs, it mounts the eMMC and runs things off it, ending with starting U-Boot, where things are more configurable, for us at least. It may well be possible to check that it doesn't leave things running in a TSR manner, although if it populates the TrustZone I'm not sure how we'd be able to check that.

But if you imagine the TrustZone was set up with code to encrypt a message you pass to it, then to check that you're not compromised you could encrypt a set text using it, also encrypt the same text using pure Linux software, and compare the results. You'd have to ensure that a bad agent wasn't detecting what you were doing and subverting the test, though, which makes it a little more complicated.
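As a rough Python sketch of that test; trustzone_encrypt is hypothetical (no such call exists on the Pyra, so it's stubbed here just to make the sketch run):

    import hashlib

    KEY = b"key agreed for the test"

    def software_encrypt(msg: bytes) -> bytes:
        # Toy keystream cipher standing in for whatever real cipher both sides share.
        stream = hashlib.sha256(KEY).digest()
        return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(msg))

    def trustzone_encrypt(msg: bytes) -> bytes:
        # Stub: on real hardware this would be a call into the secure world.
        return software_encrypt(msg)

    msg = b"set text agreed in advance"
    if trustzone_encrypt(msg) != software_encrypt(msg):
        print("mismatch: the secure world isn't doing what it claims")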
 
I think it does not have IME itself, but I'm wondering if the Pyra has something similar to IME.

No IME, and that is a good point. But we have binary blobs at a low level, for example for the GPU, and we don't know whether those blobs have backdoors or bugs (intended or not) which could grant access to our computer.

Of course, cell phone SoCs are much worse: for example, they integrate the cell modem inside the SoC, running its own OS on the same chip at a privileged level, so anything can be done from it (and since it's connected to the cell network, code can be injected into it whenever they want).

In the Librem 5 they used an i.MX 8M SoC with ARM Cortex-A53 cores because it has open drivers for the GPU etc. A good point, by the way, is that the Cortex-A53 is Spectre-free :)

ARM is better than Intel, but it isn't ideal. For example, the Raspberry Pi is incredibly bad: its SoC was designed for video, and the ARM CPUs were only assistants, not the main item. When the RPi starts, a custom OS loads first on the GPU, and only then does it pass control to the ARM CPU. That custom OS is always running on the RPi, and we don't know what it can do or what flaws it may have that allow access to our system (like IME).

I suppose with RISC-V we will get the same kind of CPU without hidden parts, but only a very small share of them will be built that way, for example some from SiFive, while most of them will be closed.

SiFive have a RISC-V SoC capable of running Linux, with four 64-bit cores and a simpler 64-bit control core, all of it open (this extra core would be like "the other hidden core in IME", but all open). The problems with it: 1/ it isn't ready for the mass market yet (most Linux packages work, but there is still work to do); 2/ no GPU, so you get a SoC but need an external GPU for some uses; 3/ low in-order performance, but this can be a plus because there is no Spectre bug, while on the Pyra we get Spectre because of the Cortex-A15 (the same goes for the RPi 4, while older RPis don't suffer from Spectre).

Really, I'm waiting for a Linux-capable RISC-V RPi-like board, even if it doesn't include a GPU and its performance is only at the level of a Cortex-A53 :)
 
IME, TrustZone and the like offer a separate computing space that has very limited functionality (it mostly does encryption, signing and things like that) but is supposed to be much more secure.

...

These mechanisms weren't designed for backdoors. There's a clear need for them and they've shown themselves to be quite useful.
Sure, it's a great place to put a backdoor, but if it's not there it can be anywhere else in the supply chain

1/ It is a secure element, of course, but also a backdoor. IME has its own CPU and its own hidden OS, with access to all the computer's resources. There is no doubt it can communicate with Intel's servers whenever they want, since that is presented as part of security, for example to update a buggy OS, but by the same means they can do whatever they want, whenever they want or are told to.

In other words: it is worse than giving your house keys to someone, because IME is hidden inside your system, it can access everything, and you can do nothing to prevent it. It is like someone living inside your home, in a secret room you cannot visit, with access to your whole home without your permission, and whom you can never expel from your house. Too much to be good.

2/ That CPU+OS will have bugs and you can do nothing about them. You can't make it better or more secure, and you can't disable it. Of course, if some hacker gets access to IME, we might never know, because it is too powerful a thing to make public: they could do whatever they want with every connected PC in the world.

3/ You are paying for hardware you can't really control. That is a shame. Others have total control over your own computer even though you paid for YOUR computer.
 