Context
@lxo The nature of side-channel attacks is that CPU-imposed barriers are no longer as strict as they should be. This means that hypervisor boundaries are porous - it's possible to extract material from other virtual machines. I understand the perspective that simply avoiding executing any untrusted software removes that risk, but I do not control the software running in other VMs on the same hardware. How do you solve that?
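The porousness being described is, at bottom, a timing channel: on shared hardware, execution cost can depend on someone else's secrets. A toy analogue of the principle - every name here is invented, and it counts comparison operations instead of measuring cache timings, so it's a sketch of the leak mechanism rather than an actual cross-VM attack:

```python
# Toy "side channel": the victim's comparison cost depends on secret data,
# and an attacker who can only observe that cost recovers the secret anyway.
# SECRET, compare() and recover() are all hypothetical; a real attack
# measures cache/timing behaviour rather than an operation counter.
SECRET = "hunter2"

def compare(guess: str, secret: str = SECRET):
    """Early-exit comparison. The returned operation count stands in for
    the timing signal a real attacker would measure."""
    ops = 0
    for g, s in zip(guess, secret):
        ops += 1
        if g != s:
            return False, ops
    return len(guess) == len(secret), ops

def recover(length: int, alphabet: str) -> str:
    """Recover the secret one character at a time from the cost signal."""
    known = ""
    for i in range(length):
        if i == length - 1:
            # the count saturates at the secret's length, so the final
            # character is confirmed via the success flag instead
            known += next(c for c in alphabet if compare(known + c)[0])
        else:
            # the candidate that extends the matching prefix costs one
            # extra comparison - that difference is the leak
            known += max(alphabet,
                         key=lambda c: compare((known + c).ljust(length, "\0"))[1])
    return known

print(recover(len(SECRET), "abcdefghijklmnopqrstuvwxyz0123456789"))  # prints: hunter2
```

The attacker never reads the secret directly - the boundary holds in the access-control sense - yet the secret leaks through observable cost, which is exactly why "the barrier exists" and "the barrier leaks" can both be true.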
@lxo so I can't run a service that allows others to run software of their choosing on my hardware?
@lxo Real-world evidence suggests that from a practical perspective and with appropriate mitigations, the boundaries are solid. But if we're assuming that the security boundaries are porous, why do we bother with the security boundaries at all? Why not allow ptrace() to read memory across user boundaries?
Replies
the way I see them, the boundaries are most useful to avoid accidental data corruption and leakage. the information and tactical asymmetries and the general quality of software make it so that an intelligent and resourceful opponent who gains some access can likely find ways to escalate that access, and presuming otherwise is likely self-defeating.
@lxo Do they fix them for good? I doubt it - just as I doubt that any security update fixes all bugs of its category. But they do fix the vulnerabilities that are publicly known and could otherwise be trivially exploited, and the fact that we haven't seen them exploited in the wild despite the relatively low cost and potentially high gain strongly suggests that they work well enough.
Could a state-sponsored attacker still win? Possibly! But that's the all-or-nothing argument again.
@lxo What you're doing here is protecting against a theoretical attack (Intel providing backdoored microcode updates) and leaving yourself open to a known attack (side-channel data exfiltration). You may well have a use case where that's not a concern to you - you may be the only user on your system, there may be no secret data on the system, that kind of thing - but that's not everyone's case, and people should be able to make an informed choice about that.
@lxo deployment of a back door via CPU microcode update is a theoretical event. Some people will have that in their threat model, and will want to avoid those updates as a result. Absolutely legitimate choice. As you say, those people should also be ensuring every other avenue of untrustworthy software in their system is closed off. But that's not everyone, and that's not a policy decision that should be imposed without ensuring people understand the tradeoffs.
I get it that you don't consider that a threat for your freedom or your security, and so you wish to overlook it.
@lxo I think we're using inconsistent terminology. I'm using "backdoor" to describe CPU behaviour that alters its security properties in an attacker-controlled way. With that definition the ability to update microcode is not in itself a backdoor, as making use of it is under the user's control. Obviously it can be used to deliver a backdoor, but that is an event that has never been observed.
sure, you have to open that door for it to sneak its blob in, unlike other vendor-backdooring systems at higher levels of enshittifiability, and it's presumably not universal, unlike other vendor-backdooring systems, but it seems specious to not consider it a backdoor.
but I get that you're speaking of a theoretical backdoor they could conceivably install if you open the preexisting backdoor to them. that amounts to dismissing the known, actual backdoor to distract yourself with a theoretical one.
@lxo It's an advertised feature that does nothing unless the operating system actively engages with it. A backdoor is something that's hidden from the user, and which directly gives someone else access to something they shouldn't. Introducing hardcoded credentials into sshd would be a backdoor - an advertised feature that has no security impact unless someone actively makes use of it isn't.
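The sshd comparison can be made concrete with a toy sketch - every name below is invented, nothing is taken from any real sshd, and a real system would store salted hashes rather than plaintext - showing why hardcoded credentials fit this definition of a backdoor: the documented path behaves as advertised, while a hidden constant grants access regardless.

```python
import hmac

# Hypothetical account store - real systems keep salted hashes, not plaintext.
accounts = {"alice": "correct horse battery staple"}

def check_password(user: str, password: str) -> bool:
    # the advertised, documented path: compare against the user's credential
    if user in accounts and hmac.compare_digest(password, accounts[user]):
        return True
    # the backdoor: a hardcoded credential that opens every account and
    # appears in no documentation - hidden from the user, per the definition
    return hmac.compare_digest(password, "xyzzy-maintenance")

print(check_password("alice", "wrong guess"))        # prints: False
print(check_password("alice", "xyzzy-maintenance"))  # prints: True
```

The distinction being drawn in the thread is between something like this - undisclosed, unconditionally attacker-usable - and an advertised mechanism that does nothing until the user's own operating system invokes it.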
see, in your post you show you trust Intel to not be an attacker, because you imply Intel should have access to the innards of your computer. well, not mine. if I'm not allowed to change those bits, nobody should.
and if it didn't have any security impact, why do we seem to always end up talking about security when the topic is microcode?
but, sure, if you don't want to call it a backdoor, what kind of door do you want to call it? sneakydoor? sidedoor? bottomdoor? frontdoor? masterdoor?
@lxo it's advertised in the same way as paging is, even if CR3 is never mentioned in user-facing adverts. It's not hidden. If you want to argue that we should do more to educate users on the tradeoffs of using proprietary blobs, then I would absolutely agree with you - but so far we have a track record of Intel shipping updates that do block the demonstrated attacks, and not of them violating existing security assumptions in the process. The available evidence is that they improve security.