Message-ID: <m2cz6uk3h9.fsf@redhat.com>
Date: Tue, 31 Jan 2023 17:52:57 +0100
From: Christophe de Dinechin <dinechin@...hat.com>
To: "Reshetova, Elena" <elena.reshetova@...el.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Shishkin, Alexander" <alexander.shishkin@...el.com>,
"Shutemov, Kirill" <kirill.shutemov@...el.com>,
"Kuppuswamy, Sathyanarayanan" <sathyanarayanan.kuppuswamy@...el.com>,
"Kleen, Andi" <andi.kleen@...el.com>,
"Hansen, Dave" <dave.hansen@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
"Wunner, Lukas" <lukas.wunner@...el.com>,
Mika Westerberg <mika.westerberg@...ux.intel.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
"Poimboe, Josh" <jpoimboe@...hat.com>,
"aarcange@...hat.com" <aarcange@...hat.com>,
Cfir Cohen <cfir@...gle.com>, Marc Orr <marcorr@...gle.com>,
"jbachmann@...gle.com" <jbachmann@...gle.com>,
"pgonda@...gle.com" <pgonda@...gle.com>,
"keescook@...omium.org" <keescook@...omium.org>,
James Morris <jmorris@...ei.org>,
Michael Kelley <mikelley@...rosoft.com>,
"Lange, Jon" <jlange@...rosoft.com>,
"linux-coco@...ts.linux.dev" <linux-coco@...ts.linux.dev>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Linux guest kernel threat model for Confidential Computing
On 2023-01-31 at 10:06 UTC, "Reshetova, Elena" <elena.reshetova@...el.com> wrote...
> Hi Dinechin,
Nit: My first name is actually Christophe ;-)
[snip]
>> "The implementation of the #VE handler is simple and does not require an
>> in-depth security audit or fuzzing since it is not the actual consumer of
>> the host/VMM supplied untrusted data": The assumption there seems to be that
>> the host will never be able to supply data (e.g. through a bounce buffer)
>> that it can trick the guest into executing. If that is indeed the
>> assumption, it is worth mentioning explicitly. I suspect it is a bit weak,
>> since many earlier attacks were based on executing the wrong code. Notably,
>> it is worth pointing out that I/O buffers are _not_ encrypted with the CPU
>> key (as opposed to any device key e.g. for PCI encryption) in either
>> TDX or SEV. Is there for example anything that precludes TDX or SEV from
>> executing code in the bounce buffers?
>
> This was already replied by Kirill, any code execution out of shared memory generates
> a #GP.
Apologies for my wording. Everyone interpreted "executing" as "executing
directly on the bounce buffer page", when what I meant is "consuming data
fetched from the bounce buffers as code" (not necessarily directly).
For example, in the diagram in your document, the guest kernel is a
monolithic piece. In reality, there are dynamically loaded components. In
the original SEV implementation, with pre-attestation, the measurement could
only apply before loading any DLKM (I believe, not really sure). As another
example, SEVerity (CVE-2020-12967 [1]) worked by injecting a payload
directly into the guest kernel using virtio-based network I/O. That is what
I referred to when I wrote "many earlier attacks were based on executing the
wrong code".
The fact that I/O buffers are not encrypted matters here, because it gives
the host ample latitude to observe or even corrupt all I/Os, as many others
have pointed out. Notably, disk crypto may not be designed to resist a
host that can see and possibly change the I/Os.
So let me rephrase my vague question as a few more precise ones:
1) What are the effects of semi-random kernel code injection?
If the host knows that a given bounce buffer happens to be used later to
execute some kernel code, it can start flipping bits in it to try and
trigger arbitrary code paths in the guest. My understanding is that
crypto alone (i.e. without additional layers like dm-integrity) will
happily decrypt that into a code stream containing pseudo-random
instructions, rather than error out.
So, while TDX precludes the host from writing into guest memory directly,
since the bounce buffers are shared, TDX will not prevent the host from
flipping bits there. It's then just a matter of guessing where the bits
will go, and hoping that some bits execute at guest PL0. Of course, this
can be mitigated by either only using static configs, or using
dm-verity/dm-integrity, or maybe some other mechanisms.
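To illustrate the point, here is a toy Python sketch. It uses a
SHA-256-based keystream XOR as a stand-in for any non-authenticated
cipher mode (real disk crypto like AES-XTS scrambles a whole cipher
block per flipped bit rather than a single bit, but likewise raises no
error); all names are hypothetical, not any real dm-crypt interface:

```python
import hashlib, hmac, os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. A stand-in for any
    # non-authenticated cipher -- NOT real disk crypto.
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

key, nonce = os.urandom(32), os.urandom(16)
plaintext = b"\x90\x90\x90\x90"  # imagine these are code bytes
ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))

# The host flips one bit of the ciphertext sitting in the shared buffer:
tampered = bytes([ct[0] ^ 0x01]) + ct[1:]

# Decryption raises no error; the guest just consumes different bytes.
pt2 = bytes(a ^ b for a, b in zip(tampered, keystream(key, nonce, len(tampered))))
assert pt2 != plaintext and pt2[0] == (plaintext[0] ^ 0x01)

# An integrity tag (the role dm-integrity plays) does catch the change:
tag = hmac.new(key, ct, hashlib.sha256).digest()
assert not hmac.compare_digest(tag, hmac.new(key, tampered, hashlib.sha256).digest())
```

The point of the sketch is only that confidentiality without
authentication is malleable: the flip goes through silently unless
something checks an integrity tag.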
Shouldn't that be part of your document? To be clear: you mention under
"Storage protection" that you use dm-crypt and dm-integrity, so I believe
*you* know, but your readers may not figure out why dm-integrity is
integral to the process, notably after you write "Users could use other
encryption schemes".
2) What are the effects of random user code injection?
It's the same as above, except that now you can target a much wider range
of input data, including shell scripts, etc. So the attack surface is
much larger.
3) What is the effect of data poisoning?
You don't necessarily need to corrupt code. Being able to corrupt a
system configuration file, for example, may well be enough.
4) Are there I/O-based replay attacks that would work pre-attestation?
My current mental model is that you load a "base" software stack into the
TCB and then measure a relevant part of it. What you measure is somewhat
implementation-dependent, but in the end, if the system is attested, you
respond to a cryptographic challenge based on what was measured, and you
then get relevant secrets, e.g. a disk decryption key, that let you make
forward progress. However, what happens if every time you boot, the host
feeds you bogus disk data just to try to steer the boot sequence along
some specific path?
I believe that the short answer is: the guest either:
a) reaches attestation, but with bad in-memory data, so it fails the
crypto exchange, and secrets are not leaked.
b) does not reach attestation, so never gets the secrets, and therefore
still fulfils the CC promise of not leaking secrets.
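That reasoning can be sketched as a toy measurement/challenge exchange
(hypothetical names, Python stdlib only; the real TDX/SEV launch
measurement and attestation protocol are of course more involved):

```python
import hashlib, hmac, os

EXPECTED = hashlib.sha256(b"known-good kernel+initrd").digest()
SECRET = os.urandom(32)  # e.g. the disk key held by the remote verifier

def measure(loaded: bytes) -> bytes:
    # Stand-in for the launch measurement of whatever was loaded.
    return hashlib.sha256(loaded).digest()

def verifier_release_secret(measurement: bytes):
    # The verifier releases the secret only for the expected measurement.
    if hmac.compare_digest(measurement, EXPECTED):
        return SECRET
    return None

# Honest boot: measurement matches, secret is released.
assert verifier_release_secret(measure(b"known-good kernel+initrd")) == SECRET

# Host feeds corrupted disk data pre-attestation: either the guest never
# reaches attestation, or it attests a wrong measurement. Either way the
# secret is withheld, so nothing leaks.
assert verifier_release_secret(measure(b"host-corrupted kernel+initrd")) is None
```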
So I personally feel this is OK, but it's worth writing up in your doc.
Back to the #VE handler: if I can find a way to inject malicious code
into my guest, then what that paragraph offers as a justification for
skipping an in-depth security audit still seems like "not exactly
defense in depth". I would just remove the sentence, and audit and fuzz
that code with the same energy as for anything else that could face bad
input.
[1]: https://www.sec.in.tum.de/i20/student-work/code-execution-attacks-against-encrypted-virtual-machines
--
Cheers,
Christophe de Dinechin (https://c3d.github.io)
Theory of Incomplete Measurements (https://c3d.github.io/TIM)