Message-ID: <20230201105305-mutt-send-email-mst@kernel.org>
Date: Wed, 1 Feb 2023 11:02:09 -0500
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Christophe de Dinechin Dupont de Dinechin <cdupontd@...hat.com>
Cc: Christophe de Dinechin <dinechin@...hat.com>,
James Bottomley <jejb@...ux.ibm.com>,
"Reshetova, Elena" <elena.reshetova@...el.com>,
Leon Romanovsky <leon@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Shishkin, Alexander" <alexander.shishkin@...el.com>,
"Shutemov, Kirill" <kirill.shutemov@...el.com>,
"Kuppuswamy, Sathyanarayanan" <sathyanarayanan.kuppuswamy@...el.com>,
"Kleen, Andi" <andi.kleen@...el.com>,
"Hansen, Dave" <dave.hansen@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
"Wunner, Lukas" <lukas.wunner@...el.com>,
Mika Westerberg <mika.westerberg@...ux.intel.com>,
Jason Wang <jasowang@...hat.com>,
"Poimboe, Josh" <jpoimboe@...hat.com>,
"aarcange@...hat.com" <aarcange@...hat.com>,
Cfir Cohen <cfir@...gle.com>, Marc Orr <marcorr@...gle.com>,
"jbachmann@...gle.com" <jbachmann@...gle.com>,
"pgonda@...gle.com" <pgonda@...gle.com>,
"keescook@...omium.org" <keescook@...omium.org>,
James Morris <jmorris@...ei.org>,
Michael Kelley <mikelley@...rosoft.com>,
"Lange, Jon" <jlange@...rosoft.com>,
"linux-coco@...ts.linux.dev" <linux-coco@...ts.linux.dev>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Kernel Hardening <kernel-hardening@...ts.openwall.com>
Subject: Re: Linux guest kernel threat model for Confidential Computing
On Wed, Feb 01, 2023 at 02:15:10PM +0100, Christophe de Dinechin Dupont de Dinechin wrote:
>
>
> > On 1 Feb 2023, at 12:01, Michael S. Tsirkin <mst@...hat.com> wrote:
> >
> > On Wed, Feb 01, 2023 at 11:52:27AM +0100, Christophe de Dinechin Dupont de Dinechin wrote:
> >>
> >>
> >>> On 31 Jan 2023, at 18:39, Michael S. Tsirkin <mst@...hat.com> wrote:
> >>>
> >>> On Tue, Jan 31, 2023 at 04:14:29PM +0100, Christophe de Dinechin wrote:
> >>>> Finally, security considerations that apply irrespective of whether the
> >>>> platform is confidential or not are also outside of the scope of this
> >>>> document. This includes topics ranging from timing attacks to social
> >>>> engineering.
> >>>
> >>> Why are timing attacks by hypervisor on the guest out of scope?
> >>
> >> Good point.
> >>
> >> I was thinking that mitigation against timing attacks is the same
> >> irrespective of the source of the attack. However, because the HV
> >> controls CPU time allocation, there are presumably attacks that
> >> are made much easier through the HV. Those should be listed.
> >
> > Not just that, also because it can and does emulate some devices.
> > For example, are disk encryption systems protected against timing of
> > disk accesses?
> > This is why some people keep saying "forget about emulated devices, require
> > passthrough, include devices in the trust zone".
> >
> >>>
> >>>> </doc>
> >>>>
> >>>> Feel free to comment and reword at will ;-)
> >>>>
> >>>>
> >>>> 3/ PCI-as-a-threat: where does that come from
> >>>>
> >>>> Isn't there a fundamental difference, from a threat model perspective,
> >>>> between a bad actor, say a rogue sysadmin dumping the guest memory (which CC
> >>>> should defeat) and compromised software feeding us bad data? I think there
> >>>> is: at least inside the TCB, we can detect bad software using measurements,
> >>>> and prevent it from running using attestation. In other words, we first
> >>>> check what we will run, then we run it. The security there is that we know
> >>>> what we are running. The trust we have in the software is from testing,
> >>>> reviewing or using it.
> >>>>
> >>>> This relies on a key aspect provided by TDX and SEV, which is that the
> >>>> software being measured is largely tamper-resistant thanks to memory
> >>>> encryption. In other words, after you have measured your guest software
> >>>> stack, the host or hypervisor cannot willy-nilly change it.
> >>>>
> >>>> So this brings me to the next question: is there any way we could offer the
> >>>> same kind of service for KVM and qemu? The measurement part seems relatively
> >>>> easy. The tamper-resistant part, on the other hand, seems quite difficult to
> >>>> me. But maybe someone else will have a brilliant idea?
> >>>>
> >>>> So I'm asking the question, because if you could somehow prove to the guest
> >>>> not only that it's running the right guest stack (as we can do today) but
> >>>> also a known host/KVM/hypervisor stack, we would also switch the potential
> >>>> issues with PCI, MSRs and the like from "malicious" to merely "bogus", and
> >>>> this is something which is evidently easier to deal with.
> >>>
> >>> Agree absolutely that's much easier.
> >>>
> >>>> I briefly discussed this with James, and he pointed out two interesting
> >>>> aspects of that question:
> >>>>
> >>>> 1/ In the CC world, we don't really care about *virtual* PCI devices. We
> >>>> care about either virtio devices, or physical ones being passed through
> >>>> to the guest. Let's assume physical ones can be trusted, see above.
> >>>> That leaves virtio devices. How much damage can a malicious virtio device
> >>>> do to the guest kernel, and can this lead to secrets being leaked?
> >>>>
> >>>> 2/ He was not as negative as I anticipated on the possibility of somehow
> >>>> being able to prevent tampering of the guest. One example he mentioned is
> >>>> a research paper [1] about running the hypervisor itself inside an
> >>>> "outer" TCB, using VMPLs on AMD. Maybe something similar can be achieved
> >>>> with TDX using secure enclaves or some other mechanism?
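On 1/ above: how much damage a malicious virtio device can do mostly comes
down to whether the driver treats everything it reads from the device as
untrusted. A minimal sketch of the kind of check that implies (names and
sizes are made up, this is not real driver code):

    #include <stdint.h>
    #include <string.h>

    #define RESP_BUF_SIZE 256

    /* dev_len comes from the device (i.e. ultimately from the host/VMM),
     * so validate it before using it for a copy; otherwise a malicious
     * device gets an out-of-bounds write inside the guest. */
    int copy_device_response(uint8_t dst[RESP_BUF_SIZE],
                             const uint8_t *dev_data, uint32_t dev_len)
    {
            if (dev_len > RESP_BUF_SIZE)
                    return -1;      /* reject rather than overflow */
            memcpy(dst, dev_data, dev_len);
            return 0;
    }

The point is not this particular check but that every length, index and
state transition supplied by the device needs the same treatment.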
> >>>
> >>> Or even just secureboot based root of trust?
> >>
> >> You mean host secureboot? Or guest?
> >>
> >> If it’s host, then the problem is detecting malicious tampering with
> >> host code (whether it’s kernel or hypervisor).
> >
> > Host. Lots of existing systems do this. As an extreme example, boot from
> > a RO disk and limit which packages are allowed.
>
> Is that provable to the guest?
>
> Consider a cloud provider doing that: how do they prove to their guest:
>
> a) What firmware, kernel and kvm they run
>
> b) That what they booted cannot be maliciously modified, e.g. by a rogue
> device driver installed by a rogue sysadmin
>
> My understanding is that SecureBoot is only intended to prevent non-verified
> operating systems from booting. So the proof is given to the cloud provider,
> and the proof is that the system boots successfully.
I think I should have said measured boot not secure boot.
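The pattern measured boot gives you is "measure first, compare against a
reference, only then trust". A minimal userspace sketch of just that
comparison (illustrative names, one-shot OpenSSL SHA256; in a real flow the
digest is extended into a TPM PCR or TDX measurement register and is then
checked via attestation rather than compared locally):

    #include <openssl/sha.h>
    #include <string.h>

    /* Measure an image and compare it against a known-good digest. */
    int image_matches_reference(const unsigned char *image, size_t len,
                const unsigned char expected[SHA256_DIGEST_LENGTH])
    {
            unsigned char digest[SHA256_DIGEST_LENGTH];

            SHA256(image, len, digest);                     /* measure  */
            return memcmp(digest, expected, sizeof(digest)) == 0;  /* compare */
    }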
>
> After that, I think all bets are off. SecureBoot does little AFAICT
> to prevent malicious modifications of the running system by someone with
> root access, including deliberately loading a malicious kvm-zilog.ko
So disable module loading then or don't allow root access?
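For the module loading part there is already a one-way knob,
kernel.modules_disabled. A minimal sketch, equivalent to
"sysctl -w kernel.modules_disabled=1"; once set it cannot be cleared again
until reboot (helper name is mine):

    #include <fcntl.h>
    #include <unistd.h>

    /* Latch the one-way modules_disabled sysctl: after this, no further
     * modules can be loaded until the next reboot. */
    int disable_module_loading(void)
    {
            int fd = open("/proc/sys/kernel/modules_disabled", O_WRONLY);
            if (fd < 0)
                    return -1;
            if (write(fd, "1\n", 2) != 2) {
                    close(fd);
                    return -1;
            }
            close(fd);
            return 0;
    }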
>
> It does not mean it cannot be done, just that I don’t think we
> have the tools at the moment.
Phones, chromebooks do this all the time ...
> >
> >> If it’s guest, at the moment at least, the measurements do not extend
> >> beyond the TCB.
> >>
> >>>
> >>> --
> >>> MST
>