Message-ID: <X/dEQRZpSb+oQloX@google.com>
Date: Thu, 7 Jan 2021 09:26:25 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Ashish Kalra <ashish.kalra@....com>
Cc: Steve Rutherford <srutherford@...gle.com>,
"Dr. David Alan Gilbert" <dgilbert@...hat.com>,
"Singh, Brijesh" <brijesh.singh@....com>,
Paolo Bonzini <pbonzini@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, Joerg Roedel <joro@...tes.org>,
Borislav Petkov <bp@...e.de>,
"Lendacky, Thomas" <Thomas.Lendacky@....com>,
X86 ML <x86@...nel.org>, KVM list <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
"dovmurik@...ux.vnet.ibm.com" <dovmurik@...ux.vnet.ibm.com>,
"tobin@....com" <tobin@....com>,
"jejb@...ux.ibm.com" <jejb@...ux.ibm.com>,
"frankeh@...ibm.com" <frankeh@...ibm.com>, jon.grimm@....com
Subject: Re: [PATCH v2 1/9] KVM: x86: Add AMD SEV specific Hypercall3
On Thu, Jan 07, 2021, Ashish Kalra wrote:
> Hello Steve,
>
> On Wed, Jan 06, 2021 at 05:01:33PM -0800, Steve Rutherford wrote:
> > Avoiding an rbtree for such a small (but unstable) list seems correct.
> >
> > For the unencrypted region list strategy, the only questions that I
> > have are fairly secondary.
> > - How should the kernel upper bound the size of the list in the face
> > of malicious guests, but still support large guests? (Something
> > similar to the size provided in the bitmap API would work).
>
> I am thinking of another scenario, where a malicious guest can make
> infinite/repetitive hypercalls and mount a DoS attack on the host.
>
> But this is probably a more generic issue; it can be done by any guest
> and under any hypervisor. I don't know what kind of mitigations exist
> for such a scenario.
>
> Potentially, the guest memory donation model can handle such an attack,
> because in this model the hypervisor expects only one hypercall; on any
> repetitive hypercalls the hypervisor could disable the guest?
KVM doesn't need to explicitly bound its tracking structures; it just needs to
use GFP_KERNEL_ACCOUNT when allocating kernel memory for the structures so that
the memory will be accounted to the task/process/VM. Shadow MMU pages are the
only exception that comes to mind; they're still accounted properly, but KVM
also explicitly limits them for a variety of reasons.
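
E.g. a rough, untested sketch of what the allocation side could look like
(struct/field names such as "enc_region" and "unenc_regions" are made up
purely for illustration, and the usual includes and locking are elided):

	struct enc_region {
		struct list_head list;
		gfn_t start;
		gfn_t end;
	};

	static int svm_track_unencrypted_region(struct kvm *kvm,
						gfn_t start, gfn_t end)
	{
		struct enc_region *region;

		/* Accounted to the VM's task/memcg, so existing limits apply. */
		region = kzalloc(sizeof(*region), GFP_KERNEL_ACCOUNT);
		if (!region)
			return -ENOMEM;

		region->start = start;
		region->end = end;

		/* "unenc_regions" would be a hypothetical list_head in kvm_svm. */
		list_add_tail(&region->list, &to_kvm_svm(kvm)->unenc_regions);
		return 0;
	}
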
The size of the list will naturally be bounded by the size of the guest, and
assuming KVM proactively merges adjacent regions, that upper bound is probably
reasonably low compared to other allocations, e.g. the aforementioned MMU pages.
And, using a list means a malicious guest will get automatically throttled as
the latency of walking the list (to merge/delete existing entries) will increase
with the size of the list.
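
Something along these lines for the merge (again untested, hypothetical names,
no locking shown); the linear walk over existing entries is also what naturally
throttles a guest that spams the hypercall:

	/*
	 * Insert a new unencrypted region, merging it with any existing
	 * regions that overlap or abut it so the list stays as small as
	 * possible.  GFNs are treated as an inclusive [start, end] range.
	 */
	static void insert_unencrypted_region(struct list_head *head,
					      struct enc_region *range)
	{
		struct enc_region *cur, *tmp;

		list_for_each_entry_safe(cur, tmp, head, list) {
			/* Skip entries that neither overlap nor abut the new range. */
			if (cur->end + 1 < range->start || cur->start > range->end + 1)
				continue;

			/* Absorb the existing entry into the new range. */
			range->start = min(range->start, cur->start);
			range->end = max(range->end, cur->end);
			list_del(&cur->list);
			kfree(cur);
		}

		list_add_tail(&range->list, head);
	}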