Message-ID: <61596c4c-3849-99d5-b0aa-6ad6b415dff9@intel.com>
Date: Mon, 19 Apr 2021 10:58:44 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Brijesh Singh <brijesh.singh@....com>,
Borislav Petkov <bp@...en8.de>
Cc: linux-kernel@...r.kernel.org, x86@...nel.org, kvm@...r.kernel.org,
linux-crypto@...r.kernel.org, ak@...ux.intel.com,
herbert@...dor.apana.org.au, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Joerg Roedel <jroedel@...e.de>,
"H. Peter Anvin" <hpa@...or.com>, Tony Luck <tony.luck@...el.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Tom Lendacky <thomas.lendacky@....com>,
David Rientjes <rientjes@...gle.com>,
Sean Christopherson <seanjc@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [RFC Part2 PATCH 04/30] x86/mm: split the physmap when adding the
page in RMP table
On 4/19/21 10:46 AM, Brijesh Singh wrote:
> - a guest wants to make gpa 0x1000 a shared page. To support this, we
> need to psmash the large RMP entry into 512 4K entries. The psmash
> instruction breaks the large RMP entry into 512 4K entries without
> affecting the previous validation. Now we need to force the host to
> use the 4K page level instead of the 2MB.
>
> To my understanding, the Linux kernel fault handler does not build the
> page tables on demand for kernel addresses. All kernel addresses are
> pre-mapped at boot. Currently, I am proactively splitting the physmap
> to avoid running into a situation where the x86 page level is greater
> than the RMP page level.
In other words, if the host maps guest memory with 2M mappings, the
guest can induce page faults in the host. The only way the host can
avoid this is to map everything with 4k mappings.
If the host does not avoid this, it can end up taking page faults on
accesses to its own kernel data structures. Imagine a kernel stack page
that ends up in the same 2M mapping as a guest page. I *think* the next
write to the kernel stack would end up double-faulting.