Message-ID: <20200423113027.GL30814@suse.de>
Date: Thu, 23 Apr 2020 13:30:27 +0200
From: Joerg Roedel <jroedel@...e.de>
To: Bo Gan <ganb@...are.com>
Cc: Mike Stunes <mstunes@...are.com>, Joerg Roedel <joro@...tes.org>,
"x86@...nel.org" <x86@...nel.org>, "hpa@...or.com" <hpa@...or.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Hellstrom <thellstrom@...are.com>,
Jiri Slaby <jslaby@...e.cz>,
Dan Williams <dan.j.williams@...el.com>,
Tom Lendacky <thomas.lendacky@....com>,
Juergen Gross <jgross@...e.com>,
Kees Cook <keescook@...omium.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"virtualization@...ts.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>
Subject: Re: Re: [PATCH 40/70] x86/sev-es: Setup per-cpu GHCBs for the
runtime handler

On Wed, Apr 22, 2020 at 06:33:13PM -0700, Bo Gan wrote:
> On 4/15/20 8:53 AM, Joerg Roedel wrote:
> > Hi Mike,
> >
> > On Tue, Apr 14, 2020 at 07:03:44PM +0000, Mike Stunes wrote:
> > > set_memory_decrypted needs to check the return value. I see it
> > > consistently return ENOMEM. I've traced that back to split_large_page
> > > in arch/x86/mm/pat/set_memory.c.
> >
> > I agree that the return code needs to be checked. But I wonder why this
> > happens. The split_large_page() function returns -ENOMEM when
> > alloc_pages() fails. Do you boot the guest with minimal RAM assigned?
> >
> > Regards,
> >
> > Joerg
> >
>
> I just want to add some context around this. The call path that leads
> to the failure is as follows:
>
> __alloc_pages_slowpath
> __alloc_pages_nodemask
> alloc_pages_current
> alloc_pages
> split_large_page
> __change_page_attr
> __change_page_attr_set_clr
> __set_memory_enc_dec
> set_memory_decrypted
> sev_es_init_ghcbs
> trap_init -> before mm_init (in init/main.c)
> start_kernel
> x86_64_start_reservations
> x86_64_start_kernel
> secondary_startup_64
>
> At this time, mem_init hasn't been called yet (which would be called by
> mm_init). Thus, the free pages are still owned by memblock. It's in mem_init
> (x86/mm/init_64.c) that memblock_free_all gets called and free pages are
> released.
>
> During testing, I've also noticed that debug_pagealloc=1 will make the issue
> disappear. That's because with debug_pagealloc=1, probe_page_size_mask in
> x86/mm/init.c will not allow large pages (2M/1G). Therefore, no
> split_large_page would happen. Similarly, if the CPU doesn't have
> X86_FEATURE_PSE, there won't be large pages either.
>
> Any thoughts? Maybe split_large_page should get pages from memblock at early
> boot?

Thanks for your analysis. I fixed it (verified by Mike) by using
early_set_memory_decrypted() instead of set_memory_decrypted(). I still
wonder why I didn't see that issue on my kernel. It has
DEBUG_PAGEALLOC=y set, but that option is not enabled at runtime by
default and I also didn't pass the command-line parameter to enable it.
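
For reference, here is a rough sketch of what the early variant looks
like in use. This is not the actual patch: the function name
ghcb_map_decrypted_example() and its ghcb_page parameter are made up
for illustration. The point is that early_set_memory_decrypted() takes
a virtual address and a size in bytes, works before the buddy
allocator is up, and returns an error code that should be checked:

/*
 * Rough sketch only -- not the actual patch. The function name and the
 * ghcb_page argument are invented for illustration.
 */
#include <linux/init.h>         /* __init */
#include <linux/kernel.h>       /* panic() */
#include <linux/string.h>       /* memset() */
#include <linux/mm.h>           /* PAGE_SIZE */
#include <asm/mem_encrypt.h>    /* early_set_memory_decrypted() */

static void __init ghcb_map_decrypted_example(void *ghcb_page)
{
	/* Share the GHCB page with the hypervisor (clear the C-bit). */
	int ret = early_set_memory_decrypted((unsigned long)ghcb_page,
					     PAGE_SIZE);

	if (ret)
		panic("Cannot map GHCB page unencrypted: %d\n", ret);

	/* The page is now shared; wipe any stale ciphertext. */
	memset(ghcb_page, 0, PAGE_SIZE);
}

set_memory_decrypted(), by contrast, goes through
__change_page_attr_set_clr() and may need alloc_pages() in
split_large_page() to split a 2M/1G mapping, which is exactly the
allocation that fails before mem_init().
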
Regards,
Joerg