Message-ID: <20231016205419.c3sfriemyaiczxie@amd.com>
Date: Mon, 16 Oct 2023 15:54:19 -0500
From: Michael Roth <michael.roth@....com>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
CC: Borislav Petkov <bp@...en8.de>, Andy Lutomirski <luto@...nel.org>,
"Dave Hansen" <dave.hansen@...el.com>,
Sean Christopherson <seanjc@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Joerg Roedel <jroedel@...e.de>,
Ard Biesheuvel <ardb@...nel.org>,
Andi Kleen <ak@...ux.intel.com>,
"Kuppuswamy Sathyanarayanan"
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
David Rientjes <rientjes@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Tom Lendacky <thomas.lendacky@....com>,
Thomas Gleixner <tglx@...utronix.de>,
"Peter Zijlstra" <peterz@...radead.org>,
Paolo Bonzini <pbonzini@...hat.com>,
"Ingo Molnar" <mingo@...hat.com>,
Dario Faggioli <dfaggioli@...e.com>,
Mike Rapoport <rppt@...nel.org>,
David Hildenbrand <david@...hat.com>,
Mel Gorman <mgorman@...hsingularity.net>,
<marcelo.cerri@...onical.com>, <tim.gardner@...onical.com>,
<philip.cox@...onical.com>, <aarcange@...hat.com>,
<peterx@...hat.com>, <x86@...nel.org>, <linux-mm@...ck.org>,
<linux-coco@...ts.linux.dev>, <linux-efi@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <stable@...nel.org>,
Nikolay Borisov <nik.borisov@...e.com>
Subject: Re: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel
memory acceptance
On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
> Michael reported soft lockups on a system that has unaccepted memory.
> This occurs when a user attempts to allocate and accept memory on
> multiple CPUs simultaneously.
>
> The root cause of the issue is that memory acceptance is serialized with
> a spinlock, allowing only one CPU to accept memory at a time. The other
> CPUs spin and wait for their turn, leading to starvation and soft lockup
> reports.
>
> To address this, the code has been modified to release the spinlock
> while accepting memory. This allows for parallel memory acceptance on
> multiple CPUs.
>
> A newly introduced "accepting_list" keeps track of which memory is
> currently being accepted. This is necessary to prevent parallel
> acceptance of the same memory block. If a collision occurs, the lock is
> released and the process is retried.
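
For anyone skimming the thread, the lock-drop-and-retry scheme described
above boils down to roughly the pattern below. This is only a sketch, not
the code from the patch: the names (accept_range_sketch,
accepting_list_sketch, do_accept_slow(), demo_lock) are placeholders, and
the real implementation additionally has to consult the unaccepted-memory
bitmap under the driver's existing lock.

#include <linux/spinlock.h>
#include <linux/list.h>
#include <asm/processor.h>	/* cpu_relax() */

struct accept_range_sketch {
	struct list_head list;
	unsigned long start;
	unsigned long end;
};

static LIST_HEAD(accepting_list_sketch);
static DEFINE_SPINLOCK(demo_lock);

/* Placeholder for the slow firmware/hypervisor "accept" operation. */
static void do_accept_slow(unsigned long start, unsigned long end) { }

static void accept_memory_sketch(unsigned long start, unsigned long end)
{
	struct accept_range_sketch range = { .start = start, .end = end };
	struct accept_range_sketch *entry;

retry:
	spin_lock(&demo_lock);

	/*
	 * If another CPU is already accepting an overlapping range, back
	 * off, let it make progress, and try again.
	 */
	list_for_each_entry(entry, &accepting_list_sketch, list) {
		if (entry->end <= range.start || entry->start >= range.end)
			continue;

		spin_unlock(&demo_lock);
		cpu_relax();	/* may be in atomic context; cannot sleep */
		goto retry;
	}

	/*
	 * Claim the range, then do the slow accept without holding the
	 * lock, so other CPUs can accept different ranges in parallel.
	 */
	list_add(&range.list, &accepting_list_sketch);
	spin_unlock(&demo_lock);

	do_accept_slow(start, end);

	spin_lock(&demo_lock);
	list_del(&range.list);
	spin_unlock(&demo_lock);
}

So instead of every CPU serializing behind one lock for the whole accept
operation, the lock is only held long enough to check and update the list.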
>
> Such collisions should rarely occur. The main path for memory acceptance
> is the page allocator, which accepts memory in MAX_ORDER chunks. As long
> as MAX_ORDER is equal to or larger than the unit_size, collisions will
> never occur because the caller fully owns the memory block being
> accepted.
>
> Aside from the page allocator, only memblock and deferred_free_range()
> accept memory, but this only happens during boot.
>
> The code has been tested with unit_size == 128MiB to trigger collisions
> and validate the retry codepath.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> Reported-by: Michael Roth <michael.roth@....com>

Tested-by: Michael Roth <michael.roth@....com>

This seems to improve things pretty dramatically for me. Previously I
saw soft lockups with 16 vCPUs and 16 processes faulting into memory,
and now I can do 128+ vCPUs/processes.

I can still trigger soft lockups on occasion if the number of processes
faulting into memory exceeds the number of vCPUs available to the guest,
but with a 32 vCPU guest even something like this:

  stress --vm 128 --vm-bytes 2G --vm-keep --cpu 255

still seems to avoid the soft lockup messages. So that's probably well
into "potential future optimization" territory, and this patch fixes the
more immediate issues.

Thanks!

-Mike

> Fixes: 2053bc57f367 ("efi: Add unaccepted memory support")
> Cc: <stable@...nel.org>
> Reviewed-by: Nikolay Borisov <nik.borisov@...e.com>
> ---
>
> v2:
> - Fix deadlock (Vlastimil);
> - Fix comments (Vlastimil);
> - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
> from atomic context;
>
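
Re the last changelog item, in case anyone wonders about the
cond_resched() -> cpu_relax() swap: accept_memory() can be reached from
atomic context, and cond_resched() may schedule, which is not allowed
there, while cpu_relax() is just a busy-wait hint to the CPU. Rough
illustration below -- the caller and lock are made up, only the
constraint is the point:

#include <linux/mm.h>		/* accept_memory() */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(consumer_lock);

static void consume_unaccepted_range(phys_addr_t start, phys_addr_t end)
{
	spin_lock(&consumer_lock);	/* atomic context from here on */

	/*
	 * If this range collides with one being accepted on another CPU,
	 * accept_memory() spins in its retry loop. Sleeping (as
	 * cond_resched() may do) would be a bug here; busy-waiting with
	 * cpu_relax() is safe.
	 */
	accept_memory(start, end);

	spin_unlock(&consumer_lock);
}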