Message-ID: <ea97f1c4c03bd5d227f2aeed18163bf11490812c.camel@redhat.com>
Date: Tue, 29 Sep 2020 17:01:14 -0400
From: Qian Cai <cai@...hat.com>
To: "Kaneda, Erik" <erik.kaneda@...el.com>,
"Moore, Robert" <robert.moore@...el.com>,
"Wysocki, Rafael J" <rafael.j.wysocki@...el.com>
Cc: Len Brown <lenb@...nel.org>,
"linux-acpi@...r.kernel.org" <linux-acpi@...r.kernel.org>,
"devel@...ica.org" <devel@...ica.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Paul E. McKenney" <paulmck@...nel.org>
Subject: Re: [PATCH] ACPICA: Fix a soft-lockup on large systems
On Tue, 2020-09-29 at 19:55 +0000, Kaneda, Erik wrote:
> This is ACPICA code and cond_resched() is specific to Linux, so we cannot
> accept this in its current form.
Do you have any suggestions?
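(Purely as a sketch of one OS-agnostic direction, not an existing interface:
ACPICA already routes host-specific services through acpi_os_* OSL calls, so
a hypothetical acpi_os_yield() hook could let the Linux OSL map it to
cond_resched() while other hosts stub it out. Everything below is made up to
show the shape; acpi_os_yield() is NOT a real ACPICA entry point.)

#include <stdio.h>

/* Declared by portable ACPICA-style code, defined by the host OSL. */
void acpi_os_yield(void);

/* Portable traversal code calls the hook once per visited object. */
static void visit_object(int i)
{
        printf("object %d\n", i);
        acpi_os_yield();
}

/* Host-specific OSL: on Linux this would call cond_resched(); here it
 * is a stand-in no-op so the sketch builds anywhere. */
void acpi_os_yield(void)
{
}

int main(void)
{
        for (int i = 0; i < 3; i++)
                visit_object(i);
        return 0;
}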
>
> The execution time of acpi_ns_walk_namespace() scales with the size of the
> ACPI namespace, which is determined by the firmware.
> If the actual culprit were traversing the ACPI namespace, you would have seen
> a soft lockup in acpi_load_tables(), the function that populates the ACPI
> namespace. Your stack trace shows that Linux was able to get past that point.
> Therefore, I'm led to think that the actual problem is the combination of
> walking the namespace + the handler invoked.
>
> What happens if you add the cond_resched in acpi_bus_check_add?
This also works fine.
--- a/drivers/acpi/scan.c
+++ b/drivers/acpi/scan.c
@@ -1881,6 +1881,7 @@ static acpi_status acpi_bus_check_add(acpi_handle handle, u32 lvl_not_used,
 		return AE_OK;
 	}
 
+	cond_resched();
 	acpi_add_single_object(&device, handle, type, sta);
 	if (!device)
 		return AE_CTRL_DEPTH;
>
> Out of curiosity, does calling cond_resched guarantee that the acpi_init call
> will finish before other kernel components that depend on ACPI are
> initialized?
I don't really see how it could break the dependencies. acpi_init() still runs
to completion in the same task; cond_resched() just gives other runnable tasks
a chance to execute so the CPU is not stalled, and execution resumes where it
left off.
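To illustrate the point (plain C; cond_resched() is stubbed, and
process_one()/NR_ITEMS are made up): the loop still runs to completion before
the function returns, so anything ordered after the init call is unaffected;
the yield only bounds how long the CPU runs without scheduling.

#include <stdio.h>

#define NR_ITEMS 8

static void cond_resched_stub(void)
{
        /* In the kernel this may schedule another task, then resume here. */
}

static void process_one(int i)
{
        printf("item %d done\n", i);
}

static void long_running_init(void)
{
        for (int i = 0; i < NR_ITEMS; i++) {
                process_one(i);
                cond_resched_stub(); /* bounds time between scheduling points */
        }
        /* All items are finished here, whether or not we ever yielded. */
}

int main(void)
{
        long_running_init();
        printf("everything after this still sees init complete\n");
        return 0;
}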