Message-ID: <35873f78-4935-492e-a9fe-ef06c1b2d0f4@amd.com>
Date: Mon, 2 Dec 2024 15:54:00 -0600
From: Mario Limonciello <mario.limonciello@....com>
To: "Rafael J. Wysocki" <rafael@...nel.org>
Cc: "Rafael J. Wysocki" <rjw@...ysocki.net>,
Linux ACPI <linux-acpi@...r.kernel.org>, LKML
<linux-kernel@...r.kernel.org>, Linux PM <linux-pm@...r.kernel.org>,
Len Brown <len.brown@...el.com>, Arjan van de Ven <arjan@...ux.intel.com>,
Pierre Gondois <pierre.gondois@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Hans de Goede <hdegoede@...hat.com>,
"Gautham R. Shenoy" <gautham.shenoy@....com>
Subject: Re: [RFC/RFT][PATCH v0.1] ACPI: OSL: Use usleep_range() in
acpi_os_sleep()
On 11/22/2024 13:27, Rafael J. Wysocki wrote:
> On Thu, Nov 21, 2024 at 11:27 PM Mario Limonciello
> <mario.limonciello@....com> wrote:
>>
>> On 11/21/2024 07:15, Rafael J. Wysocki wrote:
>>> From: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
>>>
>>> As stated by Len in [1], the extra delay added by msleep() to the
>>> sleep time value passed to it can be significant, roughly between
>>> 1.5 ms on systems with HZ = 1000 and as much as 15 ms on systems with
>>> HZ = 100, which is hardly acceptable, at least for small sleep time
>>> values.
>>>
>>> Address this by using usleep_range() in acpi_os_sleep() instead of
>>> msleep(). For short sleep times this is a no-brainer, but even for
>>> long sleeps usleep_range() should be preferred because timer wheel
>>> timers are optimized for cancellation before they expire and this
>>> particular timer is not going to be canceled.
>>>
>>> Add at least 50 us on top of the requested sleep time in case the
>>> timer can be subject to coalescing, which is consistent with what's
>>> done in user space in this context [2], but for sleeps longer than 5 ms
>>> use 1% of the requested sleep time for this purpose.
>>>
>>> The rationale here is that longer sleeps don't need that much of a timer
>>> precision as a rule and making the timer a more likely candidate for
>>> coalescing in these cases is generally desirable. It starts at 5 ms so
>>> that the delta between the requested sleep time and the effective
>>> deadline is a continuous function of the former.
>>>
>>> Link: https://lore.kernel.org/linux-pm/c7db7e804c453629c116d508558eaf46477a2d73.1731708405.git.len.brown@intel.com/ [1]
>>> Link: https://lore.kernel.org/linux-pm/CAJvTdK=Q1kwWA6Wxn8Zcf0OicDEk6cHYFAvQVizgA47mXu63+g@mail.gmail.com/ [2]
>>> Reported-by: Len Brown <lenb@...nel.org>
>>> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
>>
>> You probably should also pick up this tag from the earlier version.
>>
>> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=216263
>
> Good point.
>
>>> ---
>>>
>>> This is a follow-up to the discussion started by [1] above and since
>>> the beginning of it I have changed my mind a bit, as you can see.
>>>
>>> Given Arjan's feedback, I've concluded that using usleep_range() for
>>> all sleep values is the right choice and that some slack should be
>>> used there. I've taken 50 us as the minimum value of it because that's
>>> what is used in user space FWICT and I'm not convinced that shorter
>>> values would be suitable here.
>>>
>>> The other part, using 1% of the sleep time as the slack for longer
>>> sleeps, is likely more controversial. It is roughly based on the
>>> observation that if one timer interrupt is sufficient for something,
>>> then using two of them is wasteful, even if only somewhat.
>>>
>>> Anyway, please let me know what you think. I'd rather do whatever
>>> the majority of you are comfortable with.
>>
>> Generally I'm fine with this.
>>
>> I'm about to head out for the US holiday, but I will forward this to folks
>> who aren't and gather some testing input on it for when I'm back.
>
> Thanks!
Hi Rafael,
I loaded this onto my personal laptop before the holiday and also got
others at AMD to test it on a wider variety of client hardware.
No concerns were raised with this patch.
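For anyone following along, my reading of the slack computation described
in the changelog works out to roughly the sketch below.  This is based on
the description above rather than the exact hunk, and the 1%-path
arithmetic is my paraphrase:

/* sketch only; would need <linux/delay.h> and <linux/math64.h> */
void acpi_os_sleep(u64 ms)
{
	u64 us = ms * USEC_PER_MSEC;
	u64 slack_us = 50;	/* 50 us floor, matching the user space slack */

	/*
	 * For sleeps longer than 5 ms, allow 1% of the requested time as
	 * slack so the timer stays a likely coalescing candidate.
	 */
	if (ms > 5)
		slack_us = div64_u64(us, 100);

	usleep_range(us, us + slack_us);
}

The main point is just that the 50 us floor and the 1% fraction meet at
the 5 ms boundary, which is where the continuity argument in the changelog
comes from.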
Feel free to include:
Reviewed-by: Mario Limonciello <mario.limonciello@....com>
Tested-by: Mario Limonciello <mario.limonciello@....com>