Message-ID: <20220902221340.GA379310@bhelgaas>
Date:   Fri, 2 Sep 2022 17:13:40 -0500
From:   Bjorn Helgaas <helgaas@...nel.org>
To:     Will McVicker <willmcvicker@...gle.com>
Cc:     Bjorn Helgaas <bhelgaas@...gle.com>, kernel-team@...roid.com,
        Sajid Dalvi <sdalvi@...gle.com>,
        Matthias Kaehlcke <mka@...omium.org>,
        linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] PCI/PM: Switch D3Hot delay to also use usleep_range

On Wed, Aug 17, 2022 at 11:08:21PM +0000, Will McVicker wrote:
> From: Sajid Dalvi <sdalvi@...gle.com>
> 
> Since the PCI spec requires a 10ms D3Hot delay (defined by
> PCI_PM_D3HOT_WAIT) and a few of the PCI quirks update the D3Hot delay up
> to 120ms, let's add support for both usleep_range and msleep based on
> the delay time to improve the delay accuracy.
> 
> This patch is based off of a commit from Sajid Dalvi <sdalvi@...gle.com>
> in the Pixel 6 kernel tree [1]. Testing on a Pixel 6 found that the
> 10ms delay for the Exynos PCIe device was on average delaying for 19ms
> when the spec requires 10ms. Switching from msleep to uslseep_range
> therefore decreases the resume time on a Pixel 6 on average by 9ms.

Add the "PCIe r6.0, sec 5.9" spec reference for the 10ms delay for
transitions to or from D3hot.

s/D3Hot/D3hot/ to match other usage (at least in Linux; the spec does
use "D3Hot")

s/uslseep_range/usleep_range/

Add "()" after function names.

In the subject, "Switch ... to *also* use usleep_range": what does the
"also" mean?  It sounds like it's referring to some other place where
we also use usleep_range()?

> [1] https://android.googlesource.com/kernel/gs/+/18a8cad68d8e6d50f339a716a18295e6d987cee3
> 
> Signed-off-by: Sajid Dalvi <sdalvi@...gle.com>
> Signed-off-by: Will McVicker <willmcvicker@...gle.com>
> ---
>  drivers/pci/pci.c | 16 +++++++++++-----
>  1 file changed, 11 insertions(+), 5 deletions(-)
> 
> v3:
>  * Use DIV_ROUND_CLOSEST instead of bit manipulation.
>  * Minor refactor to use max() where relevant.
> 
> v2:
>  * Update to use 20-25% upper bound
>  * Update to use usleep_range() for <=20ms, else use msleep()
> 
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index 95bc329e74c0..cfa8386314f2 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -66,13 +66,19 @@ struct pci_pme_device {
>  
>  static void pci_dev_d3_sleep(struct pci_dev *dev)
>  {
> -	unsigned int delay = dev->d3hot_delay;
> +	unsigned int delay_ms = max(dev->d3hot_delay, pci_pm_d3hot_delay);
>  
> -	if (delay < pci_pm_d3hot_delay)
> -		delay = pci_pm_d3hot_delay;
> +	if (delay_ms) {
> +		if (delay_ms <= 20) {
> +			/* Use a 20% upper bound with 1ms minimum */
> +			unsigned int upper = max(DIV_ROUND_CLOSEST(delay_ms, 5), 1U);
>  
> -	if (delay)
> -		msleep(delay);
> +			usleep_range(delay_ms * USEC_PER_MSEC,
> +				     (delay_ms + upper) * USEC_PER_MSEC);
> +		} else {
> +			msleep(delay_ms);

I hate the fact that we have to check for those ancient Intel chips at
all, but having to read through the usleep_range() vs msleep() thing
is just painful.  

fsleep() would be much simpler, but I appreciate that its 10-20ms
range probably wouldn't get the benefit you want.
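
For reference, fsleep() in include/linux/delay.h is (roughly, from
memory) just a dispatch on the delay length:

   static inline void fsleep(unsigned long usecs)
   {
        if (usecs <= 10)
                udelay(usecs);
        else if (usecs <= 20000)
                usleep_range(usecs, 2 * usecs);
        else
                msleep(DIV_ROUND_UP(usecs, 1000));
   }

so a 10ms delay becomes usleep_range(10000, 20000), i.e., the 10-20ms
range above: a 100% upper bound instead of the 20% this patch wants.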

I wish Documentation/timers/timers-howto.rst included a reason why we
should use msleep() instead of usleep_range() for longer sleeps.  Is
there a reason not to do this:

   static void pci_dev_d3_sleep(struct pci_dev *dev)
   {
        unsigned int delay_ms = max(dev->d3hot_delay, pci_pm_d3hot_delay);
        unsigned int upper;

        if (delay_ms) {
                /* 20% upper bound, 1ms minimum */
                upper = max(DIV_ROUND_CLOSEST(delay_ms, 5), 1U);
                usleep_range(delay_ms * USEC_PER_MSEC,
                             (delay_ms + upper) * USEC_PER_MSEC);
        }
   }
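
For the spec-mandated 10ms delay, this works out to:

        upper = max(DIV_ROUND_CLOSEST(10, 5), 1U) = 2
        usleep_range(10000, 12000)              /* 10-12ms */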

Since the Intel quirk is for 120ms, a 20% upper bound would make the
range 120-144ms.  Would that be a problem?  Those chips are ancient;
the list is untouched since it was added in 2006.  The point of
usleep_range() is to allow the scheduler to coalesce the wakeup with
other events, so it seems unlikely we'd ever wait the whole 144ms.  I
vote for optimizing the readability over sleep/resume time for
already-broken chips.

Bjorn
