Message-ID: <CAK7LNATjV39QfXJH=pfXnMF-vHmJHhSu+RyFC7BuBWqcpqZWDA@mail.gmail.com>
Date:   Thu, 23 Feb 2017 17:29:03 +0900
From:   Masahiro Yamada <yamada.masahiro@...ionext.com>
To:     Nicholas Mc Guire <der.herr@...r.at>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        linux-clk <linux-clk@...r.kernel.org>,
        linux-mtd@...ts.infradead.org,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH RFC] iopoll: allow for poll_timeout to back-off

Hi Nicholas,


2017-02-14 1:52 GMT+09:00 Nicholas Mc Guire <der.herr@...r.at>:

>
> One case in /drivers/mmc/host/sdhci-cadence.c has a timeout_us of 1 and
> a sleep_us of 0, which might actually be a bug: it means the poll loop
> would do a single read op in many cases (this is non-atomic context), so
> the caller would have to be robust against a single read and poll_timeout
> might not be suitable - not sure though - but that's a different problem.

Sorry, I could not understand why you think the sdhci-cadence.c code is a bug.

The SDHCI_CDNS_HRS06_TUNE_UP bit is auto-cleared by the hardware.
This normally happens very quickly (within 1 us),
but the bit may not be cleared if the hardware is in trouble.
That is the intention of this code:

    readl_poll_timeout(reg, tmp, !(tmp & SDHCI_CDNS_HRS06_TUNE_UP),
                       0, 1);
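
For reference, the poll loop in include/linux/iopoll.h looks roughly
like this (a simplified paraphrase from my side, so treat the details
as approximate): with sleep_us=0 it just keeps re-reading the register
until either the condition holds or timeout_us elapses, and it does one
final read after the timeout before giving up.

    #define readl_poll_timeout(addr, val, cond, sleep_us, timeout_us) \
    ({ \
            ktime_t timeout = ktime_add_us(ktime_get(), timeout_us); \
            might_sleep_if(sleep_us); \
            for (;;) { \
                    /* sample the register and test the condition */ \
                    (val) = readl(addr); \
                    if (cond) \
                            break; \
                    /* on timeout, do one last read and stop polling */ \
                    if (timeout_us && \
                        ktime_compare(ktime_get(), timeout) > 0) { \
                            (val) = readl(addr); \
                            break; \
                    } \
                    /* with sleep_us=0 this is a pure busy-wait */ \
                    if (sleep_us) \
                            usleep_range((sleep_us >> 2) + 1, sleep_us); \
            } \
            (cond) ? 0 : -ETIMEDOUT; \
    })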




>  which results in min,max values for the initial iterations of:
>       min  max
>       0    1
>       0    2
>       1    4
>       2    8
>       4    16
>       8    32
>       16   64
>       32   128
>       ...
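
Just to make sure we are talking about the same thing, here is my own
reconstruction of a back-off loop that would produce the (min, max)
sequence quoted above; it is only a sketch of what I understand the RFC
to propose, not the actual patch:

    /*
     * Hypothetical back-off variant: sleep_us starts at 1 and doubles
     * after every failed poll, and each sleep is
     * usleep_range(sleep_us >> 2, sleep_us), which gives the pairs
     * (0,1), (0,2), (1,4), (2,8), (4,16), ... shown above.
     */
    #define readl_poll_timeout_backoff(addr, val, cond, timeout_us) \
    ({ \
            ktime_t __timeout = ktime_add_us(ktime_get(), timeout_us); \
            unsigned long __sleep_us = 1; \
            for (;;) { \
                    (val) = readl(addr); \
                    if (cond) \
                            break; \
                    if (timeout_us && \
                        ktime_compare(ktime_get(), __timeout) > 0) { \
                            (val) = readl(addr); \
                            break; \
                    } \
                    usleep_range(__sleep_us >> 2, __sleep_us); \
                    __sleep_us <<= 1;   /* double the sleep: back-off */ \
            } \
            (cond) ? 0 : -ETIMEDOUT; \
    })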


I notice you sent this to me, but
please note that I am not responsible for this file
(include/linux/iopoll.h) in any way.

Please take my comments with a grain of salt:

With this patch, the sleep range is doubled in each iteration.
Let's say this routine is called with delay_us=1 and timeout_us=50,
and it has already slept 16 us and then 32 us.
If it sleeps 64 us in the next iteration,
it ends up sleeping 112 us (= 16 + 32 + 64) in total,
even though we know waiting 50 us is enough.
So the sleep granularity may become coarser than
the user intended.
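
A toy calculation along these lines (assuming the sleep simply doubles
every iteration, starting from delay_us, and ignoring the time spent on
the register reads themselves; the numbers are purely illustrative):

    /* Cumulative sleep under a doubling back-off vs. the requested
     * timeout. User-space toy, illustrative only. */
    #include <stdio.h>

    int main(void)
    {
            unsigned long sleep_us = 1;     /* delay_us */
            unsigned long timeout_us = 50;
            unsigned long total = 0;

            while (total < timeout_us) {    /* condition never comes true */
                    total += sleep_us;
                    printf("slept %3lu us, total %3lu us\n",
                           sleep_us, total);
                    sleep_us *= 2;          /* back-off */
            }
            /* the last doubled sleep overshoots the 50 us budget */
            return 0;
    }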

Waiting too long is probably not a problem in most cases.
If so, what is the meaning of the "sleep_us" argument after all?




-- 
Best Regards
Masahiro Yamada
