Date:   Wed, 12 May 2021 07:33:56 +0200
From:   Willy Tarreau <w@....eu>
To:     Mark Brown <broonie@...nel.org>
Cc:     linux-kernel@...r.kernel.org
Subject: Re: [PATCH] tools/nolibc: Implement msleep()

Hi Mark,

On Tue, May 11, 2021 at 12:01:59PM +0100, Mark Brown wrote:
> +static __attribute__((unused))
> +void msleep(unsigned int msecs)
> +{
> +	struct timeval my_timeval = { 0, msecs * 1000 };
> +
> +	sys_select(0, 0, 0, 0, &my_timeval);
> +}
> +

Just a quick question: is there any reason for not keeping most of the
precision, allowing applications to use values beyond 4294 seconds, like
this?

	struct timeval my_timeval = { msecs / 1000, (msecs % 1000) * 1000 };

Another thing that comes to my mind is that sleep() returns the remaining
number of seconds if the syscall was interrupted, and I think it could be
very useful in small test programs to do the same at the subsecond level,
in simple scheduling loops for example. Copying what we do in sleep(),
we could have this:

        if (sys_select(0, 0, 0, 0, &my_timeval) < 0)
                return my_timeval.tv_sec * 1000 + (my_timeval.tv_usec + 999) / 1000;
        else
                return 0;

And since that's an inline function, the computation will be optimized
away if the return value is not used, resulting in the same code as the
void version in that case.
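For illustration, here is a sketch of what the full function might look
like with both suggestions combined. This uses the libc select() so it
can be compiled standalone; the actual nolibc version would call
sys_select() instead, and the __attribute__((unused)) matches the style
of the patch being discussed:

```c
#include <sys/select.h>

/* Sketch only: split msecs into seconds and microseconds to keep
 * precision beyond 4294 seconds, and return the remaining number of
 * milliseconds if select() was interrupted by a signal (on Linux,
 * select() updates the timeval with the time not slept).
 */
static __attribute__((unused))
unsigned int msleep(unsigned int msecs)
{
	struct timeval my_timeval = { msecs / 1000, (msecs % 1000) * 1000 };

	if (select(0, 0, 0, 0, &my_timeval) < 0)
		return my_timeval.tv_sec * 1000 + (my_timeval.tv_usec + 999) / 1000;
	else
		return 0;
}
```

With no signal delivered during the sleep, the select() call completes
normally and the function returns 0.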

What do you think?

Thanks!
Willy
