Date:   Tue, 9 Jul 2019 16:46:02 -0500
From:   Corey Minyard <minyard@....org>
To:     Tejun Heo <tj@...nel.org>
Cc:     openipmi-developer@...ts.sourceforge.net,
        linux-kernel@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH] ipmi_si_intf: use usleep_range() instead of busy looping

On Tue, Jul 09, 2019 at 02:06:43PM -0700, Tejun Heo wrote:
> ipmi_thread() uses back-to-back schedule() calls to poll for command
> completion which, on some machines, can push up CPU consumption and
> heavily tax the scheduler locks, leading to noticeable overall
> performance degradation.
> 
> This patch replaces schedule() with usleep_range(100, 200).  This
> allows the sensor readings to finish reasonably fast while keeping the
> kthread's CPU consumption under a few percent of a core.

The IPMI thread was not really designed for sensor reading; it was
designed so that firmware updates would happen in a reasonable time
on systems without an interrupt on the IPMI interface.  This change
will degrade performance for that use case.  IIRC, the people who
did the patch tried this and it slowed things down too much.
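
To put rough numbers on that concern (the image size and polls-per-byte
figures below are illustrative assumptions, not measurements from any
real BMC or from the driver), a quick back-of-the-envelope sketch:

/* Back-of-the-envelope only -- all figures here are assumptions.
 * A byte-at-a-time interface needs at least one state-machine poll per
 * byte, and if every "call with delay" poll now sleeps at least 100 us
 * instead of returning straight back from schedule() on an idle CPU,
 * the minimum added delay for a firmware transfer is roughly:
 */
#include <stdio.h>

int main(void)
{
	const long fw_bytes       = 2L * 1024 * 1024; /* hypothetical 2 MiB image */
	const long polls_per_byte = 3;                /* hypothetical             */
	const long min_sleep_us   = 100;              /* usleep_range(100, 200)   */

	long added_us = fw_bytes * polls_per_byte * min_sleep_us;

	printf("added delay >= %ld seconds\n", added_us / 1000000); /* ~629 s */
	return 0;
}

Under those assumptions that is on the order of ten extra minutes per
firmware image, which is the kind of slowdown being described above.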

I'm also a little confused because the CPU in question shouldn't
be doing anything else if the schedule() immediately returns here,
so it's not wasting CPU that could be used on another process.  Or
is it lock contention that is causing an issue on other CPUs?

IMHO, this whole thing is stupid; if you design hardware with
stupid interfaces (byte at a time, no interrupts) you should
expect to get bad performance.  But I can't control what the
hardware vendors do.  The current code is a carefully tuned
compromise.

So I can't really take this as-is.

-corey

> 
> Signed-off-by: Tejun Heo <tj@...nel.org>
> ---
>  drivers/char/ipmi/ipmi_si_intf.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
> index f124a2d2bb9f..2143e3c10623 100644
> --- a/drivers/char/ipmi/ipmi_si_intf.c
> +++ b/drivers/char/ipmi/ipmi_si_intf.c
> @@ -1010,7 +1010,7 @@ static int ipmi_thread(void *data)
>  		if (smi_result == SI_SM_CALL_WITHOUT_DELAY)
>  			; /* do nothing */
>  		else if (smi_result == SI_SM_CALL_WITH_DELAY && busy_wait)
> -			schedule();
> +			usleep_range(100, 200);
>  		else if (smi_result == SI_SM_IDLE) {
>  			if (atomic_read(&smi_info->need_watch)) {
>  				schedule_timeout_interruptible(100);

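For readers without the full source handy, here is a stripped-down
sketch of the polling loop this hunk sits in (simplified and
incomplete; poll_state_machine() and the RES_* constants are
placeholders, not the driver's real identifiers):

/* Simplified sketch only -- not the actual ipmi_thread() body. */
#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/sched.h>

static int poll_thread(void *data)
{
	while (!kthread_should_stop()) {
		int res = poll_state_machine(data); /* placeholder: one SI state-machine step */

		if (res == RES_CALL_WITHOUT_DELAY) {
			/* more work is immediately available: loop again */
			continue;
		} else if (res == RES_CALL_WITH_DELAY) {
			/* the line the patch changes:
			 *   old: schedule()             -- returns at once on an idle CPU
			 *   new: usleep_range(100, 200)  -- always naps for 100-200 us
			 */
			usleep_range(100, 200);
		} else { /* idle */
			schedule_timeout_interruptible(100);
		}
	}
	return 0;
}

The trade-off under discussion is entirely in the RES_CALL_WITH_DELAY
branch: schedule() keeps the byte-at-a-time transfer moving as fast as
the BMC will go when nothing else wants the CPU, while usleep_range()
caps the kthread's CPU usage at the cost of a forced delay per poll.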