Message-ID: <20190710201132.GB3066@minyard.net>
Date:   Wed, 10 Jul 2019 15:11:32 -0500
From:   Corey Minyard <minyard@....org>
To:     Tejun Heo <tj@...nel.org>
Cc:     openipmi-developer@...ts.sourceforge.net,
        linux-kernel@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH] ipmi_si_intf: use usleep_range() instead of busy looping

On Wed, Jul 10, 2019 at 07:22:21AM -0700, Tejun Heo wrote:
> Hello,
> 
> > > We can go for shorter timeouts for sure but I don't think this sort of
> > > busy looping is acceptable.  Is your position that this must be a busy
> > > loop?
> > 
> > Well, no.  I want something that provides as high a throughput as
> > possible and doesn't cause scheduling issues.  But that may not be
> > possible.  Screwing up the scheduler is a lot worse than slow IPMI
> > firmware updates.
> > 
> > How short can the timeouts be and avoid issues?
> 
> We first tried msleep(1) and that was too slow even for sensor reading,
> making it take longer than 50s.  With the 100us-200us sleep, it got
> down to ~5s which was good enough for our use case and the cpu /
> scheduler impact was still mostly negligible.  I can't tell for sure
> without testing but going significantly below 100us is likely to
> become visible pretty quickly.

What was the time to read the sensors before you did the change?
It depends a lot on the system, so I can't really guess.
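
(For reference, the polling pattern Tejun describes above boils down
to something like the sketch below.  This is only an illustration;
smi_event_pending() and handle_smi_event() are placeholder names and
the timeout handling is simplified, not the actual ipmi_si_intf code.)

    /* Sketch: sleeping poll using usleep_range() instead of a busy
     * loop.  Needs <linux/delay.h> and <linux/jiffies.h>. */
    static int smi_wait_for_response(struct smi_info *smi_info,
                                     unsigned long timeout)
    {
            while (!smi_event_pending(smi_info)) {
                    if (time_after(jiffies, timeout))
                            return -ETIMEDOUT;
                    /* 100-200us keeps latency low without pinning a
                     * CPU; msleep(1) was too coarse for sensor reads. */
                    usleep_range(100, 200);
            }
            return handle_smi_event(smi_info);
    }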

> 
> We can also take a hybrid approach where we busy poll w/ 1us udelay
> up to, say, fifty times and then switch to sleeping poll.

I'm pretty sure we didn't try that in the original work, and I'm
not sure it would help much here.  Most of the initial spinning
would be pointless.
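
(That hybrid scheme would look roughly like the sketch below; again
just an illustration with the same made-up helper names as above, not
a proposed patch:)

    /* Sketch: spin briefly with udelay(), then fall back to a
     * sleeping poll. */
    static int smi_wait_hybrid(struct smi_info *smi_info,
                               unsigned long timeout)
    {
            int i;

            /* Up to ~50us of busy polling to catch fast responses. */
            for (i = 0; i < 50; i++) {
                    if (smi_event_pending(smi_info))
                            return handle_smi_event(smi_info);
                    udelay(1);
            }

            /* Slow path: sleeping poll as before. */
            while (!smi_event_pending(smi_info)) {
                    if (time_after(jiffies, timeout))
                            return -ETIMEDOUT;
                    usleep_range(100, 200);
            }
            return handle_smi_event(smi_info);
    }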

I would guess that as you decrease the delay, performance improves
roughly linearly until you hit a certain point, after which
shortening the delay doesn't make much difference.  That's the
point you want to use, I think.

What might actually be best would be for the driver to measure the
time it takes the BMC to respond and try to set the timeout based
on that information.  BMCs vary a lot; a constant probably won't
work.
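
(Something along these lines, purely as a sketch; the avg_rsp_ns and
poll_us fields and the helper are invented for illustration and are
not part of struct smi_info today:)

    /* Keep an exponential moving average of how long the BMC takes to
     * respond and derive the poll interval from it. */
    static void smi_update_poll_interval(struct smi_info *smi_info,
                                         u64 rsp_ns)
    {
            smi_info->avg_rsp_ns =
                    (smi_info->avg_rsp_ns * 7 + rsp_ns) / 8;

            /* Sleep for a fraction of the typical response time,
             * clamped so we neither busy-loop nor oversleep badly. */
            smi_info->poll_us =
                    clamp_t(u64,
                            smi_info->avg_rsp_ns / (8 * NSEC_PER_USEC),
                            50, 1000);
    }

The poll loop would then use something like
usleep_range(smi_info->poll_us, smi_info->poll_us * 2) instead of a
fixed range.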

And I was just saying that I wasn't expecting any big changes in
the IPMI driver any more...

> 
> Are there some tests which can be used to verify the cases which may
> get impacted by these changes?

Unfortunately not.  The people at Dell who did the original work
don't work there any more, I don't think.

I mostly use qemu now for testing, but this is not something you can
really simulate on qemu very well.  Can you do an IPMI firmware
update on your system?  That would be the easiest way to measure.

Thanks,

-corey

> 
> Thanks.
> 
> -- 
> tejun
