Date:   Mon, 18 Dec 2017 12:27:11 -0500
From:   "J. Bruce Fields" <bfields@...ldses.org>
To:     Mike Galbraith <efault@....de>
Cc:     lkml <linux-kernel@...r.kernel.org>,
        Jeff Layton <jlayton@...nel.org>, linux-nfs@...r.kernel.org
Subject: Re: NFS: 82ms wakeup latency 4.14-rc4

On Mon, Dec 18, 2017 at 06:17:36PM +0100, Mike Galbraith wrote:
> On Mon, 2017-12-18 at 18:00 +0100, Mike Galbraith wrote:
> > On Mon, 2017-12-18 at 11:35 -0500, J. Bruce Fields wrote:
> > > 
> > > Like I say, I don't really understand the issues here, so it's more a
> > > question than an objection....  (I don't know any reason a
> > > cond_resched() would be bad there.)
> > 
> > Think of it this way: what all can be queued up behind that kworker
> > that is hogging CPU for huge swaths of time?  It's not only userspace
> > that suffers.
> 
> Bah, I'm gonna sound like a damn Baptist preacher, but I gotta say,
> latency matters just as much to an enterprise NOPREEMPT kernel and its
> users as it does to a desktop kernel and its users.  For max
> throughput, you don't want to do work in _tiny_ quantum, because you
> then lose throughput due to massive cache thrashing and scheduling
> overhead, but latency still does matter, and not just a little.

Right, what I don't understand is why kernels are still built without
preemption.  I'd naively assumed that was just a bandaid while we
weren't sure how much old kernel code might still depend on it for
correctness.  I'd forgotten about throughput/latency tradeoffs--but
couldn't those in theory be managed by runtime configuration of the
scheduler, or at least some smaller hammer than turning off preemption
entirely?

(But, again, just idle curiosity on my part, thanks for the answers.)

--b.
