Date:   Thu, 21 Nov 2019 12:12:18 +0800
From:   Ming Lei <ming.lei@...hat.com>
To:     Phil Auld <pauld@...hat.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Dave Chinner <david@...morbit.com>,
        linux-block@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-xfs@...r.kernel.org, linux-kernel@...r.kernel.org,
        Jeff Moyer <jmoyer@...hat.com>,
        Dave Chinner <dchinner@...hat.com>,
        Eric Sandeen <sandeen@...hat.com>,
        Christoph Hellwig <hch@....de>, Jens Axboe <axboe@...nel.dk>,
        Ingo Molnar <mingo@...hat.com>, Tejun Heo <tj@...nel.org>,
        Vincent Guittot <vincent.guittot@...aro.org>
Subject: Re: single aio thread is migrated crazily by scheduler

On Wed, Nov 20, 2019 at 05:03:13PM -0500, Phil Auld wrote:
> Hi Peter,
> 
> On Wed, Nov 20, 2019 at 08:16:36PM +0100 Peter Zijlstra wrote:
> > On Tue, Nov 19, 2019 at 07:40:54AM +1100, Dave Chinner wrote:
> > > On Mon, Nov 18, 2019 at 10:21:21AM +0100, Peter Zijlstra wrote:
> > 
> > > > We typically only fall back to the active balancer when there is
> > > > (persistent) imbalance and we fail to migrate anything else (of
> > > > substance).
> > > > 
> > > > The tuning mentioned has the effect of less frequent scheduling, IOW,
> > > > leaving (short) tasks on the runqueue longer. This obviously means the
> > > > load-balancer will have a bigger chance of seeing them.
> > > > 
> > > > Now; it's been a while since I looked at the workqueue code but one
> > > > possible explanation would be if the kworker that picks up the work item
> > > > is pinned. That would make it runnable but not migratable, the exact
> > > > situation in which we'll end up shooting the current task with active
> > > > balance.
> > > 
> > > Yes, that's precisely the problem - work is queued, by default, on a
> > > specific CPU and it will wait for a kworker that is pinned to that
> > 
> > I'm thinking the problem is that it doesn't wait. If it went and waited
> > for it, active balance wouldn't be needed; that only works on active
> > tasks.
> 
> Since this is AIO I wonder if it should queue_work on a nearby cpu by 
> default instead of unbound.  

Even when the current CPU isn't very busy, there is still a cost to
completing the request remotely.

Or could we change queue_work() in the following way?

 * We try to queue the work to the CPU on which it was submitted, but if the
 * CPU dies or is saturated enough, it can be processed by another CPU.
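
Something like the sketch below, maybe.  This is purely illustrative, not
a real patch against kernel/workqueue.c, and current_cpu_saturated() is a
hypothetical helper (one possible implementation is sketched after the
next question):

#include <linux/workqueue.h>
#include <linux/smp.h>

/*
 * Queue locally by default, but fall back to the unbound pool when the
 * submitting CPU already looks saturated.
 */
static bool queue_work_nearby(struct workqueue_struct *wq,
			      struct work_struct *work)
{
	if (current_cpu_saturated())
		/*
		 * On a WQ_UNBOUND workqueue, WORK_CPU_UNBOUND lets the
		 * scheduler place the worker; on a per-CPU workqueue it
		 * just means "the local CPU".
		 */
		return queue_work_on(WORK_CPU_UNBOUND, wq, work);

	/* Stay on the submitting CPU to keep its caches hot. */
	return queue_work_on(raw_smp_processor_id(), wq, work);
}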

Can we decide, in a simple and efficient way, whether the current CPU is
saturated enough?
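
One crude heuristic, just as a sketch, would be to build on the existing
single_task_running() helper and treat the CPU as saturated as soon as
anything other than the submitter is runnable on it.  Whether that test
is cheap and accurate enough is exactly the open question:

#include <linux/sched/stat.h>

/*
 * single_task_running() returns true iff the current CPU has exactly
 * one runnable task (the caller), so this flags the CPU as saturated
 * whenever any other task is queued on it.
 */
static bool current_cpu_saturated(void)
{
	return !single_task_running();
}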

Thanks,
Ming
