Message-ID: <20191116063106.GA18194@ming.t460p>
Date: Sat, 16 Nov 2019 14:31:06 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Dave Chinner <david@...morbit.com>
Cc: linux-block@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-xfs@...r.kernel.org, linux-kernel@...r.kernel.org,
Jeff Moyer <jmoyer@...hat.com>,
Dave Chinner <dchinner@...hat.com>,
Eric Sandeen <sandeen@...hat.com>,
Christoph Hellwig <hch@....de>, Jens Axboe <axboe@...nel.dk>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Tejun Heo <tj@...nel.org>
Subject: Re: single aio thread is migrated crazily by scheduler
On Sat, Nov 16, 2019 at 10:40:05AM +1100, Dave Chinner wrote:
> On Fri, Nov 15, 2019 at 03:08:43PM +0800, Ming Lei wrote:
> > On Fri, Nov 15, 2019 at 03:56:34PM +1100, Dave Chinner wrote:
> > > On Fri, Nov 15, 2019 at 09:08:24AM +0800, Ming Lei wrote:
> > I can reproduce the issue with a 4k block size on another RH system,
> > and the login info of that system has been shared with you in the RH BZ.
> >
> > 1)
> > sysctl kernel.sched_min_granularity_ns=10000000
> > sysctl kernel.sched_wakeup_granularity_ns=15000000
>
> So, these settings definitely influence behaviour.
>
> If these are set to the kernel defaults (4ms and 3ms respectively):
>
> sysctl kernel.sched_min_granularity_ns=4000000
> sysctl kernel.sched_wakeup_granularity_ns=3000000
>
> The migration problem largely goes away - the fio task migration
> event count drops from ~2,000 per run to ~200 per run.
>
> That indicates that the migration trigger is likely load/timing
> based. The analysis below is based on the 10/15ms numbers above,
> because they make the problem much easier to reproduce.
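For anyone wanting to reproduce the numbers above: the migration event
count can be collected from the sched:sched_migrate_task tracepoint. A
minimal sketch (assuming fio is already running, perf is available, and
the 10s window is arbitrary):

	# count scheduler migrations of the fio process over a 10s window
	perf stat -e sched:sched_migrate_task \
		-p "$(pgrep -x fio | head -1)" -- sleep 10
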
On another machine, with './xfs_complete 512' the fio IO thread may be
migrated 11K~12K times per second without changing the above two kernel
sched defaults, even though that thread only takes ~40% CPU.

With './xfs_complete 4k' on the same machine, the fio IO thread's CPU
utilization is >= 98%.
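FWIW, the per-second migration rate and the IO thread's CPU utilization
can be watched side by side with something like the following sketch
(pidstat comes from the sysstat package; the fio process name is an
assumption here):

	# print migration counts for the fio process once per second
	perf stat -e sched:sched_migrate_task -I 1000 \
		-p "$(pgrep -x fio | head -1)" &

	# show per-thread CPU utilization of the same process every second
	pidstat -u -t -p "$(pgrep -x fio | head -1)" 1
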
Thanks,
Ming