Message-ID: <20191115095300.GB9043@quack2.suse.cz>
Date: Fri, 15 Nov 2019 10:53:00 +0100
From: Jan Kara <jack@...e.cz>
To: Hillf Danton <hdanton@...a.com>
Cc: Jan Kara <jack@...e.cz>, linux-mm <linux-mm@...ck.org>,
fsdev <linux-fsdevel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Fengguang Wu <fengguang.wu@...el.com>,
Tejun Heo <tj@...nel.org>, Jan Kara <jack@...e.com>,
Johannes Weiner <hannes@...xchg.org>,
Shakeel Butt <shakeelb@...gle.com>,
Minchan Kim <minchan@...nel.org>, Mel Gorman <mgorman@...e.de>
Subject: Re: [RFC v2] writeback: add elastic bdi in cgwb bdp
On Fri 15-11-19 11:32:40, Hillf Danton wrote:
>
> On Thu, 14 Nov 2019 13:17:46 +0100 Jan Kara wrote:
> >
> > On Sat 26-10-19 18:46:56, Hillf Danton wrote:
> > >
> > > The elastic bdi (ebdi) is the mirror bdi of spinning disks, SSDs,
> > > USB sticks and other storage devices on the market. The performance
> > > of an ebdi goes up and down as the pattern of dispatched IO changes,
> > > roughly estimated as below.
> > >
> > > P = j(..., IO pattern);
> > >
> > > In ebdi's view, the bandwidth currently measured in balancing dirty
> > > pages is closely related to its performance, because the former is a
> > > component of the latter.
> > >
> > > B = y(P);
> > >
> > > The functions above suggest there may be a layering violation, since
> > > the bandwidth could perhaps be better measured somewhere below the
> > > fs layer.
> > >
> > > The bandwidth is, however, measured well enough to satisfy every
> > > judge, and it plays a role in dispatching IO while the IO pattern,
> > > volatile in nature, is entirely ignored.
> > >
> > > It also helps to throttle the dirtying speed, while ignoring the
> > > fact that DRAM is in general ~10x faster than an ebdi. If B is half
> > > of P, for instance, then B is 1/20 of the DRAM speed, i.e. near 5%
> > > of the dirtying speed, just 2 points above the 3% figure in the
> > > snippet below.
> > >
> > > /*
> > > * If ratelimit_pages is too high then we can get into dirty-data overload
> > > * if a large number of processes all perform writes at the same time.
> > > * If it is too low then SMP machines will call the (expensive)
> > > * get_writeback_state too often.
> > > *
> > > * Here we set ratelimit_pages to a level which ensures that when all CPUs are
> > > * dirtying in parallel, we cannot go more than 3% (1/32) over the dirty memory
> > > * thresholds.
> > > */
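> > >
> > > For reference, the 1/32 ratio materializes in the function this
> > > comment precedes, writeback_set_ratelimit() in mm/page-writeback.c
> > > (sketched here as of kernels of this era; details may vary by
> > > version):
> > >
> > > void writeback_set_ratelimit(void)
> > > {
> > > 	struct wb_domain *dom = &global_wb_domain;
> > > 	unsigned long background_thresh;
> > > 	unsigned long dirty_thresh;
> > >
> > > 	global_dirty_limits(&background_thresh, &dirty_thresh);
> > > 	dom->dirty_limit = dirty_thresh;
> > > 	/*
> > > 	 * Each CPU may dirty up to 1/32 of its share of the dirty
> > > 	 * threshold, so all CPUs dirtying in parallel overshoot the
> > > 	 * thresholds by at most ~3%.
> > > 	 */
> > > 	ratelimit_pages = dirty_thresh / (num_online_cpus() * 32);
> > > 	if (ratelimit_pages < 16)
> > > 		ratelimit_pages = 16;
> > > }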
> > >
> > > To prevent the dirtying speed from running away from the laundering
> > > speed, ebdi suggests the walk-the-dog method: putting a leash on
> > > balance_dirty_pages(), which seems to churn the IO pattern less.
> > >
> > > V2 is based on next-20191025.
> >
> > Honestly, the changelog is still pretty incomprehensible, as Andrew
> > already mentioned. Also, I completely miss what the benefits of this
> > work are compared to what we currently have.
> >
> Hey Jan,
>
> Within the headroom, which has been somewhere between 3% and 5% for
> bdp since 143dfe8611a6 ("writeback: IO-less balance_dirty_pages()"),
> a bdp is proposed with the target of surviving tests like LTP without
> introducing regressions. So overall, the benefit in question is that
> bdp becomes more diverse, if diversity under linux/fs is good for
> the 99%.
What do you mean by "balance_dirty_pages() is becoming more diverse"?
Honza
--
Jan Kara <jack@...e.com>
SUSE Labs, CR