Message-id: <164568131640.25116.884631856219777713@noble.neil.brown.name>
Date: Thu, 24 Feb 2022 16:41:56 +1100
From: "NeilBrown" <neilb@...e.de>
To: "Jeff Layton" <jlayton@...nel.org>
Cc: "Andrew Morton" <akpm@...ux-foundation.org>,
"Jan Kara" <jack@...e.cz>, "Wu Fengguang" <fengguang.wu@...el.com>,
"Jaegeuk Kim" <jaegeuk@...nel.org>, "Chao Yu" <chao@...nel.org>,
"Ilya Dryomov" <idryomov@...il.com>,
"Miklos Szeredi" <miklos@...redi.hu>,
"Trond Myklebust" <trond.myklebust@...merspace.com>,
"Anna Schumaker" <anna.schumaker@...app.com>,
"Ryusuke Konishi" <konishi.ryusuke@...il.com>,
"Darrick J. Wong" <djwong@...nel.org>,
"Philipp Reisner" <philipp.reisner@...bit.com>,
"Lars Ellenberg" <lars.ellenberg@...bit.com>,
"Paolo Valente" <paolo.valente@...aro.org>,
"Jens Axboe" <axboe@...nel.dk>, linux-doc@...r.kernel.org,
linux-mm@...ck.org, linux-nilfs@...r.kernel.org,
linux-nfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-f2fs-devel@...ts.sourceforge.net, linux-ext4@...r.kernel.org,
ceph-devel@...r.kernel.org, drbd-dev@...ts.linbit.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 06/11] ceph: remove reliance on bdi congestion

On Thu, 24 Feb 2022, Jeff Layton wrote:
> On Tue, 2022-02-22 at 14:17 +1100, NeilBrown wrote:
> > The bdi congestion tracking is not widely used and will be removed.
> >
> > CEPHfs is one of a small number of filesystems that uses it, setting
> > just the async (write) congestion flags at what it determines are
> > appropriate times.
> >
> > The only remaining effect of the async flag is to cause (some)
> > WB_SYNC_NONE writes to be skipped.
> >
> > So instead of setting the flag, set an internal flag and change:
> > - .writepages to do nothing if WB_SYNC_NONE and the flag is set
> > - .writepage to return AOP_WRITEPAGE_ACTIVATE if WB_SYNC_NONE
> > and the flag is set.
> >
> > The writepages change causes a behavioural change in that pageout() can
> > now return PAGE_ACTIVATE instead of PAGE_KEEP, so SetPageActive() will
> > be called on the page which (I think) will further delay the next attempt
> > at writeout. This might be a good thing.
> >
> > Signed-off-by: NeilBrown <neilb@...e.de>
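
Concretely, the change amounts to something like the sketch below. The
flag name ("write_congested") and the surrounding details are assumed
from the description above rather than copied from the patch:

	static int ceph_writepages_start(struct address_space *mapping,
					 struct writeback_control *wbc)
	{
		struct ceph_fs_client *fsc =
			ceph_inode_to_client(mapping->host);

		/* optional (WB_SYNC_NONE) writeback: do nothing while
		 * we consider ourselves congested */
		if (wbc->sync_mode == WB_SYNC_NONE && fsc->write_congested)
			return 0;
		/* ... normal writepages path ... */
	}

	static int ceph_writepage(struct page *page,
				  struct writeback_control *wbc)
	{
		struct inode *inode = page->mapping->host;

		/*
		 * pageout() maps AOP_WRITEPAGE_ACTIVATE to PAGE_ACTIVATE,
		 * and the reclaim code then calls SetPageActive() on the
		 * page - the extra delay discussed above.
		 */
		if (wbc->sync_mode == WB_SYNC_NONE &&
		    ceph_inode_to_client(inode)->write_congested)
			return AOP_WRITEPAGE_ACTIVATE;
		/* ... normal writepage path ... */
	}
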
>
> Maybe. I have to wonder whether all of this is really useful.
>
> When things are congested we'll avoid trying to issue new writeback
> requests. Note that we don't prevent new pages from being dirtied here --
> only their being written back.
>
> This also doesn't do anything in the DIO or sync_write cases, so if we
> lose caps or are doing DIO, we'll just keep churning out "unlimited"
> writes in those cases anyway.

I think the point of congestion tracking is to differentiate between
sync and async IO. Or maybe "required" and "optional".
Eventually the "optional" IO will become required, but if we can delay
it until a time when there is less "required" IO, then maybe we can
improve perceived latency.

"optional" IO here is write-back and read-ahead. If the load of
"required" IO is bursty, and if we can shuffle that optional stuff into
the quiet periods, we might win.
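
To put that in caricature (purely illustrative - none of these helpers
exist in any subsystem):

	struct request *pick_next_request(struct sched_queues *q)
	{
		/* "required" (sync) I/O always dispatches first */
		if (!list_empty(&q->required))
			return dequeue_first(&q->required);
		/* "optional" I/O (writeback, readahead) fills the
		 * quiet periods between bursts */
		if (device_is_quiet(q) && !list_empty(&q->optional))
			return dequeue_first(&q->optional);
		return NULL;	/* wait out the burst */
	}
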
Whether this is a real need is an important question that I don't have an
answer for. And whether it is better to leave delayed requests in the
page cache, or in the low-level queue with sync requests able to
overtake them - I don't know. If you have multiple low-level queues as
you say you can with ceph, then lower might be better.

The block layer has REQ_RAHEAD ... maybe those requests should get a
lower priority ... though I don't think they do.
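
The flag is visible on the bio, so in principle a driver or scheduler
could demote readahead, something like (hypothetical - I don't believe
anything in mainline does this):

	/* give readahead bios idle-class I/O priority */
	if (bio->bi_opf & REQ_RAHEAD)
		bio->bi_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0);
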
NFS has a 3-level priority queue, with write-back going at a lower
priority ... I think ... for NFSv3 at least.
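
From (hazy) memory, the mapping is roughly the below - treat the
details as approximate:

	/* how NFS picks an RPC queue priority for a flush */
	static inline int flush_task_priority(int how)
	{
		switch (how & (FLUSH_HIGHPRI | FLUSH_LOWPRI)) {
		case FLUSH_HIGHPRI:
			return RPC_PRIORITY_HIGH;
		case FLUSH_LOWPRI:
			return RPC_PRIORITY_LOW;
		}
		return RPC_PRIORITY_NORMAL;
	}
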
Sometimes I suspect that as all our transports have become faster, we
have been able to ignore the extra latency caused by poor scheduling of
optional requests. But at other times when my recently upgraded desktop
is struggling to view a web page while compiling a kernel ... I wonder
if maybe we don't have the balance right any more.

So maybe you are right - maybe we can rip all this stuff out.
Or maybe not.

Thanks,
NeilBrown