Message-ID: <20200622191857.GB21350@casper.infradead.org>
Date: Mon, 22 Jun 2020 20:18:57 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Dave Chinner <david@...morbit.com>
Cc: linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
agruenba@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [RFC] Bypass filesystems for reading cached pages
On Mon, Jun 22, 2020 at 10:32:15AM +1000, Dave Chinner wrote:
> On Fri, Jun 19, 2020 at 08:50:36AM -0700, Matthew Wilcox wrote:
> >
> > This patch lifts the IOCB_CACHED idea expressed by Andreas to the VFS.
> > The advantage of this patch is that we can avoid taking any filesystem
> > lock, as long as the pages being accessed are in the cache (and we don't
> > need to readahead any pages into the cache). We also avoid an indirect
> > function call in these cases.
>
> What does this micro-optimisation actually gain us except for more
> complexity in the IO path?
>
> i.e. if a filesystem lock has such massive overhead that it slows
> down the cached readahead path in production workloads, then that's
> something the filesystem needs to address, not unconditionally
> bypass the filesystem before the IO gets anywhere near it.
You've been talking about adding a range lock to XFS for a while now.
I remain quite sceptical that range locks are a good idea; they have not
worked out well as a replacement for the mmap_sem.  That said, the
mmap_sem workload is quite different, so they may yet show promise for
the XFS iolock.
There are production workloads that do not work well on top of a single
file on an XFS filesystem.  One example is using a file on XFS in the
host as the backing store for a guest block device.  People tend to work
around that kind of performance bug rather than report it.
Do you agree that the guarantees XFS currently provides for locked
operation would be maintained if the I/O is contained within a single
page and the mutex is not taken?  i.e., add this check to the original
patch:
	if (iocb->ki_pos / PAGE_SIZE !=
	    (iocb->ki_pos + iov_iter_count(iter) - 1) / PAGE_SIZE)
		goto uncached;
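(With 4kB pages, a 512-byte read at offset 4000 ends at offset 4511,
crossing from page 0 into page 1, so it goes uncached; the same read at
offset 4096 stays within a single page and can use the fast path.)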
I think that gets me almost everything I want. Small I/Os are going to
notice the pain of the mutex more than large I/Os.
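For reference, the whole fast path would look something like this
(untested sketch, not the exact code from the patch; the function name
is illustrative and error handling is elided):

	/*
	 * Untested sketch.  IOCB_CACHED is the flag from Andreas'
	 * proposal: return -EAGAIN instead of blocking or kicking
	 * off readahead.
	 */
	static ssize_t filemap_cached_read(struct kiocb *iocb,
					   struct iov_iter *iter)
	{
		struct file *file = iocb->ki_filp;
		ssize_t ret;

		/* Only bypass the filesystem for I/O within one page */
		if (iocb->ki_pos / PAGE_SIZE !=
		    (iocb->ki_pos + iov_iter_count(iter) - 1) / PAGE_SIZE)
			goto uncached;

		iocb->ki_flags |= IOCB_CACHED;
		ret = generic_file_buffered_read(iocb, iter, 0);
		iocb->ki_flags &= ~IOCB_CACHED;
		if (ret != -EAGAIN)
			return ret;
	uncached:
		return call_read_iter(file, iocb, iter);
	}

Clearing IOCB_CACHED before falling back means the filesystem's
->read_iter() never sees the flag and behaves exactly as it does today.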