Message-Id: <20200422150256.23473-1-jack@suse.cz>
Date: Wed, 22 Apr 2020 17:02:33 +0200
From: Jan Kara <jack@...e.cz>
To: Matthew Wilcox <willy@...radead.org>
Cc: <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>, Jan Kara <jack@...e.cz>
Subject: [PATCH 0/23 v2] mm: Speedup page cache truncation
Hello,

this is the second version of my patches to avoid clearing marks from
xas_store() and thus fix a regression in page cache truncation.
Changes since v1:
- rebased on 5.7-rc2
- drop xas_for_each_marked() fix as it was already merged
- reworked the whole series based on Matthew's feedback - we now create a new
  function xas_store_noinit() and use it instead of changing xas_store()
  behavior. Note that I didn't rename xas_store_range() and __xa_cmpxchg()
  although they stop clearing marks as well: they have only very few callers,
  so it is easy to verify them, and the chances of a clash with another patch
  introducing new callers are small.
Original motivation:
Conversion of the page cache to XArray (commit 69b6c1319b6 "mm: Convert
truncate to XArray" in particular) has regressed the performance of page cache
truncation by about 10% (see my original report here [1]). This patch series
aims to win some of that regression back.
The first patch fixes a long-standing bug with xas_for_each_marked() that I
uncovered while debugging my patches. The remaining patches then work towards
the ability to stop clearing marks in xas_store(), which improves truncation
performance by about 6%.
The patches have passed radix_tree tests in tools/testing and also fstests runs
for ext4 & xfs.
Honza
[1] https://lore.kernel.org/linux-mm/20190226165628.GB24711@quack2.suse.cz