Message-Id: <20221011215634.478330-1-vishal.moola@gmail.com>
Date: Tue, 11 Oct 2022 14:56:30 -0700
From: "Vishal Moola (Oracle)" <vishal.moola@...il.com>
To: akpm@...ux-foundation.org
Cc: willy@...radead.org, hughd@...gle.com,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
"Vishal Moola (Oracle)" <vishal.moola@...il.com>
Subject: [PATCH 0/4] Rework find_get_entries() and find_lock_entries()
Originally, the callers of find_get_entries() and find_lock_entries()
kept track of the start index themselves as they traversed the search
range.
This resulted in hacky code such as in shmem_undo_range():
index = folio->index + folio_nr_pages(folio) - 1;
where the - 1 exists only so that the loop's later index increment
lands in the right spot. This calculation was also being done on every
folio, even though index is never used again within that function.
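To illustrate the old caller-side pattern, here is a small stand-alone
sketch; the struct and helper below are simplified stand-ins for the
kernel's types, not its actual definitions:

```c
#include <assert.h>

/* Simplified stand-in for struct folio; the real kernel type differs. */
struct folio {
	unsigned long index;    /* first page offset of the folio */
	unsigned long nr_pages; /* folio size in pages */
};

/*
 * Model of shmem_undo_range()'s old pattern: set index to the folio's
 * last page so that the loop's subsequent index++ resumes the search
 * just past the folio.
 */
static unsigned long caller_advance(const struct folio *folio)
{
	unsigned long index = folio->index + folio->nr_pages - 1;

	return index + 1; /* the loop's own increment */
}
```

For a 4-page folio starting at page 8, the search resumes at page 12.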
The first two patches change find_get_entries() and find_lock_entries()
to calculate the new index instead of leaving it to the callers so we can
avoid all these complications.
Furthermore, the indices array is used almost exclusively for the
index calculations mentioned above. Now that those calculations no
longer occur, the indices array serves no purpose other than tracking
the xarray index of a folio, which is also no longer needed: each
folio already keeps track of its own index, accessible as
folio->index.
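A minimal sketch of the reworked contract, where the search helper
itself advances the caller's start offset; the names below are
illustrative, not the kernel's exact signatures:

```c
#include <assert.h>

/* Simplified stand-in for struct folio. */
struct folio {
	unsigned long index;    /* first page offset of the folio */
	unsigned long nr_pages; /* folio size in pages */
};

/*
 * Hypothetical model of the updated helpers: after filling its batch,
 * the helper moves *start past the last folio it returned, so callers
 * no longer recompute the index by hand on every folio.
 */
static void advance_start(unsigned long *start, const struct folio *last)
{
	*start = last->index + last->nr_pages;
}
```

With this contract, a caller simply loops, passing the same start
variable back in on each iteration.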
The last 2 patches remove the indices arrays from the calling functions:
truncate_inode_pages_range(), invalidate_inode_pages2_range(),
invalidate_mapping_pagevec(), and shmem_undo_range().
Vishal Moola (Oracle) (4):
filemap: find_lock_entries() now updates start offset
filemap: find_get_entries() now updates start offset
truncate: Remove indices argument from
truncate_folio_batch_exceptionals()
filemap: Remove indices argument from find_lock_entries() and
find_get_entries()
mm/filemap.c | 40 ++++++++++++++++++++++++++++-----------
mm/internal.h | 8 ++++----
mm/shmem.c | 23 +++++++----------------
mm/truncate.c | 52 +++++++++++++++++++--------------------------------
4 files changed, 59 insertions(+), 64 deletions(-)
--
2.36.1