Message-Id: <20221017161800.2003-1-vishal.moola@gmail.com>
Date: Mon, 17 Oct 2022 09:17:58 -0700
From: "Vishal Moola (Oracle)" <vishal.moola@...il.com>
To: akpm@...ux-foundation.org
Cc: willy@...radead.org, hughd@...gle.com,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
"Vishal Moola (Oracle)" <vishal.moola@...il.com>
Subject: [PATCH v3 0/2] Rework find_get_entries() and find_lock_entries()
Originally, the callers of find_get_entries() and find_lock_entries()
kept track of the start index themselves as they traversed the search
range.
This resulted in hacky code such as in shmem_undo_range():
index = folio->index + folio_nr_pages(folio) - 1;
where the - 1 exists only to land on the right spot after index is
incremented later in the loop. This calculation was also being done for
every folio, even though index is not used again within that function.
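
For context, the surrounding loop looked roughly like this (a
simplified sketch of the old shmem_undo_range() loop, not the exact
kernel code; the real loop also handles swap/value entries via the
indices array):

	struct folio_batch fbatch;
	pgoff_t indices[PAGEVEC_SIZE];
	pgoff_t index = start;
	int i;

	folio_batch_init(&fbatch);
	while (index < end && find_lock_entries(mapping, index, end - 1,
						&fbatch, indices)) {
		for (i = 0; i < folio_batch_count(&fbatch); i++) {
			struct folio *folio = fbatch.folios[i];

			/*
			 * Step back one page so that the index++ at the
			 * bottom of the loop lands just past this folio.
			 */
			index = folio->index + folio_nr_pages(folio) - 1;
			truncate_inode_folio(mapping, folio);
		}
		folio_batch_release(&fbatch);
		index++;
	}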
These patches change find_get_entries() and find_lock_entries() to
calculate the new index themselves instead of leaving it to the
callers, avoiding these complications.
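
With the new calling convention, the helpers take a pointer to the
start offset and advance it past the last entry they returned, so the
caller loop reduces to roughly the following (again a simplified
sketch, with the same declarations as above):

	folio_batch_init(&fbatch);
	while (index < end && find_lock_entries(mapping, &index, end - 1,
						&fbatch, indices)) {
		for (i = 0; i < folio_batch_count(&fbatch); i++)
			truncate_inode_folio(mapping, fbatch.folios[i]);
		folio_batch_release(&fbatch);
	}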
---
v3:
Fixed a typo in commit messages
Shifted calculations to after the rcu_read_unlock()
v2:
Fixed an issue when handling shadow entries
Dropped patches removing the indices array; it is required for value
entries
Vishal Moola (Oracle) (2):
filemap: find_lock_entries() now updates start offset
filemap: find_get_entries() now updates start offset
mm/filemap.c | 28 +++++++++++++++++++++++-----
mm/internal.h | 4 ++--
mm/shmem.c | 19 ++++++-------------
mm/truncate.c | 30 ++++++++++--------------------
4 files changed, 41 insertions(+), 40 deletions(-)
--
2.36.1