Date: Sun, 5 Jun 2022 20:38:45 +0100
From: "Matthew Wilcox (Oracle)" <willy@...radead.org>
To: linux-fsdevel@...r.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@...radead.org>,
    linux-kernel@...r.kernel.org, linux-ext4@...r.kernel.org,
    linux-f2fs-devel@...ts.sourceforge.net, linux-mm@...ck.org,
    linux-nilfs@...r.kernel.org
Subject: [PATCH 01/10] filemap: Add filemap_get_folios()

This is the equivalent of find_get_pages() but fills a folio_batch
instead of an array of pages.

Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
---
 include/linux/pagemap.h |  2 ++
 mm/filemap.c            | 55 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 5555689ea809..50e57b2d845f 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -718,6 +718,8 @@ static inline struct page *find_subpage(struct page *head, pgoff_t index)
 	return head + (index & (thp_nr_pages(head) - 1));
 }
 
+unsigned filemap_get_folios(struct address_space *mapping, pgoff_t *start,
+		pgoff_t end, struct folio_batch *fbatch);
 unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
 			pgoff_t end, unsigned int nr_pages,
 			struct page **pages);
diff --git a/mm/filemap.c b/mm/filemap.c
index 1e66eea98a7e..ea4145b7a84c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2127,6 +2127,61 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
 	return folio_batch_count(fbatch);
 }
 
+/**
+ * filemap_get_folios - Get a batch of folios
+ * @mapping:	The address_space to search
+ * @start:	The starting page index
+ * @end:	The final page index (inclusive)
+ * @fbatch:	The batch to fill.
+ *
+ * Search for and return a batch of folios in the mapping starting at
+ * index @start and up to index @end (inclusive).  The folios are returned
+ * in @fbatch with an elevated reference count.
+ *
+ * The first folio may start before @start; if it does, it will contain
+ * @start.  The final folio may extend beyond @end; if it does, it will
+ * contain @end.  The folios have ascending indices.  There may be gaps
+ * between the folios if there are indices which have no folio in the
+ * page cache.  If folios are added to or removed from the page cache
+ * while this is running, they may or may not be found by this call.
+ *
+ * Return: The number of folios which were found.
+ * We also update @start to index the next folio for the traversal.
+ */
+unsigned filemap_get_folios(struct address_space *mapping, pgoff_t *start,
+		pgoff_t end, struct folio_batch *fbatch)
+{
+	XA_STATE(xas, &mapping->i_pages, *start);
+	struct folio *folio;
+
+	rcu_read_lock();
+	while ((folio = find_get_entry(&xas, end, XA_PRESENT)) != NULL) {
+		/* Skip over shadow, swap and DAX entries */
+		if (xa_is_value(folio))
+			continue;
+		if (!folio_batch_add(fbatch, folio)) {
+			*start = folio->index + folio_nr_pages(folio);
+			goto out;
+		}
+	}
+
+	/*
+	 * We come here when there is no page beyond @end. We take care to not
+	 * overflow the index @start as it confuses some of the callers. This
+	 * breaks the iteration when there is a page at index -1 but that is
+	 * already broken anyway.
+	 */
+	if (end == (pgoff_t)-1)
+		*start = (pgoff_t)-1;
+	else
+		*start = end + 1;
+out:
+	rcu_read_unlock();
+
+	return folio_batch_count(fbatch);
+}
+EXPORT_SYMBOL(filemap_get_folios);
+
 static inline bool folio_more_pages(struct folio *folio, pgoff_t index,
 		pgoff_t max)
 {
-- 
2.35.1
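
For readers coming from find_get_pages(): below is a minimal sketch of
how a caller might drive this API, following the usual batch-iteration
pattern in mm code. It is not part of the patch; example_walk() is a
hypothetical name, and the batch helpers (folio_batch_init(),
folio_batch_count(), folio_batch_release(), the folios[] array) come
from <linux/pagevec.h>.

#include <linux/pagemap.h>
#include <linux/pagevec.h>
#include <linux/sched.h>

/*
 * Hypothetical caller sketch (not part of this patch): visit every
 * folio cached in [start, end] and drop the references the batch holds.
 */
static void example_walk(struct address_space *mapping, pgoff_t start,
		pgoff_t end)
{
	struct folio_batch fbatch;
	unsigned int i;

	folio_batch_init(&fbatch);
	while (filemap_get_folios(mapping, &start, end, &fbatch)) {
		for (i = 0; i < folio_batch_count(&fbatch); i++) {
			struct folio *folio = fbatch.folios[i];

			/* ... operate on folio here ... */
		}
		/* Release the elevated refcounts filemap_get_folios() took */
		folio_batch_release(&fbatch);
		cond_resched();
	}
}

Because filemap_get_folios() advances *start past the last folio it
returned (or to end + 1 when the range is exhausted), the caller needs
no index bookkeeping of its own: a batch count of zero ends the walk.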