Date: Mon, 6 Jun 2022 21:40:35 +0100
From: "Matthew Wilcox (Oracle)" <willy@...radead.org>
To: linux-fsdevel@...r.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@...radead.org>, linux-kernel@...r.kernel.org,
	linux-block@...r.kernel.org, linux-aio@...ck.org, linux-btrfs@...r.kernel.org,
	linux-ext4@...r.kernel.org, linux-f2fs-devel@...ts.sourceforge.net,
	cluster-devel@...hat.com, linux-mm@...ck.org, linux-xfs@...r.kernel.org,
	linux-nfs@...r.kernel.org, linux-ntfs-dev@...ts.sourceforge.net,
	ocfs2-devel@....oracle.com, linux-mtd@...ts.infradead.org,
	virtualization@...ts.linux-foundation.org
Subject: [PATCH 05/20] mm/migrate: Convert expected_page_refs() to folio_expected_refs()

Now that both callers have a folio, convert this function to take a
folio and rename it.

Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
---
 mm/migrate.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 77b8c662c9ca..e0a593e5b5f9 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -337,13 +337,18 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
 }
 #endif
 
-static int expected_page_refs(struct address_space *mapping, struct page *page)
+static int folio_expected_refs(struct address_space *mapping,
+		struct folio *folio)
 {
-	int expected_count = 1;
+	int refs = 1;
+	if (!mapping)
+		return refs;
 
-	if (mapping)
-		expected_count += compound_nr(page) + page_has_private(page);
-	return expected_count;
+	refs += folio_nr_pages(folio);
+	if (folio_get_private(folio))
+		refs++;
+
+	return refs;
 }
 
 /*
@@ -360,7 +365,7 @@ int folio_migrate_mapping(struct address_space *mapping,
 	XA_STATE(xas, &mapping->i_pages, folio_index(folio));
 	struct zone *oldzone, *newzone;
 	int dirty;
-	int expected_count = expected_page_refs(mapping, &folio->page) + extra_count;
+	int expected_count = folio_expected_refs(mapping, folio) + extra_count;
 	long nr = folio_nr_pages(folio);
 
 	if (!mapping) {
@@ -670,7 +675,7 @@ static int __buffer_migrate_folio(struct address_space *mapping,
 		return migrate_page(mapping, &dst->page, &src->page, mode);
 
 	/* Check whether page does not have extra refs before we do more work */
-	expected_count = expected_page_refs(mapping, &src->page);
+	expected_count = folio_expected_refs(mapping, src);
 	if (folio_ref_count(src) != expected_count)
 		return -EAGAIN;
 
-- 
2.35.1
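
For reference, here is a sketch of how folio_expected_refs() reads once the
first hunk above is applied, assembled from the "+" lines of the diff; the
explanatory comments are added here for readability and are not part of the
posted patch:

static int folio_expected_refs(struct address_space *mapping,
		struct folio *folio)
{
	/* The migration path itself holds one reference to the folio. */
	int refs = 1;

	/* With no address_space there are no page-cache references to expect. */
	if (!mapping)
		return refs;

	/* The page cache holds one reference per page in the folio... */
	refs += folio_nr_pages(folio);
	/* ...plus one more if private data (e.g. buffer heads) is attached. */
	if (folio_get_private(folio))
		refs++;

	return refs;
}

As the last hunk shows, a caller such as __buffer_migrate_folio() compares
this value with folio_ref_count(src) and returns -EAGAIN when anyone else
still holds a reference to the folio.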