Message-Id: <20200501210518.DA161B7E@viggo.jf.intel.com>
Date: Fri, 01 May 2020 14:05:18 -0700
From: Dave Hansen <dave.hansen@...ux.intel.com>
To: linux-kernel@...r.kernel.org
Cc: Dave Hansen <dave.hansen@...ux.intel.com>, npiggin@...il.com,
akpm@...ux-foundation.org, willy@...radead.org,
yang.shi@...ux.alibaba.com, linux-mm@...ck.org
Subject: [RFC][PATCH 1/2] mm/migrate: remove extra page_count() check
From: Dave Hansen <dave.hansen@...ux.intel.com>
This is not a bug fix.  The redundant check was found by inspection,
and I believe it is confusing as it stands.
First, page_ref_freeze() is implemented internally with:
atomic_cmpxchg(&page->_refcount, expected, 0) == expected
The "cmp" part of cmpxchg is making sure that _refcount==expected
which means that there's an implicit check here, equivalent to:
page_count(page) == expected_count
This appears to have originated in commit e286781 ("mm: speculative
page references"), which is pretty ancient.  This check is also somewhat
dangerous to have here because it might lead someone to think that
page_ref_freeze() *doesn't* do its own page_count() checking.
Remove the unnecessary check.
Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Nicholas Piggin <npiggin@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Matthew Wilcox (Oracle) <willy@...radead.org>
Cc: Yang Shi <yang.shi@...ux.alibaba.com>
Cc: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org
---
b/mm/migrate.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff -puN mm/migrate.c~remove_extra_page_count_check mm/migrate.c
--- a/mm/migrate.c~remove_extra_page_count_check 2020-05-01 14:00:42.331525924 -0700
+++ b/mm/migrate.c 2020-05-01 14:00:42.336525924 -0700
@@ -425,11 +425,12 @@ int migrate_page_move_mapping(struct add
newzone = page_zone(newpage);
xas_lock_irq(&xas);
- if (page_count(page) != expected_count || xas_load(&xas) != page) {
+ if (xas_load(&xas) != page) {
xas_unlock_irq(&xas);
return -EAGAIN;
}
+	/* Freezing will fail if page_count() != expected_count */
if (!page_ref_freeze(page, expected_count)) {
xas_unlock_irq(&xas);
return -EAGAIN;
_