Message-ID: <38e19e3a-e46b-4a50-8a34-dc04fc4a3c3c@lucifer.local>
Date: Tue, 1 Jul 2025 11:50:20 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: David Hildenbrand <david@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        linux-doc@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
        virtualization@...ts.linux.dev, linux-fsdevel@...r.kernel.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jonathan Corbet <corbet@....net>,
        Madhavan Srinivasan <maddy@...ux.ibm.com>,
        Michael Ellerman <mpe@...erman.id.au>,
        Nicholas Piggin <npiggin@...il.com>,
        Christophe Leroy <christophe.leroy@...roup.eu>,
        Jerrin Shaji George <jerrin.shaji-george@...adcom.com>,
        Arnd Bergmann <arnd@...db.de>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Jason Wang <jasowang@...hat.com>,
        Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
        Eugenio Pérez <eperezma@...hat.com>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        Christian Brauner <brauner@...nel.org>, Jan Kara <jack@...e.cz>,
        Zi Yan <ziy@...dia.com>, Matthew Brost <matthew.brost@...el.com>,
        Joshua Hahn <joshua.hahnjy@...il.com>, Rakie Kim <rakie.kim@...com>,
        Byungchul Park <byungchul@...com>, Gregory Price <gourry@...rry.net>,
        Ying Huang <ying.huang@...ux.alibaba.com>,
        Alistair Popple <apopple@...dia.com>,
        "Liam R. Howlett" <Liam.Howlett@...cle.com>,
        Vlastimil Babka <vbabka@...e.cz>, Mike Rapoport <rppt@...nel.org>,
        Suren Baghdasaryan <surenb@...gle.com>, Michal Hocko <mhocko@...e.com>,
        "Matthew Wilcox (Oracle)" <willy@...radead.org>,
        Minchan Kim <minchan@...nel.org>,
        Sergey Senozhatsky <senozhatsky@...omium.org>,
        Brendan Jackman <jackmanb@...gle.com>,
        Johannes Weiner <hannes@...xchg.org>, Jason Gunthorpe <jgg@...pe.ca>,
        John Hubbard <jhubbard@...dia.com>, Peter Xu <peterx@...hat.com>,
        Xu Xin <xu.xin16@....com.cn>,
        Chengming Zhou <chengming.zhou@...ux.dev>,
        Miaohe Lin <linmiaohe@...wei.com>,
        Naoya Horiguchi <nao.horiguchi@...il.com>,
        Oscar Salvador <osalvador@...e.de>, Rik van Riel <riel@...riel.com>,
        Harry Yoo <harry.yoo@...cle.com>,
        Qi Zheng <zhengqi.arch@...edance.com>,
        Shakeel Butt <shakeel.butt@...ux.dev>
Subject: Re: [PATCH v1 15/29] mm/migration: remove PageMovable()

On Mon, Jun 30, 2025 at 02:59:56PM +0200, David Hildenbrand wrote:
> As __ClearPageMovable() is gone that would have only made
> PageMovable()==false but still __PageMovable()==true, now
> PageMovable() == __PageMovable().

I think this could be rephrased to be clearer, something like:

	Previously, if __ClearPageMovable() were invoked on a page, this would
	cause PageMovable() to return false, but due to the continued
	existence of the movable tag, __PageMovable() would still have
	returned true.

	With __ClearPageMovable() gone, the two are exactly equivalent.

>
> So we can replace PageMovable() checks by __PageMovable(). In fact,
> __PageMovable() cannot change until a page is freed, so we can turn
> some PageMovable() into sanity checks for __PageMovable().

Deferring the clear does seem to simplify things!

>
> Reviewed-by: Zi Yan <ziy@...dia.com>
> Signed-off-by: David Hildenbrand <david@...hat.com>

LGTM, so:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>

> ---
>  include/linux/migrate.h |  2 --
>  mm/compaction.c         | 15 ---------------
>  mm/migrate.c            | 18 ++++++++++--------
>  3 files changed, 10 insertions(+), 25 deletions(-)
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 6eeda8eb1e0d8..25659a685e2aa 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -104,10 +104,8 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
>  #endif /* CONFIG_MIGRATION */
>
>  #ifdef CONFIG_COMPACTION
> -bool PageMovable(struct page *page);
>  void __SetPageMovable(struct page *page, const struct movable_operations *ops);
>  #else
> -static inline bool PageMovable(struct page *page) { return false; }
>  static inline void __SetPageMovable(struct page *page,
>  		const struct movable_operations *ops)
>  {
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 889ec696ba96a..5c37373017014 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -114,21 +114,6 @@ static unsigned long release_free_list(struct list_head *freepages)
>  }
>
>  #ifdef CONFIG_COMPACTION
> -bool PageMovable(struct page *page)
> -{
> -	const struct movable_operations *mops;
> -
> -	VM_BUG_ON_PAGE(!PageLocked(page), page);
> -	if (!__PageMovable(page))
> -		return false;
> -
> -	mops = page_movable_ops(page);
> -	if (mops)
> -		return true;
> -
> -	return false;
> -}
> -
>  void __SetPageMovable(struct page *page, const struct movable_operations *mops)
>  {
>  	VM_BUG_ON_PAGE(!PageLocked(page), page);
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 22c115710d0e2..040484230aebc 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -87,9 +87,12 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
>  		goto out;
>
>  	/*
> -	 * Check movable flag before taking the page lock because
> +	 * Check for movable_ops pages before taking the page lock because
>  	 * we use non-atomic bitops on newly allocated page flags so
>  	 * unconditionally grabbing the lock ruins page's owner side.
> +	 *
> +	 * Note that once a page has movable_ops, it will stay that way
> +	 * until the page was freed.
>  	 */
>  	if (unlikely(!__PageMovable(page)))
>  		goto out_putfolio;
> @@ -108,7 +111,8 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
>  	if (unlikely(!folio_trylock(folio)))
>  		goto out_putfolio;
>
> -	if (!PageMovable(page) || PageIsolated(page))
> +	VM_WARN_ON_ONCE_PAGE(!__PageMovable(page), page);
> +	if (PageIsolated(page))
>  		goto out_no_isolated;
>
>  	mops = page_movable_ops(page);
> @@ -149,11 +153,10 @@ static void putback_movable_ops_page(struct page *page)
>  	 */
>  	struct folio *folio = page_folio(page);
>
> +	VM_WARN_ON_ONCE_PAGE(!__PageMovable(page), page);
>  	VM_WARN_ON_ONCE_PAGE(!PageIsolated(page), page);
>  	folio_lock(folio);
> -	/* If the page was released by it's owner, there is nothing to do. */
> -	if (PageMovable(page))
> -		page_movable_ops(page)->putback_page(page);
> +	page_movable_ops(page)->putback_page(page);
>  	ClearPageIsolated(page);
>  	folio_unlock(folio);
>  	folio_put(folio);
> @@ -189,10 +192,9 @@ static int migrate_movable_ops_page(struct page *dst, struct page *src,
>  {
>  	int rc = MIGRATEPAGE_SUCCESS;
>
> +	VM_WARN_ON_ONCE_PAGE(!__PageMovable(src), src);
>  	VM_WARN_ON_ONCE_PAGE(!PageIsolated(src), src);
> -	/* If the page was released by it's owner, there is nothing to do. */
> -	if (PageMovable(src))
> -		rc = page_movable_ops(src)->migrate_page(dst, src, mode);
> +	rc = page_movable_ops(src)->migrate_page(dst, src, mode);
>  	if (rc == MIGRATEPAGE_SUCCESS)
>  		ClearPageIsolated(src);
>  	return rc;
> --
> 2.49.0
>
