Message-ID: <40036244-644a-42e0-a5e4-4838a98b1cbc@redhat.com>
Date: Tue, 11 Feb 2025 16:15:48 +0100
From: David Hildenbrand <david@...hat.com>
To: Luiz Capitulino <luizcap@...hat.com>, linux-kernel@...r.kernel.org,
 linux-mm@...ck.org, yuzhao@...gle.com
Cc: akpm@...ux-foundation.org, hannes@...xchg.org, muchun.song@...ux.dev,
 lcapitulino@...il.com
Subject: Re: [RFC 2/4] mm: page_owner: use new iteration API

On 24.01.25 22:37, Luiz Capitulino wrote:
> The page_ext_next() function assumes that page extension objects for a
> page order allocation always reside in the same memory section, which
> may not be true and could lead to crashes. Use the page_ext_iter API
> instead.
> 
> Fixes: e98337d11bbd ("mm/contig_alloc: support __GFP_COMP")
> Signed-off-by: Luiz Capitulino <luizcap@...hat.com>
> ---
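
Just to spell out the failure mode for anyone following along: page_ext_next()
is plain fixed-stride pointer arithmetic, roughly (from memory, the real helper
lives in include/linux/page_ext.h):

static inline struct page_ext *page_ext_next(struct page_ext *curr)
{
	void *next = curr;

	/* page_ext objects are laid out page_ext_size bytes apart. */
	next += page_ext_size;
	return next;
}

With SPARSEMEM the page_ext storage is allocated per memory section, so that
arithmetic only stays valid while we remain inside one section; a __GFP_COMP
contig allocation can cross a section boundary, hence the possible crash.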

[...]

>   void __folio_copy_owner(struct folio *newfolio, struct folio *old)
> @@ -364,24 +376,26 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
>   	int i;
>   	struct page_ext *old_ext;
>   	struct page_ext *new_ext;
> +	struct page_ext_iter old_iter;
> +	struct page_ext_iter new_iter;
>   	struct page_owner *old_page_owner;
>   	struct page_owner *new_page_owner;
>   	depot_stack_handle_t migrate_handle;
>   
> -	old_ext = page_ext_get(&old->page);
> +	old_ext = page_ext_iter_begin(&old_iter, &old->page);
>   	if (unlikely(!old_ext))
>   		return;
>   
> -	new_ext = page_ext_get(&newfolio->page);
> +	new_ext = page_ext_iter_begin(&new_iter, &newfolio->page);
>   	if (unlikely(!new_ext)) {
> -		page_ext_put(old_ext);
> +		page_ext_iter_end(&old_iter);
>   		return;
>   	}
>   
>   	old_page_owner = get_page_owner(old_ext);
>   	new_page_owner = get_page_owner(new_ext);
>   	migrate_handle = new_page_owner->handle;
> -	__update_page_owner_handle(new_ext, old_page_owner->handle,
> +	__update_page_owner_handle(&new_iter, old_page_owner->handle,
>   				   old_page_owner->order, old_page_owner->gfp_mask,
>   				   old_page_owner->last_migrate_reason,
>   				   old_page_owner->ts_nsec, old_page_owner->pid,
> @@ -390,8 +404,13 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
>   	 * Do not proactively clear PAGE_EXT_OWNER{_ALLOCATED} bits as the folio
>   	 * will be freed after migration. Keep them until then as they may be
>   	 * useful.
> +	 *
> +	 * Note that we need to re-grab the page_ext iterator since
> +	 * __update_page_owner_handle changed it.
>   	 */
> -	__update_page_owner_free_handle(new_ext, 0, old_page_owner->order,
> +	page_ext_iter_end(&new_iter);
> +	page_ext_iter_begin(&new_iter, &newfolio->page);

So a page_ext_iter_reset() that doesn't drop the RCU lock could be helpful 
here. With that, we could probably also drop the comment.
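
Rough idea of what I mean, untested; the field names are just my guess at what
the iterator in this series carries (the starting pfn, a running index and the
current page_ext pointer), and it would have to live in mm/page_ext.c next to
lookup_page_ext(), which IIRC is static there:

/* Rewind to the first page without leaving the RCU read-side section. */
static inline struct page_ext *page_ext_iter_reset(struct page_ext_iter *iter)
{
	iter->index = 0;
	/*
	 * Re-derive the pointer via lookup_page_ext() instead of a
	 * page_ext_put() + page_ext_get() pair, so the RCU read lock taken
	 * in page_ext_iter_begin() stays held across the reset.
	 */
	iter->page_ext = lookup_page_ext(pfn_to_page(iter->start_pfn));
	return iter->page_ext;
}

__folio_copy_owner() could then call page_ext_iter_reset(&new_iter) instead of
the end+begin dance above.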

> +	__update_page_owner_free_handle(&new_iter, 0, old_page_owner->order,
>   					old_page_owner->free_pid,
>   					old_page_owner->free_tgid,
>   					old_page_owner->free_ts_nsec);
> @@ -402,12 +421,12 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
>   	 */
>   	for (i = 0; i < (1 << new_page_owner->order); i++) {
>   		old_page_owner->handle = migrate_handle;
> -		old_ext = page_ext_next(old_ext);
> +		old_ext = page_ext_iter_next(&old_iter);
>   		old_page_owner = get_page_owner(old_ext);
>   	}
>   
> -	page_ext_put(new_ext);
> -	page_ext_put(old_ext);
> +	page_ext_iter_end(&new_iter);
> +	page_ext_iter_end(&old_iter);

In general, I think we should look into implementing the iterator so that it 
doesn't temporarily drop the RCU lock.
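
For the SPARSEMEM section-crossing case, the advance could look something like
this (again untested, field names are assumptions; without SPARSEMEM the
problem doesn't exist since page_ext is a single per-node array there):

struct page_ext *page_ext_iter_next(struct page_ext_iter *iter)
{
	unsigned long pfn;

	iter->index++;
	pfn = iter->start_pfn + iter->index;

	if (pfn % PAGES_PER_SECTION) {
		/* Still within the same section: stride arithmetic is fine. */
		iter->page_ext = page_ext_next(iter->page_ext);
	} else {
		/*
		 * Crossed a section boundary: look up the new section's
		 * page_ext directly rather than via page_ext_put() +
		 * page_ext_get(), so we never leave the RCU read-side
		 * critical section opened in page_ext_iter_begin().
		 */
		iter->page_ext = lookup_page_ext(pfn_to_page(pfn));
	}

	return iter->page_ext;
}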

Nothing jumped out at me from a quick glance, but yes, this usage is not 
that easy.

-- 
Cheers,

David / dhildenb

