Date: Mon, 17 Jun 2024 10:33:36 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Suren Baghdasaryan <surenb@...gle.com>, akpm@...ux-foundation.org
Cc: kent.overstreet@...ux.dev, pasha.tatashin@...een.com,
 souravpanda@...gle.com, keescook@...omium.org, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/1] mm: handle profiling for fake memory allocations
 during compaction

On 6/15/24 1:05 AM, Suren Baghdasaryan wrote:
> During compaction, isolated free pages are marked allocated so that they
> can be split and/or freed. For that, post_alloc_hook() is used inside
> split_map_pages() and release_free_list(). split_map_pages() marks free
> pages allocated, splits the pages and then lets alloc_contig_range_noprof()
> free those pages. release_free_list() marks free pages and immediately

Well, in the case of split_map_pages() only some of the pages end up freed;
most should be used as migration targets. But we move the tags from the
source page during migration and unaccount the ones from the target (i.e.
from the instrumented post_alloc_hook() after this patch), right? So it
should be OK; it's just that the description here is incomplete.

> frees them. This usage of post_alloc_hook() affects memory allocation
> profiling because these functions might not be called from an instrumented
> allocator; therefore current->alloc_tag is NULL, and when debugging is
> enabled (CONFIG_MEM_ALLOC_PROFILING_DEBUG=y) that causes warnings.
> To avoid that, wrap such post_alloc_hook() calls into an instrumented
> function which acts as an allocator which will be charged for these
> fake allocations. Note that these allocations are very short lived until
> they are freed, therefore the associated counters should usually read 0.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>

Acked-by: Vlastimil Babka <vbabka@...e.cz>

> ---
>  mm/compaction.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index e731d45befc7..739b1bf3d637 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -79,6 +79,13 @@ static inline bool is_via_compact_memory(int order) { return false; }
>  #define COMPACTION_HPAGE_ORDER	(PMD_SHIFT - PAGE_SHIFT)
>  #endif
>  
> +static struct page *mark_allocated_noprof(struct page *page, unsigned int order, gfp_t gfp_flags)
> +{
> +	post_alloc_hook(page, order, __GFP_MOVABLE);
> +	return page;
> +}
> +#define mark_allocated(...)	alloc_hooks(mark_allocated_noprof(__VA_ARGS__))
> +
>  static void split_map_pages(struct list_head *freepages)
>  {
>  	unsigned int i, order;
> @@ -93,7 +100,7 @@ static void split_map_pages(struct list_head *freepages)
>  
>  			nr_pages = 1 << order;
>  
> -			post_alloc_hook(page, order, __GFP_MOVABLE);
> +			mark_allocated(page, order, __GFP_MOVABLE);
>  			if (order)
>  				split_page(page, order);
>  
> @@ -122,7 +129,7 @@ static unsigned long release_free_list(struct list_head *freepages)
>  			 * Convert free pages into post allocation pages, so
>  			 * that we can free them via __free_page.
>  			 */
> -			post_alloc_hook(page, order, __GFP_MOVABLE);
> +			mark_allocated(page, order, __GFP_MOVABLE);
>  			__free_pages(page, order);
>  			if (pfn > high_pfn)
>  				high_pfn = pfn;
> 
> base-commit: c286c21ff94252f778515b21b6bebe749454a852

