Message-ID: <838c6734-1e5d-6a26-8c88-90e89d407482@suse.cz>
Date:   Thu, 15 Apr 2021 14:25:36 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Mel Gorman <mgorman@...hsingularity.net>,
        Linux-MM <linux-mm@...ck.org>,
        Linux-RT-Users <linux-rt-users@...r.kernel.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Chuck Lever <chuck.lever@...cle.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        Michal Hocko <mhocko@...nel.org>
Subject: Re: [PATCH 09/11] mm/page_alloc: Avoid conflating IRQs disabled with
 zone->lock

On 4/14/21 3:39 PM, Mel Gorman wrote:
> Historically when freeing pages, free_one_page() assumed that callers
> had IRQs disabled and the zone->lock could be acquired with spin_lock().
> This confuses the scope of what local_lock_irq is protecting and what
> zone->lock is protecting in free_unref_page_list() in particular.
> 
> This patch uses spin_lock_irqsave() for the zone->lock in
> free_one_page() instead of relying on callers to have disabled
> IRQs. free_unref_page_commit() is changed to only deal with PCP pages
> protected by the local lock. free_unref_page_list() then first frees
> isolated pages to the buddy lists with free_one_page() and frees the rest
> of the pages to the PCP via free_unref_page_commit(). The end result
> is that free_one_page() no longer depends on side effects of
> local_lock for correctness.
> 
> Note that this may incur a performance penalty while memory hot-remove
> is running but that is not a common operation.
> 
> Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>

Acked-by: Vlastimil Babka <vbabka@...e.cz>
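
Just to confirm my reading: the core of the change is that
free_one_page() now does its own irqsave rather than assuming the
caller disabled IRQs, i.e. roughly (sketch from memory, not the exact
hunk; the isolated-pageblock recheck is omitted):

static void free_one_page(struct zone *zone, struct page *page,
			  unsigned long pfn, unsigned int order,
			  int migratetype, fpi_t fpi_flags)
{
	unsigned long flags;

	/* Take zone->lock with IRQs disabled here, not in the caller */
	spin_lock_irqsave(&zone->lock, flags);
	__free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
	spin_unlock_irqrestore(&zone->lock, flags);
}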

A nit below:

> @@ -3294,6 +3295,7 @@ void free_unref_page_list(struct list_head *list)
>  	struct page *page, *next;
>  	unsigned long flags, pfn;
>  	int batch_count = 0;
> +	int migratetype;
>  
>  	/* Prepare pages for freeing */
>  	list_for_each_entry_safe(page, next, list, lru) {
> @@ -3301,15 +3303,28 @@ void free_unref_page_list(struct list_head *list)
>  		if (!free_unref_page_prepare(page, pfn))
>  			list_del(&page->lru);
>  		set_page_private(page, pfn);

Should probably move this below so we don't set private for pages that then go
through free_one_page()? Doesn't seem to be a bug, just unnecessary work.
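
I.e. something like (untested):

	list_for_each_entry_safe(page, next, list, lru) {
		pfn = page_to_pfn(page);
		if (!free_unref_page_prepare(page, pfn))
			list_del(&page->lru);

		migratetype = get_pcppage_migratetype(page);
		if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
			if (unlikely(is_migrate_isolate(migratetype))) {
				free_one_page(page_zone(page), page, pfn, 0,
					      migratetype, FPI_NONE);
				list_del(&page->lru);
				/* skip set_page_private() on a freed page */
				continue;
			}
		}

		set_page_private(page, pfn);
	}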

> +
> +		/*
> +		 * Free isolated pages directly to the allocator, see
> +		 * comment in free_unref_page.
> +		 */
> +		migratetype = get_pcppage_migratetype(page);
> +		if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
> +			if (unlikely(is_migrate_isolate(migratetype))) {
> +				free_one_page(page_zone(page), page, pfn, 0,
> +							migratetype, FPI_NONE);
> +				list_del(&page->lru);
> +			}
> +		}
>  	}
>  
>  	local_lock_irqsave(&pagesets.lock, flags);
>  	list_for_each_entry_safe(page, next, list, lru) {
> -		unsigned long pfn = page_private(page);
> -
> +		pfn = page_private(page);
>  		set_page_private(page, 0);
> +		migratetype = get_pcppage_migratetype(page);
>  		trace_mm_page_free_batched(page);
> -		free_unref_page_commit(page, pfn);
> +		free_unref_page_commit(page, pfn, migratetype);
>  
>  		/*
>  		 * Guard against excessive IRQ disabled times when we get
> 
