Message-ID: <f75e4d22-925c-3548-e4af-a96994b7c10f@redhat.com>
Date:   Thu, 10 Sep 2020 14:42:01 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Vlastimil Babka <vbabka@...e.cz>, Michal Hocko <mhocko@...e.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Pavel Tatashin <pasha.tatashin@...een.com>,
        Oscar Salvador <osalvador@...e.de>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Mel Gorman <mgorman@...hsingularity.net>
Subject: Re: [RFC 5/5] mm, page_alloc: disable pcplists during page isolation

On 10.09.20 13:05, Vlastimil Babka wrote:
> On 9/10/20 12:29 PM, David Hildenbrand wrote:
>> On 09.09.20 13:55, Vlastimil Babka wrote:
>>> On 9/9/20 1:36 PM, Michal Hocko wrote:
>>>> On Wed 09-09-20 12:48:54, Vlastimil Babka wrote:
>>>>> Here's a version that will apply on top of next-20200908. The first 4 patches need no change.
>>>>> For callers that pair start_isolate_page_range() with
>>>>> undo_isolate_page_range() properly, this is transparent. Currently that's
>>>>> alloc_contig_range(). __offline_pages() doesn't call undo_isolate_page_range()
>>>>> in the success case, so it has to be careful to handle restoring pcp->high and batch
>>>>> and unlocking pcp_batch_high_lock.
>>>>
>>>> I was hoping that it would be possible to have this completely hidden
>>>> inside the start_isolate_page_range() code path.
>>>
>>> I hoped so too, but we can't know the moment when all processes that were in the
>>> critical part of freeing pages to pcplists have moved on (they might have been
>>> rescheduled).
>>> We could change free_unref_page() to disable IRQs sooner, before
>>> free_unref_page_prepare(), or at least the get_pfnblock_migratetype() part. Then
>>> after the single drain, we should be safe, AFAICS?
>>
>> At least moving the IRQ disabling to before getting the migratetype should
>> not be that severe?
>>
>>> RT guys might not be happy though, but it's much simpler than this patch. I
>>> still like some of the cleanups in 1-4 though tbh :)
>>
>> It would certainly make this patch much simpler. Do you have a prototype
>> lying around?
> 
> Below is the critical part; while writing it I realized that there's also
> free_unref_page_list(), where it's uglier.
> 
> So free_unref_page() simply moves the "determine migratetype" part under
> disabled interrupts. For free_unref_page_list() we optimistically determine the
> migratetypes without disabling interrupts, and with interrupts disabled we
> check whether the zone has isolated pageblocks and thus we should no longer
> trust those values. That's the same pattern as free_pcppages_bulk() uses.
> 
> But unfortunately it means an extra page_zone() plus a test for each page on the
> list :/

That function is mostly called from mm/vmscan.c AFAICS.

The first thing free_unref_page_commit() does is call page_zone().
That's easy to optimize if needed.
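
Something like this (untested sketch, just to illustrate; passing the zone
into free_unref_page_commit() is hypothetical, its body and the periodic
irq-enable batching of the list loop are elided for brevity):

/*
 * Sketch: have the caller pass the zone down, so the commit path and the
 * isolation check share a single page_zone() lookup per page.
 */
static void free_unref_page_commit(struct page *page, struct zone *zone,
				   unsigned long pfn);

void free_unref_page_list(struct list_head *list)
{
	struct page *page, *next;
	unsigned long flags;

	/* prepare loop from your patch unchanged, then: */
	local_irq_save(flags);
	list_for_each_entry_safe(page, next, list, lru) {
		struct zone *zone = page_zone(page);	/* single lookup */
		unsigned long pfn = page_private(page);

		set_page_private(page, 0);
		trace_mm_page_free_batched(page);
		/* re-read the migratetype only when it could have changed */
		if (unlikely(has_isolate_pageblock(zone)))
			free_unref_page_set_migratetype(page, pfn);
		free_unref_page_commit(page, zone, pfn);
	}
	local_irq_restore(flags);
}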

> 
> ----8<----
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index defefed79cfb..57e2a341c95c 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3103,18 +3103,21 @@ void mark_free_pages(struct zone *zone)
>  }
>  #endif /* CONFIG_PM */
>  
> -static bool free_unref_page_prepare(struct page *page, unsigned long pfn)
> +static bool free_unref_page_prepare(struct page *page)
>  {
> -	int migratetype;
> -
>  	if (!free_pcp_prepare(page))
>  		return false;
>  
> -	migratetype = get_pfnblock_migratetype(page, pfn);
> -	set_pcppage_migratetype(page, migratetype);
>  	return true;
>  }
>  
> +static inline void free_unref_page_set_migratetype(struct page *page, unsigned long pfn)
> +{
> +	int migratetype = get_pfnblock_migratetype(page, pfn);
> +
> +	set_pcppage_migratetype(page, migratetype);
> +}
> +
>  static void free_unref_page_commit(struct page *page, unsigned long pfn)
>  {
>  	struct zone *zone = page_zone(page);
> @@ -3156,10 +3159,17 @@ void free_unref_page(struct page *page)
>  	unsigned long flags;
>  	unsigned long pfn = page_to_pfn(page);
>  
> -	if (!free_unref_page_prepare(page, pfn))
> +	if (!free_unref_page_prepare(page))
>  		return;
>  
> +	/*
> +	 * By disabling interrupts before reading the pageblock's migratetype,
> +	 * we can guarantee that after changing a pageblock to MIGRATE_ISOLATE
> +	 * and after drain_all_pages(), there's nobody who would read the old
> +	 * migratetype and put a page from an isolated pageblock to the pcplists.
> +	 */
>  	local_irq_save(flags);
> +	free_unref_page_set_migratetype(page, pfn);
>  	free_unref_page_commit(page, pfn);
>  	local_irq_restore(flags);
>  }
> @@ -3176,9 +3186,10 @@ void free_unref_page_list(struct list_head *list)
>  	/* Prepare pages for freeing */
>  	list_for_each_entry_safe(page, next, list, lru) {
>  		pfn = page_to_pfn(page);
> -		if (!free_unref_page_prepare(page, pfn))
> +		if (!free_unref_page_prepare(page))
>  			list_del(&page->lru);
>  		set_page_private(page, pfn);
> +		free_unref_page_set_migratetype(page, pfn);
>  	}
>  
>  	local_irq_save(flags);
> @@ -3187,6 +3198,8 @@ void free_unref_page_list(struct list_head *list)
>  
>  		set_page_private(page, 0);
>  		trace_mm_page_free_batched(page);
> +		if (has_isolate_pageblock(page_zone(page)))
> +			free_unref_page_set_migratetype(page, pfn);
>  		free_unref_page_commit(page, pfn);
>  

FWIW: You can safely add unlikely() around the has_isolate_pageblock()
check, as most other users of that function that I was able to spot do.
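I.e., something like:

	if (unlikely(has_isolate_pageblock(page_zone(page))))
		free_unref_page_set_migratetype(page, pfn);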

-- 
Thanks,

David / dhildenb
