Message-ID: <5c5753b4-8cb9-fa02-a0fa-d5ca22731cbb@redhat.com>
Date: Thu, 10 Sep 2020 12:29:24 +0200
From: David Hildenbrand <david@...hat.com>
To: Vlastimil Babka <vbabka@...e.cz>, Michal Hocko <mhocko@...e.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Pavel Tatashin <pasha.tatashin@...een.com>,
Oscar Salvador <osalvador@...e.de>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [RFC 5/5] mm, page_alloc: disable pcplists during page isolation
On 09.09.20 13:55, Vlastimil Babka wrote:
> On 9/9/20 1:36 PM, Michal Hocko wrote:
>> On Wed 09-09-20 12:48:54, Vlastimil Babka wrote:
>>> Here's a version that will apply on top of next-20200908. The first 4 patches need no change.
>>>
>>> ----8<----
>>> From 8febc17272b8e8b378e2e5ea5e76b2616f029c5b Mon Sep 17 00:00:00 2001
>>> From: Vlastimil Babka <vbabka@...e.cz>
>>> Date: Mon, 7 Sep 2020 17:20:39 +0200
>>> Subject: [PATCH] mm, page_alloc: disable pcplists during page isolation
>>>
>>> Page isolation can race with processes freeing pages to pcplists in such a way
>>> that a page from an isolated pageblock can end up on a pcplist. This can be fixed by
>>> repeated draining of pcplists, as done by patch "mm/memory_hotplug: drain
>>> per-cpu pages again during memory offline" in [1].
>>>
>>> David and Michal would prefer that this race was closed in a way that callers
>>> of page isolation don't need to care about drain. David suggested disabling
>>> pcplists usage completely during page isolation, instead of repeatedly draining
>>> them.
>>>
>>> To achieve this without adding special cases in alloc/free fastpath, we can use
>>> the same 'trick' as boot pagesets - when pcp->high is 0, any pcplist addition
>>> will be immediately flushed.
>>>
>>> The race can thus be closed by setting pcp->high to 0 and draining pcplists
>>> once in start_isolate_page_range(). The drain will serialize after processes
>>> that have already disabled interrupts and read the old value of pcp->high in
>>> free_unref_page_commit(), while processes that have not yet disabled interrupts
>>> will observe pcp->high == 0 when they are rescheduled and skip pcplists.
>>> This guarantees no stray pages on pcplists in zones where isolation happens.
>>>
>>> We can use the variable zone->nr_isolate_pageblock (protected by zone->lock)
>>> to detect transitions from 0 to 1 (to change pcp->high to 0 and issue drain)
>>> and from 1 to 0 (to restore original pcp->high and batch values cached in
>>> struct zone). We have to avoid external updates to high and batch by taking
>>> pcp_batch_high_lock. To allow multiple isolations in parallel, change this
>>> lock from mutex to rwsem.
>>>
>>> For callers that pair start_isolate_page_range() with
>>> undo_isolated_page_range() properly, this is transparent. Currently that's
>>> alloc_contig_range(). __offline_pages() doesn't call undo_isolated_page_range()
>>> in the success case, so it has to be careful to handle restoring pcp->high and
>>> batch and unlocking pcp_batch_high_lock.
>>
>> I was hoping that it would be possible to have this completely hidden
>> inside start_isolate_page_range code path.
>
> I hoped so too, but we can't know the moment when all processes that were in the
> critical part of freeing pages to pcplists have moved on (they might have been
> rescheduled).
> We could change free_unref_page() to disable IRQs sooner, before
> free_unref_page_prepare(), or at least the get_pfnblock_migratetype() part. Then
> after the single drain, we should be safe, AFAICS?
At least moving it before getting the migratetype should not be that severe?
> RT guys might not be happy, but it's much simpler than this patch. I
> still like some of the cleanups in patches 1-4 though tbh :)
It would certainly make this patch much simpler. Do you have a prototype
lying around?
--
Thanks,
David / dhildenb