Message-ID: <b17acf5b-5e8a-3edf-5a64-603bf6177312@suse.cz>
Date: Tue, 16 Jun 2020 09:45:38 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Hugh Dickins <hughd@...gle.com>
Cc: Mel Gorman <mgorman@...hsingularity.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Li Wang <liwang@...hat.com>,
Alex Shi <alex.shi@...ux.alibaba.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] mm, page_alloc: capture page in task context only
On 6/15/20 11:03 PM, Hugh Dickins wrote:
> On Fri, 12 Jun 2020, Vlastimil Babka wrote:
>> > This could presumably be fixed by a barrier() before setting
>> > current->capture_control in compact_zone_order(); but would also need
>> > more care on return from compact_zone(), in order not to risk leaking
>> > a page captured by interrupt just before capture_control is reset.
>>
>> I was hoping a WRITE_ONCE(current->capture_control) would be enough,
>> but apparently it's not (I tried).
>
> Right, I don't think volatiles themselves actually constitute barriers;
> but I'd better keep quiet, I notice the READ_ONCE/WRITE_ONCE/data_race
> industry has been busy recently, and I'm likely out-of-date and mistaken.
Same here, but from what I've read, volatile accesses are ordered against other
volatile accesses, not against non-volatile ones (which is what the struct
initialization is). So barrier() is indeed necessary, and WRITE_ONCE() is just
there to prevent (very hypothetical, hopefully) store tearing of the pointer.
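For reference, barrier() in the kernel is just a compiler memory barrier; its
long-standing gcc definition in the compiler headers is an empty asm with a
"memory" clobber, which is what keeps the plain initializing stores from being
moved past the volatile store that publishes the pointer:

  /* Compiler barrier, as defined for gcc in the kernel headers: */
  #define barrier() __asm__ __volatile__("" : : : "memory")

  /*
   * The "memory" clobber tells the compiler that the asm may read or
   * write any memory, so it must complete all pending plain stores
   * (such as the capture_control initializer) before the barrier and
   * cannot cache values across it.
   */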
>>
>> > Maybe that is the preferable fix, but I felt safer for task_capc() to
>> > exclude the rather surprising possibility of capture at interrupt time.
>>
>> > Fixes: 5e1f0f098b46 ("mm, compaction: capture a page under direct compaction")
>> > Cc: stable@...r.kernel.org # 5.1+
>> > Signed-off-by: Hugh Dickins <hughd@...gle.com>
>>
>> Acked-by: Vlastimil Babka <vbabka@...e.cz>
>
> Thanks, and to Mel for his.
>
>>
>> But perhaps I would also make sure that we don't expose the half-initialized
>> capture_control and run into this problem again later. It's not like this is
>> a fast path where barriers hurt. Something like this, then? (with added comments)
>
> Would it be very rude if I leave that to you and to Mel? to add, or
No problem.
> to replace mine if you wish - go ahead. I can easily see that more
> sophistication at the compact_zone_order() end may be preferable to
> another test and branch inside __free_one_page()
Right, I think so, and I will also generally sleep better if we don't put
pointers to uninitialized structures into current.
> (and would task_capc()
> be better with an "unlikely" in it?).
I'll try it and see if it generates better code. We should also be able to
remove the "capc->cc->direct_compaction" check, as the only place where we set
capc is compact_zone_order(), which sets direct_compaction to true unconditionally.
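For illustration, the helper in mm/page_alloc.c might then end up looking
something like this (a sketch of the two ideas above, not a tested patch):

  static inline struct capture_control *task_capc(struct zone *zone)
  {
  	struct capture_control *capc = current->capture_control;

  	/*
  	 * capc is non-NULL only while a task is inside
  	 * compact_zone_order(), so the free path overwhelmingly takes
  	 * the NULL branch.  The direct_compaction test is dropped: the
  	 * only setter, compact_zone_order(), always sets it to true.
  	 */
  	return unlikely(capc) &&
  		!(current->flags & PF_KTHREAD) &&
  		!capc->page &&
  		capc->cc->zone == zone ? capc : NULL;
  }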
> But it seems unnecessary to have a fix at both ends, and I'm rather too
> wound up in other things at the moment to want to read up on the current
> state of such barriers, and sign off on the Vlastipatch below myself (but
> I do notice that READ_ONCE seems to have more in it today than I remember,
> which probably accounts for why you did not put the barrier() I expected
> to see on the way out).
Right, at minimum it's a volatile cast (I've checked 5.1 too, for stable
backport reasons), which should be enough.
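Conceptually, stripped of the type-checking and (in newer trees) KCSAN
instrumentation layers that the real macros carry, the two boil down to plain
volatile accesses:

  /* Conceptual core only; the in-tree macros add type checks etc. */
  #define WRITE_ONCE(x, val)	(*(volatile typeof(x) *)&(x) = (val))
  #define READ_ONCE(x)		(*(const volatile typeof(x) *)&(x))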
So I'll send the proper patch.
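With the promised comments, the relevant part of compact_zone_order() could
look roughly like this (a sketch; the final wording may differ):

  	/*
  	 * Make sure the structs are really initialized before we expose
  	 * the capture control, in case we are interrupted and the
  	 * interrupt handler frees a page.
  	 */
  	barrier();
  	WRITE_ONCE(current->capture_control, &capc);

  	ret = compact_zone(&cc, &capc);

  	VM_BUG_ON(!list_empty(&cc.freepages));
  	VM_BUG_ON(!list_empty(&cc.migratepages));

  	/*
  	 * Make sure we hide capture control first before we read the
  	 * captured page pointer, otherwise an interrupt could free and
  	 * capture a page and we would leak it.
  	 */
  	WRITE_ONCE(current->capture_control, NULL);
  	*capture = READ_ONCE(capc.page);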
Thanks!
Vlastimil
> Hugh
>
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index fd988b7e5f2b..c89e26817278 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -2316,15 +2316,17 @@ static enum compact_result compact_zone_order(struct zone *zone, int order,
>> .page = NULL,
>> };
>>
>> - current->capture_control = &capc;
>> + barrier();
>> +
>> + WRITE_ONCE(current->capture_control, &capc);
>>
>> ret = compact_zone(&cc, &capc);
>>
>> VM_BUG_ON(!list_empty(&cc.freepages));
>> VM_BUG_ON(!list_empty(&cc.migratepages));
>>
>> - *capture = capc.page;
>> - current->capture_control = NULL;
>> + WRITE_ONCE(current->capture_control, NULL);
>> + *capture = READ_ONCE(capc.page);
>>
>> return ret;
>> }
>