Message-ID: <E5A75697-55C7-4335-8D86-EE5CB6A99C4F@nvidia.com>
Date: Fri, 25 Oct 2024 11:32:30 -0400
From: Zi Yan <ziy@...dia.com>
To: Hugh Dickins <hughd@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Usama Arif <usamaarif642@...il.com>, Yang Shi <shy828301@...il.com>,
Wei Yang <richard.weiyang@...il.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Matthew Wilcox <willy@...radead.org>, David Hildenbrand <david@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>, Barry Song <baohua@...nel.org>,
Kefeng Wang <wangkefeng.wang@...wei.com>,
Ryan Roberts <ryan.roberts@....com>, Nhat Pham <nphamcs@...il.com>,
Chris Li <chrisl@...nel.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH hotfix 1/2] mm/thp: fix deferred split queue not
partially_mapped

On 25 Oct 2024, at 1:41, Hugh Dickins wrote:
> On Thu, 24 Oct 2024, Zi Yan wrote:
>> On 24 Oct 2024, at 0:10, Hugh Dickins wrote:
>>
>>> The new unlocked list_del_init() in deferred_split_scan() is buggy.
>>> I gave bad advice, it looks plausible since that's a local on-stack
>>> list, but the fact is that it can race with a third party freeing or
>>> migrating the preceding folio (properly unqueueing it with refcount 0
>>> while holding split_queue_lock), thereby corrupting the list linkage.
>>>
>>> The obvious answer would be to take split_queue_lock there: but it has
>>> a long history of contention, so I'm reluctant to add to that. Instead,
>>> make sure that there is always one safe (raised refcount) folio before,
>>> by delaying its folio_put(). (And of course I was wrong to suggest
>>> updating split_queue_len without the lock: leave that until the splice.)
>>
>> I feel like this is not the right approach, since it breaks the existing
>> rule for changing folio->_deferred_list, namely taking
>> ds_queue->split_queue_lock for serialization. The contention might not be
>> as high as you think, since if a folio were split, the split_queue_lock
>> would need to be taken during the split anyway. So the worst case is the
>> same as if all folios were split. Do you see significant perf degradation
>> due to taking the lock when doing list_del_init()?
>>
>> I am afraid if we take this route, we might hit hard-to-debug bugs
>> in the future when someone touches the code.
>
> You have a good point: I am adding another element of trickiness
> to that already-tricky local-but-not-quite list - which has tripped
> us up a few times in the past.
>
> I do still feel that this solution is right in the spirit of that list;
> but I've certainly not done any performance measurement to justify it,
> nor would I ever trust my skill to do so. I just tried to solve the
> corruptions in what I thought was the best way.
>
> (To be honest, I found this solution to the corruptions first, and thought
> the bug went back to the original implementation: that its put_page() at the
> end of the loop was premature all along. It was only when writing the
> commit message two days ago, that I came to realize that even put_page()
> or folio_put() would be safely using the lock to unqueue: that it is only
> this new list_del_init() which is the exception which introduces the bug.)
>
> Looking at vmstats, I'm coming to believe that the performance advantage
> of this way is likely to be in the noise: that mTHPs alone, and the
> !partially_mapped case on top, are greatly increasing the split_deferred
> stats: and may give rise to renewed complaints of lock contention, with
> or without this optimization.
>
> While I still prefer to stick with what's posted and most tested, I am
> giving the locked version a run overnight. Thanks a lot for the reviews
> and acks everyone: at present Zi Yan is in the minority preferring a
> locked version, but please feel free to change your vote if you wish.

Thanks a lot for taking the time to check the locked version. Looking
forward to the result. BTW, I am not going to block this patch, since it
fixes the bug.

The tricky part in deferred_split_scan() is always the use of
folio->_deferred_list without taking split_queue_lock. I am thinking about
using a folio_batch to store the folios taken off split_queue, so that
_deferred_list is not touched while we try to split those folios. Basically
(a rough sketch follows the steps below):
1. loop through split_queue and move folios to a folio_batch until the
folio_batch is full;
2. loop through the folio_batch to try to split each folio;
3. move the remaining folios back to split_queue.
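
For concreteness, something along these lines (hypothetical function name and
signature, untested, and it leaves out the partially_mapped/underused checks
and shrinker bookkeeping that the real deferred_split_scan() does):

/*
 * Hypothetical sketch only: names are made up, and the partially_mapped
 * handling of the real deferred_split_scan() is omitted.
 */
static unsigned long deferred_split_scan_batched(struct deferred_split *ds_queue,
						 unsigned long nr_to_scan)
{
	struct folio_batch fbatch;
	struct folio *folio, *next;
	unsigned long flags;
	unsigned long split = 0;
	unsigned int i;

	folio_batch_init(&fbatch);

	/* Step 1: take up to 31 folios off split_queue under the lock. */
	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
	list_for_each_entry_safe(folio, next, &ds_queue->split_queue,
				 _deferred_list) {
		if (!nr_to_scan || !folio_batch_space(&fbatch))
			break;
		nr_to_scan--;
		if (folio_try_get(folio)) {
			list_del_init(&folio->_deferred_list);
			ds_queue->split_queue_len--;
			folio_batch_add(&fbatch, folio);
		}
	}
	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);

	/* Step 2: try each split without touching _deferred_list. */
	for (i = 0; i < folio_batch_count(&fbatch); i++) {
		folio = fbatch.folios[i];
		if (!folio_trylock(folio))
			continue;
		if (!split_folio(folio))
			split++;
		folio_unlock(folio);
	}

	/* Step 3: requeue the survivors, one by one rather than a splice. */
	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
	for (i = 0; i < folio_batch_count(&fbatch); i++) {
		folio = fbatch.folios[i];
		if (folio_test_large(folio) &&
		    list_empty(&folio->_deferred_list)) {
			list_add_tail(&folio->_deferred_list,
				      &ds_queue->split_queue);
			ds_queue->split_queue_len++;
		}
	}
	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);

	/* Drop the references taken in step 1. */
	folio_batch_release(&fbatch);

	return split;
}
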
With this approach, split_queue_lock might be taken more often if there are
more than 31 (the folio_batch max size) folios on split_queue, and
split_queue_lock will be held longer in step 3, since the remaining folios
need to be added back to split_queue one by one instead of with a single
list splice.
Let me know your thoughts. I can look into this if this approach sounds
promising. Thanks.

Best Regards,
Yan, Zi