Message-ID: <966a4aff-f587-c4bb-1e10-2673734c2aa0@google.com>
Date: Thu, 24 Oct 2024 22:41:21 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Zi Yan <ziy@...dia.com>
cc: Hugh Dickins <hughd@...gle.com>, Andrew Morton <akpm@...ux-foundation.org>,
Usama Arif <usamaarif642@...il.com>, Yang Shi <shy828301@...il.com>,
Wei Yang <richard.weiyang@...il.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Matthew Wilcox <willy@...radead.org>, David Hildenbrand <david@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Barry Song <baohua@...nel.org>, Kefeng Wang <wangkefeng.wang@...wei.com>,
Ryan Roberts <ryan.roberts@....com>, Nhat Pham <nphamcs@...il.com>,
Chris Li <chrisl@...nel.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH hotfix 1/2] mm/thp: fix deferred split queue not
partially_mapped
On Thu, 24 Oct 2024, Zi Yan wrote:
> On 24 Oct 2024, at 0:10, Hugh Dickins wrote:
>
> > The new unlocked list_del_init() in deferred_split_scan() is buggy.
> > I gave bad advice: it looks plausible, since that's a local on-stack
> > list, but the fact is that it can race with a third party freeing or
> > migrating the preceding folio (properly unqueueing it with refcount 0
> > while holding split_queue_lock), thereby corrupting the list linkage.
> >
> > The obvious answer would be to take split_queue_lock there: but it has
> > a long history of contention, so I'm reluctant to add to that. Instead,
> > make sure that there is always one safe (raised refcount) folio before,
> > by delaying its folio_put(). (And of course I was wrong to suggest
> > updating split_queue_len without the lock: leave that until the splice.)
>
> I feel this is not the right approach, since it breaks the existing
> convention for changing folio->_deferred_list, namely taking
> ds_queue->split_queue_lock for serialization. The contention might not be
> as high as you think, since if a folio is split, the split_queue_lock
> needs to be taken during the split anyway. So the worst case is the same
> as if all folios were split. Do you see significant perf degradation due
> to taking the lock when doing list_del_init()?
>
> I am afraid that if we take this route, we might hit hard-to-debug bugs
> in the future when someone touches the code.

You have a good point: I am adding another element of trickiness
to that already-tricky local-but-not-quite-local list, which has tripped
us up a few times in the past.
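
For anyone following along without the source in front of them, the loop
in question looks roughly like this (a much-simplified sketch using names
from mm/huge_memory.c, not the actual code, which also makes the
underused-THP checks):

	/*
	 * Second pass of deferred_split_scan(), much simplified.  The
	 * first pass (not shown) held split_queue_lock while moving each
	 * folio it could folio_try_get() onto the local on-stack "list",
	 * so no folio still on "list" can itself be freed under us.
	 */
	list_for_each_entry_safe(folio, next, &list, _deferred_list) {
		bool did_split = false;

		if (folio_trylock(folio)) {
			if (!split_folio(folio))   /* unqueues on success */
				did_split = true;
			folio_unlock(folio);
		}
		if (!did_split && !folio_test_partially_mapped(folio)) {
			/*
			 * BUGGY: the folio_put() below has already dropped
			 * our reference on each preceding folio, so a third
			 * party may free or migrate one of those, unqueueing
			 * it (correctly, at refcount 0 and under
			 * split_queue_lock) while this unlocked del is
			 * rewriting the very same linkage.
			 */
			list_del_init(&folio->_deferred_list);
		}
		folio_put(folio);   /* drop the reference from pass one */
	}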
I do still feel that this solution is right in the spirit of that list;
but I've certainly not done any performance measurement to justify it,
nor would I ever trust my skill to do so. I just tried to solve the
corruptions in what I thought was the best way.

(To be honest, I found this solution to the corruptions first, and thought
the bug went back to the original implementation: that its put_page() at the
end of the loop was premature all along. It was only when writing the
commit message two days ago that I came to realize that even put_page()
or folio_put() would be safely using the lock to unqueue: that only this
new list_del_init() is the exception which introduces the bug.)
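
To make the delayed-put idea concrete, a minimal sketch, my simplification
rather than the posted patch (which carries more of the loop's state):

	struct folio *prev = NULL;

	list_for_each_entry_safe(folio, next, &list, _deferred_list) {
		/*
		 * ... folio_trylock(), split_folio(), and the unlocked
		 * list_del_init(), as in the sketch above ...
		 */

		/*
		 * Do not drop our reference on this folio yet: put the
		 * previous one instead.  While the preceding folio's
		 * refcount is still raised, no third party can free or
		 * migrate it and unqueue it at refcount 0, so there is
		 * always one safe folio before the linkage being rewritten.
		 */
		if (prev)
			folio_put(prev);
		prev = folio;
	}
	if (prev)
		folio_put(prev);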

Looking at vmstats, I'm coming to believe that the performance advantage
of this approach is likely to be in the noise: mTHPs alone, and the
!partially_mapped case on top, are greatly increasing the split_deferred
stats, and may give rise to renewed complaints of lock contention, with
or without this optimization.

While I still prefer to stick with what's posted and most tested, I am
giving the locked version a run overnight. Thanks a lot for the reviews
and acks, everyone: at present Zi Yan is in the minority preferring a
locked version, but please feel free to change your vote if you wish.
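
(For concreteness, the locked version being tested amounts to roughly this
at the del site, modulo details such as where split_queue_len is updated:

	if (!did_split && !folio_test_partially_mapped(folio)) {
		spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
		list_del_init(&folio->_deferred_list);
		ds_queue->split_queue_len--;  /* kept in sync under the lock */
		spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
	}
	folio_put(folio);
)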
Hugh