Message-ID: <2f3a4ab5-dea3-deaf-6f6d-01ac4a5716b2@arm.com>
Date: Thu, 3 Aug 2023 13:48:49 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>,
Yin Fengwei <fengwei.yin@...el.com>,
David Hildenbrand <david@...hat.com>,
Yu Zhao <yuzhao@...gle.com>, Yang Shi <shy828301@...il.com>,
"Huang, Ying" <ying.huang@...el.com>, Zi Yan <ziy@...dia.com>,
Nathan Chancellor <nathan@...nel.org>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v4 0/3] Optimize large folio interaction with deferred
split
On 03/08/2023 13:01, Kirill A. Shutemov wrote:
> On Wed, Aug 02, 2023 at 05:42:23PM +0100, Ryan Roberts wrote:
>> - avoid the split lock contention by using mmu gather (suggested by Kirill)
>
> [Offlist]
>
> So, my idea is to embed struct deferred_split into struct mmu_gather and
> make the zap path use it instead of the per-node/per-memcg deferred_split.
> This would avoid lock contention. If the list is not empty after zap, move
> the list to the per-node/per-memcg deferred_split.
>
> But it is only relevant if we see lock contention.
>
Thanks Kirill, I understand the proposal now. Having thought about this
overnight, I'm thinking I'll just implement the full batch approach that Yu
proposed. That way we get the benefits of batching rmap removal (for all folio
types) and, as a side benefit, the lock contention reduction (if there is any
lock contention) without needing the new per-mmu_gather struct deferred_split.
Shout if you have an issue with this.