Message-ID: <CAK1f24kda+T9JRef7fZz0BEQj8+cabJ-+rG7UOhQZJsj4yExHw@mail.gmail.com>
Date: Tue, 17 Sep 2024 11:35:48 +0800
From: Lance Yang <ioworker0@...il.com>
To: Matthew Wilcox <willy@...radead.org>, Barry Song <baohua@...nel.org>, dev.jain@....com
Cc: akpm@...ux-foundation.org, david@...hat.com, ryan.roberts@....com,
anshuman.khandual@....com, hughd@...gle.com, wangkefeng.wang@...wei.com,
baolin.wang@...ux.alibaba.com, gshan@...hat.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH] mm: Compute mTHP order efficiently
On Mon, Sep 16, 2024 at 9:25 PM Matthew Wilcox <willy@...radead.org> wrote:
>
> On Fri, Sep 13, 2024 at 02:49:02PM +0530, Dev Jain wrote:
> > We use pte_range_none() to determine whether contiguous PTEs are empty
> > for an mTHP allocation. Instead of rerunning the while loop from
> > scratch for every order, reuse information from the previous
> > iteration -- the first set PTE found -- to rule out some orders
> > without rescanning. The key to the correctness of the patch is that
> > the ranges we want to examine form a strictly decreasing sequence of
> > nested intervals.
>
> This is a lot more complicated. Do you have any numbers that indicate
> that it's faster? Yes, it's fewer memory references, but you've gone
> from a simple linear scan that's easy to prefetch to an exponential scan
> that might confuse the prefetchers.
+1
I'm not sure whether multiple mTHP sizes will be enabled in common cases ;)
If not, this just makes the code more complicated for no gain, IMO.
@Barry, could you share whether OPPO typically uses multiple mthp sizes
in their scenarios?
Thanks,
Lance