[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-Id: <20220510154518.0410c1966c37cfa66cfeeab0@linux-foundation.org>
Date: Tue, 10 May 2022 15:45:18 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Matthew Wilcox <willy@...radead.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [GIT PULL] Two folio fixes for 5.18
On Tue, 10 May 2022 23:30:02 +0100 Matthew Wilcox <willy@...radead.org> wrote:
> On Tue, May 10, 2022 at 03:18:09PM -0700, Andrew Morton wrote:
> > On Fri, 6 May 2022 00:43:18 +0100 Matthew Wilcox <willy@...radead.org> wrote:
> >
> > > - Fix readahead creating single-page folios instead of the intended
> > > large folios when doing reads that are not a power of two in size.
> >
> > I worry about the idea of using hugepages in readahead. We're
> > increasing the load on the hugepage allocator, which is already
> > groaning under the load.
>
> Well, hang on. We're not using the hugepage allocator, we're using
> the page allocator. We're also using variable order pages, not
> necessarily PMD_ORDER.
Ah, OK, misapprehended. I guess there remains a fragmentation risk.
> I was under the impression that we were
> using GFP_TRANSHUGE_LIGHT, but I now don't see that. So that might
> be something that needs to be changed.
>
> > The obvious risk is that handing out hugepages to a low-value consumer
> > (copying around pagecache which is only ever accessed via the direct
> > map) will deny their availability to high-value consumers (that
> > compute-intensive task against a large dataset).
> >
> > Has testing and instrumentation been used to demonstrate that this is
> > not actually going to be a problem, or are we at risk of getting
> > unhappy reports?
>
> It's hard to demonstrate that it's definitely not going to cause a
> problem. But I actually believe it will help; by keeping page cache
> memory in larger chunks, we make it easier to defrag memory and create
> PMD-order pages when they're needed.
Obviously it'll be very workload-dependent.
Powered by blists - more mailing lists