Message-ID: <jpuz2xprvhklazsziqofy6y66pjxy5eypj3pcypmkp6c2xkmpt@bblq4q5w7l7h>
Date: Fri, 13 Jun 2025 10:58:15 +0200
From: "Pankaj Raghav (Samsung)" <kernel@...kajraghav.com>
To: Dave Hansen <dave.hansen@...el.com>
Cc: Pankaj Raghav <p.raghav@...sung.com>,
Suren Baghdasaryan <surenb@...gle.com>, Ryan Roberts <ryan.roberts@....com>,
Mike Rapoport <rppt@...nel.org>, Michal Hocko <mhocko@...e.com>,
Thomas Gleixner <tglx@...utronix.de>, Nico Pache <npache@...hat.com>, Dev Jain <dev.jain@....com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>, Borislav Petkov <bp@...en8.de>, Ingo Molnar <mingo@...hat.com>,
"H . Peter Anvin" <hpa@...or.com>, Vlastimil Babka <vbabka@...e.cz>, Zi Yan <ziy@...dia.com>,
Dave Hansen <dave.hansen@...ux.intel.com>, David Hildenbrand <david@...hat.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Andrew Morton <akpm@...ux-foundation.org>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>, Jens Axboe <axboe@...nel.dk>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, willy@...radead.org, x86@...nel.org, linux-block@...r.kernel.org,
linux-fsdevel@...r.kernel.org, "Darrick J . Wong" <djwong@...nel.org>, mcgrof@...nel.org,
gost.dev@...sung.com, hch@....de
Subject: Re: [PATCH 0/5] add STATIC_PMD_ZERO_PAGE config option
On Thu, Jun 12, 2025 at 02:46:34PM -0700, Dave Hansen wrote:
> On 6/12/25 13:36, Pankaj Raghav (Samsung) wrote:
> > On Thu, Jun 12, 2025 at 06:50:07AM -0700, Dave Hansen wrote:
> >> On 6/12/25 03:50, Pankaj Raghav wrote:
> >>> But to use huge_zero_folio, we need to pass an mm struct and
> >>> put_folio needs to be called in the destructor. This makes sense for
> >>> systems that have memory constraints, but for bigger servers it does
> >>> not matter, as long as the PMD size is reasonable (like on x86).
> >>
> >> So, what's the problem with calling a destructor?
> >>
> >> In your last patch, surely bio_add_folio() can put the page/folio when
> >> it's done. Is the real problem that you don't want to call
> >> zero-page-specific code at bio teardown?
> >
> > Yeah, it feels like a lot of code on the caller's side just to use a
> > zero page. It would be nice to have a call similar to ZERO_PAGE() in
> > these subsystems, where we have a guarantee of getting the huge zero
> > page.
> >
> > Apart from that, these are the problems with using
> > mm_get_huge_zero_folio() at the moment:
> >
> > - We might end up allocating a 512MB PMD zero page on ARM systems with
> > a 64k base page size, which is undesirable. With the patch series
> > posted, we will only enable the static huge zero page for sane
> > architectures and page sizes.
>
> Does *anybody* want the 512MB huge zero page? Maybe it should be an
> opt-in at runtime or something.
>
Yeah, I think that needs to be fixed. David also pointed this out in one
of his earlier reviews[1].
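For reference, the 512MB number comes from arm64 with a 64k base page
size; this is just my arithmetic, assuming 8-byte page table entries:

  PTRS_PER_PTE = 64K / 8    = 8192
  PMD_SIZE     = 8192 * 64K = 512M

So an unconditional static PMD zero page would pin half a gigabyte on
such systems, which is why it has to stay opt-in (or be limited by
architecture and page size, as the series tries to do).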
> > - In the current implementation we always call mm_put_huge_zero_folio()
> > in __mmput()[1]. I am not sure if this model will work for all
> > subsystems. For example, bio completions can be async, i.e., we might
> > need a reference to the zero page even after the process is no longer
> > alive.
>
> The mm is a nice, convenient place to stick a refcount, but there are
> other ways to keep an efficient refcount around. For instance, you could
> just bump a per-cpu refcount and then have the shrinker sum up all the
> refcounts to see if there are any outstanding on the system as a whole.
>
> I understand that the current refcounts are tied to an mm, but you could
> either replace the mm-specific ones or add something in parallel for
> when there's no mm.
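If I understand the suggestion correctly, it would be something roughly
like the sketch below (untested, and every identifier in it is made up
purely for illustration; nothing like this exists in the tree today):

  #include <linux/percpu.h>
  #include <linux/cpumask.h>
  #include <linux/shrinker.h>

  /* Illustrative only: one counter per CPU keeps get/put cheap. */
  static DEFINE_PER_CPU(long, huge_zero_folio_refs);

  static inline void huge_zero_folio_ref_get(void)
  {
          this_cpu_inc(huge_zero_folio_refs);
  }

  static inline void huge_zero_folio_ref_put(void)
  {
          this_cpu_dec(huge_zero_folio_refs);
  }

  static unsigned long huge_zero_count_objects(struct shrinker *shrink,
                                               struct shrink_control *sc)
  {
          long refs = 0;
          int cpu;

          /* Sum the per-CPU counters; report the folio as freeable
           * only when no references are outstanding system-wide. */
          for_each_possible_cpu(cpu)
                  refs += per_cpu(huge_zero_folio_refs, cpu);

          return refs > 0 ? 0 : 1;
  }

That would indeed decouple the lifetime from any particular mm.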
But the whole idea of allocating a static PMD zero page for sane
architectures like x86 started with the intent of avoiding the refcounting
and the shrinker altogether.
This was the initial feedback I got[2]:

    "I mean, the whole thing about dynamically allocating/freeing it was
     for memory-constrained systems. For large systems, we just don't
     care."
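What I would like these callers to end up with is something much closer
to ZERO_PAGE(): a plain accessor with no get/put pair and no destructor
hook. Purely as a hypothetical sketch (again, the names are made up and
this is not what the posted series implements):

  #ifdef CONFIG_STATIC_PMD_ZERO_PAGE
  /* Illustrative only: a PMD-sized zero folio set up once at boot and
   * never freed, so callers need no mm, no refcount and no shrinker. */
  extern struct folio *static_huge_zero_folio;

  static inline struct folio *get_static_huge_zero_folio(void)
  {
          return static_huge_zero_folio;
  }
  #endif

With something along those lines, a block-layer caller could hand the
folio to bio_add_folio() and would not need any zero-page-specific
teardown when the bio completes, which is the destructor concern from
the top of the thread.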
[1] https://lore.kernel.org/linux-mm/1e571419-9709-4898-9349-3d2eef0f8709@redhat.com/
[2] https://lore.kernel.org/linux-mm/cb52312d-348b-49d5-b0d7-0613fb38a558@redhat.com/
--
Pankaj