Message-ID: <f8a643a9-4932-9ba4-94f1-4bc88ee27740@google.com>
Date: Wed, 13 Mar 2024 16:32:52 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Will Deacon <will@...nel.org>
cc: Nanyong Sun <sunnanyong@...wei.com>,
Catalin Marinas <catalin.marinas@....com>,
Matthew Wilcox <willy@...radead.org>, muchun.song@...ux.dev,
Andrew Morton <akpm@...ux-foundation.org>, anshuman.khandual@....com,
wangkefeng.wang@...wei.com, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Yu Zhao <yuzhao@...gle.com>, Yosry Ahmed <yosryahmed@...gle.com>,
Sourav Panda <souravpanda@...gle.com>
Subject: Re: [PATCH v3 0/3] A Solution to Re-enable hugetlb vmemmap
optimize
On Thu, 8 Feb 2024, Will Deacon wrote:
> > How about taking a new lock with IRQs disabled during BBM, like:
> >
> > +void vmemmap_update_pte(unsigned long addr, pte_t *ptep, pte_t pte)
> > +{
> > +        spin_lock_irq(NEW_LOCK);
> > +        pte_clear(&init_mm, addr, ptep);
> > +        flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> > +        set_pte_at(&init_mm, addr, ptep, pte);
> > +        spin_unlock_irq(NEW_LOCK);
> > +}
>
> I really think the only maintainable way to achieve this is to avoid the
> possibility of a fault altogether.
>
> Will
>
>
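For context, a minimal self-contained sketch of the locking approach being
discussed, including the fault-side serialization it would imply. The lock
name vmemmap_update_lock and the helper vmemmap_fault_serialize() are
assumptions made for this illustration, not code from the posted series.

#include <linux/mm.h>
#include <linux/pgtable.h>
#include <linux/spinlock.h>
#include <asm/tlbflush.h>

/* Assumed name: one lock serializing vmemmap PTE updates against faults. */
static DEFINE_SPINLOCK(vmemmap_update_lock);

void vmemmap_update_pte(unsigned long addr, pte_t *ptep, pte_t pte)
{
        /*
         * Break-before-make: clear the old entry, invalidate the TLB for
         * that page, then install the new entry, all under the lock with
         * local IRQs disabled.
         */
        spin_lock_irq(&vmemmap_update_lock);
        pte_clear(&init_mm, addr, ptep);
        flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
        set_pte_at(&init_mm, addr, ptep, pte);
        spin_unlock_irq(&vmemmap_update_lock);
}

/*
 * Hypothetical fault-side helper: a vmemmap access that faults on another
 * CPU during the window above would have to wait here before retrying.
 * This is the part Will objects to; his point is that the only
 * maintainable approach is to never take the fault in the first place.
 */
static void vmemmap_fault_serialize(void)
{
        spin_lock(&vmemmap_update_lock);
        spin_unlock(&vmemmap_update_lock);
}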
Nanyong, are you still actively working on making HVO possible on arm64?

This would yield substantial memory savings on hosts that are largely
configured with hugetlbfs. In our case, the size of this hugetlbfs pool
never actually changes after boot, but it sounds from this thread like
there was an idea to make HVO conditional on FEAT_BBM. Is that being
pursued?

If so, is any testing help needed?
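On the FEAT_BBM point, a rough sketch of what a runtime gate might look
like: read the sanitised ID_AA64MMFR2_EL1 value and require BBM level 2,
which is meant to avoid TLB conflict aborts when only the block size
changes. The helper name arm64_supports_bbm_level2() is made up for this
sketch, and the field macro names follow the kernel's generated sysreg
headers; this is illustrative, not a proposal from the thread.

#include <asm/cpufeature.h>
#include <asm/sysreg.h>

/* Illustrative only: require FEAT_BBM level >= 2 before enabling HVO. */
static bool arm64_supports_bbm_level2(void)
{
        u64 mmfr2 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);

        return cpuid_feature_extract_unsigned_field(mmfr2,
                                ID_AA64MMFR2_EL1_BBM_SHIFT) >= 2;
}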