Message-ID: <20250220122048.GA8305@system.software.com>
Date: Thu, 20 Feb 2025 21:20:48 +0900
From: Byungchul Park <byungchul@...com>
To: Hillf Danton <hdanton@...a.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
kernel_team@...ynix.com
Subject: Re: [RFC PATCH v12 00/26] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
On Thu, Feb 20, 2025 at 07:49:19PM +0800, Hillf Danton wrote:
> On Thu, 20 Feb 2025 20:09:35 +0900 Byungchul Park wrote:
> > On Thu, Feb 20, 2025 at 06:32:22PM +0800, Hillf Danton wrote:
> > > On Thu, 20 Feb 2025 14:20:01 +0900 Byungchul Park <byungchul@...com>
> > > > To check luf's stability, I ran a heavy LLM inference workload
> > > > consuming 210GiB over 7 days on a machine with 140GiB of memory, and
> > > > concluded it is stable enough.
> > > >
> > > > I'm posting the latest version so that anyone can try the luf
> > > > mechanism if they want. However, I tagged it RFC again because there
> > > > are still issues that must be resolved before merging to mainline:
> > > >
> > > > 1. Even though the system-wide total cpu time for TLB shootdown is
> > > > reduced by over 95%, page allocation paths must take on additional
> > > > cpu time, shifted from page reclaim, to perform the TLB shootdown.
> > > >
> > > > 2. We need a luf debug feature to detect if luf ever goes wrong.
> > > > I have implemented only a draft version that checks sanity on
> > > > mkwrite(), kmap(), and so on. I need to gather better ideas to
> > > > improve the debug feature.
> > > >
> > > > ---
> > > >
> > > > Hi everyone,
> > > >
> > > > While working with a tiered memory system, e.g. CXL memory, I have
> > > > been facing migration overhead, especially tlb shootdown on promotion
> > > > or demotion between different tiers. Admittedly, most tlb shootdowns
> > > > on migration through hinting faults can already be avoided thanks to
> > > > Huang Ying's work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb
> > > > in batch if PTE is inaccessible").
> > > >
> > > > However, that covers only migration through hinting faults. It would
> > > > be much better to have a general mechanism that reduces the number of
> > > > tlb flushes, applicable to any unmap code that we normally assume
> > > > must be followed by a tlb flush.
> > > >
> > > > I'm suggesting a new mechanism, LUF (Lazy Unmap Flush), that defers
> > > > the tlb flush for folios that have been unmapped and freed, until
> > > > they eventually get allocated again. It's safe for folios that had
> > > > been mapped read-only and were then unmapped: as long as the contents
> > > > of the folios don't change while staying in pcp or buddy, the data
> > > > read through the stale tlb entries is still valid.
> > > >
> > > Given pcp or buddy, you are opening a window for use-after-free, which
> > > makes no sense in 99% of cases.
> >
> > In case I don't understand what you meant, and for better understanding,
> > can you provide a simple, problematic example of the use-after-free?
> >
> Tell us if it is illegal to commit rape without pregnancy in your home town?
Memory overcommit also looked like cheating to someone like you. You
would surely call it total nonsense that each task believes it can use
its own full virtual address space.

We say a uaf is illegal only when it can access the freed area without
*appropriate permission*.
> PS deferring the tlb flush [1,2] is a no-go.
I will check this shortly.
Byungchul
>
> Subject: Re: [PATCH v4 29/30] x86/mm, mm/vmalloc: Defer flush_tlb_kernel_range() targeting NOHZ_FULL CPUs
> [1] https://lore.kernel.org/lkml/20250127155146.GB25757@willie-the-truck/
> [2] https://lore.kernel.org/lkml/xhsmhwmdwihte.mognet@vschneid-thinkpadt14sgen2i.remote.csb/