lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20250220103223.2360-1-hdanton@sina.com>
Date: Thu, 20 Feb 2025 18:32:22 +0800
From: Hillf Danton <hdanton@...a.com>
To: Byungchul Park <byungchul@...com>
Cc: linux-kernel@...r.kernel.org,
	linux-mm@...ck.org
Subject: Re: [RFC PATCH v12 00/26] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%

On Thu, 20 Feb 2025 14:20:01 +0900 Byungchul Park <byungchul@...com>
> To check luf's stability, I ran a heavy LLM inference workload consuming
> 210GiB over 7 days on a machine with 140GiB memory, and decided it's
> stable enough.
> 
> I'm posting the latest version so that anyone can try luf mechanism if
> wanted by any chance.  However, I tagged RFC again because there are
> still issues that should be resolved to merge to mainline:
> 
>    1. Even though system wide total cpu time for TLB shootdown is
>       reduced over 95%, page allocation paths should take additional cpu
>       time shifted from page reclaim to perform TLB shootdown.
> 
>    2. We need luf debug feature to detect when luf goes wrong by any
>       chance.  I implemented just a draft version that checks the sanity
>       on mkwrite(), kmap(), and so on.  I need to gather better ideas
>       to improve the debug feature.
> 
> ---
> 
> Hi everyone,
> 
> While I'm working with a tiered memory system e.g. CXL memory, I have
> been facing migration overhead esp. tlb shootdown on promotion or
> demotion between different tiers.  Yeah..  most tlb shootdowns on
> migration through hinting fault can be avoided thanks to Huang Ying's
> work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> is inaccessible").
> 
> However, it's only for migration through hinting fault.  I thought it'd
> be much better if we have a general mechanism to reduce all the tlb
> numbers that we can apply to any unmap code, that we normally believe
> tlb flush should be followed.
> 
> I'm suggesting a new mechanism, LUF(Lazy Unmap Flush), that defers tlb
> flush until folios that have been unmapped and freed, eventually get
> allocated again.  It's safe for folios that had been mapped read-only
> and were unmapped, as long as the contents of the folios don't change
> while staying in pcp or buddy so we can still read the data through the
> stale tlb entries.
>
Given pcp or buddy, you are opening window for use after free which makes
no sense in 99% cases.

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ