Message-ID: <CAHKB1wLetbLZjhg1UVhA1QwZHo226BRL=Khm962JEfh0F+CVbQ@mail.gmail.com>
Date: Wed, 11 Oct 2023 11:17:18 +0200
From: Matteo Rizzo <matteorizzo@...gle.com>
To: Dave Hansen <dave.hansen@...el.com>
Cc: cl@...ux.com, penberg@...nel.org, rientjes@...gle.com,
	iamjoonsoo.kim@....com, akpm@...ux-foundation.org, vbabka@...e.cz,
	roman.gushchin@...ux.dev, 42.hyeyoo@...il.com, keescook@...omium.org,
	linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
	linux-mm@...ck.org, linux-hardening@...r.kernel.org,
	tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
	dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com,
	corbet@....net, luto@...nel.org, peterz@...radead.org,
	jannh@...gle.com, evn@...gle.com, poprdi@...gle.com,
	jordyzomer@...gle.com
Subject: Re: [RFC PATCH 11/14] mm/slub: allocate slabs from virtual memory

On Fri, 15 Sept 2023 at 23:57, Dave Hansen <dave.hansen@...el.com> wrote:
>
> I assume that the TLB flushes in the queue are going to be pretty sparse
> on average.
>
> At least on x86, flush_tlb_kernel_range() falls back pretty quickly from
> individual address invalidation to just doing a full flush. It might
> not even be worth tracking the address ranges, and just do a full flush
> every time.
>
> I'd be really curious to see how often actual ranged flushes are
> triggered from this code. I expect it would be very near zero.

I did some quick testing with kernel compilation. On x86,
flush_tlb_kernel_range() does a full flush when end - start is more than
33 pages, and a ranged flush otherwise. I counted how many of each we are
triggering from the TLB flush worker with some code like this:

	if (addr_start < addr_end) {
		if ((addr_end - addr_start) <= (33 << PAGE_SHIFT))
			partial_flush_count++;
		else
			full_flush_count++;
	}

Result after one run of kernbench:

# cat /proc/slab_tlbinfo
partial 88890 full 45223

So it seems that most flushes are ranged (at least for this workload).

--
Matteo
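For readers following along, below is a minimal standalone sketch of the
33-page threshold and the partial/full accounting described in the message
above. The PAGE_SHIFT value, constant names, and sample ranges are
illustrative assumptions, not code from the patch series or from
arch/x86/mm/tlb.c:

	/*
	 * Userspace sketch (hypothetical): classify each flushed range as
	 * "partial" (ranged flush) or "full", using the 33-page threshold
	 * that x86's flush_tlb_kernel_range() is described as applying.
	 */
	#include <stdio.h>

	#define PAGE_SHIFT		12UL	/* assumed 4 KiB pages */
	#define FLUSH_CEILING_PAGES	33UL	/* threshold from the message */

	int main(void)
	{
		/* Sample ranges, chosen only for illustration. */
		struct { unsigned long start, end; } ranges[] = {
			{ 0x1000UL,   0x5000UL   },	/*   4 pages -> partial */
			{ 0x100000UL, 0x200000UL },	/* 256 pages -> full    */
		};
		unsigned long partial = 0, full = 0;
		unsigned int i;

		for (i = 0; i < sizeof(ranges) / sizeof(ranges[0]); i++) {
			unsigned long len = ranges[i].end - ranges[i].start;

			if (len <= (FLUSH_CEILING_PAGES << PAGE_SHIFT))
				partial++;	/* ranged flush, page by page */
			else
				full++;		/* falls back to a full flush */
		}

		printf("partial %lu full %lu\n", partial, full);
		return 0;
	}

Applying the same arithmetic to the kernbench numbers reported above,
88890 of 134113 flushes (roughly two thirds) stayed under the 33-page
ceiling and were issued as ranged flushes.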