Message-ID: <386c175e-bd10-2d5d-6051-4065f6f9b84a@intel.com>
Date: Fri, 15 Sep 2023 14:57:20 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Matteo Rizzo <matteorizzo@...gle.com>, cl@...ux.com,
penberg@...nel.org, rientjes@...gle.com, iamjoonsoo.kim@....com,
akpm@...ux-foundation.org, vbabka@...e.cz,
roman.gushchin@...ux.dev, 42.hyeyoo@...il.com,
keescook@...omium.org, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org, linux-mm@...ck.org,
linux-hardening@...r.kernel.org, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, hpa@...or.com, corbet@....net, luto@...nel.org,
peterz@...radead.org
Cc: jannh@...gle.com, evn@...gle.com, poprdi@...gle.com,
jordyzomer@...gle.com
Subject: Re: [RFC PATCH 11/14] mm/slub: allocate slabs from virtual memory
On 9/15/23 03:59, Matteo Rizzo wrote:
> + spin_lock_irqsave(&slub_kworker_lock, irq_flags);
> + list_splice_init(&slub_tlbflush_queue, &local_queue);
> + list_for_each_entry(slab, &local_queue, flush_list_elem) {
> + unsigned long start = (unsigned long)slab_to_virt(slab);
> + unsigned long end = start + PAGE_SIZE *
> + (1UL << oo_order(slab->oo));
> +
> + if (start < addr_start)
> + addr_start = start;
> + if (end > addr_end)
> + addr_end = end;
> + }
> + spin_unlock_irqrestore(&slub_kworker_lock, irq_flags);
> +
> + if (addr_start < addr_end)
> + flush_tlb_kernel_range(addr_start, addr_end);
I assume that the TLB flushes in the queue are going to be pretty sparse
on average, so the single min/max range computed above will usually
cover far more address space than actually needs flushing.
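
For example (hypothetical illustration only; the slab_area_base name
and the addresses are made up, not from the patch), two queued order-0
slabs that happen to sit 1GiB apart collapse into one huge range:

    /*
     * Hypothetical: two order-0 slabs queued for flushing, 1GiB
     * apart in the slab virtual area.
     */
    unsigned long start_a = slab_area_base;
    unsigned long end_a   = start_a + PAGE_SIZE;
    unsigned long start_b = slab_area_base + SZ_1G;
    unsigned long end_b   = start_b + PAGE_SIZE;

    /* The min/max reduction above turns these into one range... */
    unsigned long addr_start = min(start_a, start_b);
    unsigned long addr_end   = max(end_a, end_b);

    /* ...covering ~262144 pages in order to invalidate just 2. */
    flush_tlb_kernel_range(addr_start, addr_end);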
At least on x86, flush_tlb_kernel_range() falls back pretty quickly
from individual address invalidation to just doing a full flush. It
might not even be worth tracking the address ranges; just doing a full
flush every time would be simpler.
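
For reference, the x86 side currently looks roughly like this
(simplified sketch of arch/x86/mm/tlb.c from memory; the exact helpers
and the default ceiling may differ across kernel versions):

    void flush_tlb_kernel_range(unsigned long start, unsigned long end)
    {
            /* Wide (or "all") ranges degrade to a full TLB flush. */
            if (end == TLB_FLUSH_ALL ||
                (end - start) >> PAGE_SHIFT > tlb_single_page_flush_ceiling)
                    on_each_cpu(do_flush_tlb_all, NULL, 1);
            else
                    /* Otherwise INVLPG each page on every CPU
                     * (info carries start/end; setup elided). */
                    on_each_cpu(do_kernel_range_flush, &info, 1);
    }

The ceiling defaults to 33 pages, i.e. anything wider than ~132KiB is
already a full flush, so it only takes a couple of queued slabs spread
out in the VA space before the union blows past it.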
I'd be really curious to see how often actual ranged flushes are
triggered from this code. I expect it would be very near zero.
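
If someone wants to measure it, something like this at the call site
would do (hypothetical instrumentation, illustrative names only, not
part of the patch):

    /* Hypothetical counters -- dump via debugfs or printk later. */
    static atomic_long_t slub_flushes_ranged;
    static atomic_long_t slub_flushes_full;

            if (addr_start < addr_end) {
                    /* 33 is the default x86 single-page-flush ceiling. */
                    if (((addr_end - addr_start) >> PAGE_SHIFT) <= 33)
                            atomic_long_inc(&slub_flushes_ranged);
                    else
                            atomic_long_inc(&slub_flushes_full);
                    flush_tlb_kernel_range(addr_start, addr_end);
            }

Run some slab-heavy load and compare the two counters.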