Message-ID: <1532865776.28585.4.camel@surriel.com>
Date: Sun, 29 Jul 2018 08:02:56 -0400
From: Rik van Riel <riel@...riel.com>
To: Andy Lutomirski <luto@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
kernel-team <kernel-team@...com>,
Peter Zijlstra <peterz@...radead.org>, X86 ML <x86@...nel.org>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Ingo Molnar <mingo@...nel.org>, Mike Galbraith <efault@....de>,
Dave Hansen <dave.hansen@...el.com>, Will Deacon <will.deacon@....com>,
Catalin Marinas <catalin.marinas@....com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>
Subject: Re: [PATCH 04/10] x86,mm: use on_each_cpu_cond for TLB flushes
On Sat, 2018-07-28 at 19:58 -0700, Andy Lutomirski wrote:
> On Sat, Jul 28, 2018 at 2:53 PM, Rik van Riel <riel@...riel.com>
> wrote:
> > Instead of open coding bitmap magic, use on_each_cpu_cond
> > to determine which CPUs to send TLB flush IPIs to.
> >
> > This might be a little bit slower than examining the bitmaps,
> > but it should be a lot easier to maintain in the long run.
>
> Looks good.
>
> I assume it's not easy to get the remove-tables case to do a single
> on_each_cpu_cond() instead of two? Currently it's doing the lazy
> ones and the non-lazy ones separately.
Indeed. The TLB gather batch size means we need to send
IPIs to the non-lazy CPUs whenever we have gathered so
many pages that our tlb_gather data structure is full.
This could result in many IPIs during a large munmap.
The lazy CPUs get a single IPI before the page tables are freed.
--
All Rights Reversed.