Message-ID: <1527989886.7898.96.camel@surriel.com>
Date: Sat, 02 Jun 2018 21:38:06 -0400
From: Rik van Riel <riel@...riel.com>
To: Song Liu <songliubraving@...com>, Andy Lutomirski <luto@...nel.org>
Cc: Mike Galbraith <efault@....de>,
LKML <linux-kernel@...r.kernel.org>,
Kernel Team <Kernel-team@...com>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>, X86 ML <x86@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH] x86,switch_mm: skip atomic operations for init_mm
On Sun, 2018-06-03 at 00:51 +0000, Song Liu wrote:
> > Just to check: in the workload where you're seeing this problem,
> > are you using an mm with many threads? I would imagine that, if
> > you only have one or two threads, the bit operations aren't so bad.
>
> Yes, we are running netperf/netserver with 300 threads. We don't see
> this much overhead with real workloads.
We may not, but there are some crazy workloads out
there in the world. Think of some Java programs with
thousands of threads, causing a million context
switches a second on a large system.
I like Andy's idea of having one cache line with
a cpumask per node. That seems like it will have
fewer downsides for tasks with fewer threads running
on giant systems.
I'll throw out the code I was working on, and look
into implementing that :)
--
All Rights Reversed.