Message-ID: <20200221152824.GH18400@hirez.programming.kicks-ass.net>
Date: Fri, 21 Feb 2020 16:28:24 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: 王贇 <yun.wang@...ux.alibaba.com>
Cc: Mel Gorman <mgorman@...e.de>, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Luis Chamberlain <mcgrof@...nel.org>,
Kees Cook <keescook@...omium.org>,
Iurii Zaikin <yzaikin@...gle.com>,
Michal Koutný <mkoutny@...e.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org,
"Paul E. McKenney" <paulmck@...ux.ibm.com>,
Randy Dunlap <rdunlap@...radead.org>,
Jonathan Corbet <corbet@....net>
Subject: Re: [PATCH RESEND v8 1/2] sched/numa: introduce per-cgroup NUMA
locality info
On Mon, Feb 17, 2020 at 09:23:52PM +0800, 王贇 wrote:
> FYI, by monitoring locality we found that the kvm vcpu threads are not
> covered by NUMA Balancing: no matter how many maximum scan periods pass,
> the counters do not increase, or increase only very slowly, even though we
> are copying memory inside the guest.
>
> Later we found that such tasks rarely exit to user space to trigger the
> task work callbacks, which the NUMA Balancing scan depends on. That made
> us realize how important it is to enable NUMA Balancing inside the guest,
> with the correct NUMA topology, a big performance risk I'd say :-P
That's a bug in KVM, see:
https://lkml.kernel.org/r/20190801143657.785902257@linutronix.de
https://lkml.kernel.org/r/20190801143657.887648487@linutronix.de
ISTR there being newer versions of that patch-set, but I can't seem to
find them in a hurry.
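For reference, the dependency described above: the NUMA balancing scanner is
queued as task work from the scheduler tick, and task work only runs when the
task returns to user space, which a vcpu thread looping in ioctl(KVM_RUN)
rarely does. A simplified sketch, paraphrased from kernel/sched/fair.c (the
function names are the real ones, but the bodies below omit most checks,
locking and rate limiting, so this is not the exact upstream code):

	/*
	 * Tick path: queue the NUMA scan as task work. The callback will
	 * only run the next time @curr exits to user space.
	 */
	static void task_tick_numa(struct rq *rq, struct task_struct *curr)
	{
		struct callback_head *work = &curr->numa_work;

		if (time_after(jiffies, curr->mm->numa_next_scan))
			task_work_add(curr, work, true);
	}

	/*
	 * The scan itself: walk the VMAs and mark ranges PROT_NONE so the
	 * resulting NUMA hinting faults reveal where the task's memory lives.
	 */
	static void task_numa_work(struct callback_head *work)
	{
		struct task_struct *p = current;
		struct vm_area_struct *vma;

		for (vma = p->mm->mmap; vma; vma = vma->vm_next)
			change_prot_numa(vma, vma->vm_start, vma->vm_end);
	}

A vcpu thread that stays in the KVM_RUN loop never reaches the
exit-to-usermode path, so task_numa_work() is never invoked and the locality
counters stay flat, which matches the observation quoted above; the patches
linked above make KVM run pending task work before entering the guest.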