Message-ID: <3bd3f4d0-2504-a48c-adea-a28227a81d7a@linux.alibaba.com>
Date: Mon, 24 Feb 2020 11:09:06 +0800
From: 王贇 <yun.wang@...ux.alibaba.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Mel Gorman <mgorman@...e.de>, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Luis Chamberlain <mcgrof@...nel.org>,
Kees Cook <keescook@...omium.org>,
Iurii Zaikin <yzaikin@...gle.com>,
Michal Koutný <mkoutny@...e.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org,
"Paul E. McKenney" <paulmck@...ux.ibm.com>,
Randy Dunlap <rdunlap@...radead.org>,
Jonathan Corbet <corbet@....net>
Subject: Re: [PATCH RESEND v8 1/2] sched/numa: introduce per-cgroup NUMA
locality info
On 2020/2/21 11:28 PM, Peter Zijlstra wrote:
> On Mon, Feb 17, 2020 at 09:23:52PM +0800, 王贇 wrote:
>> FYI, by monitoring locality we found that the KVM vcpu thread is not
>> covered by NUMA Balancing: no matter how many maximum scan periods
>> pass, the counters do not increase, or increase only very slowly,
>> although inside the guest we are copying memory.
>>
>> Later we found that such a task rarely exits to user space, so the
>> task work callbacks that the NUMA Balancing scan depends on are
>> rarely triggered (see the sketch below the quote). This made us
>> realize how important it is to enable NUMA Balancing inside the
>> guest, with the correct NUMA topology; a big performance risk,
>> I'd say :-P
>
> That's a bug in KVM, see:
>
> https://lkml.kernel.org/r/20190801143657.785902257@linutronix.de
> https://lkml.kernel.org/r/20190801143657.887648487@linutronix.de
>
> ISTR there being newer versions of that patch-set, but I can't seem to
> find them in a hurry.
Aha, that's exactly the problem we saw; will check~
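
As I understand the linked series, the shape of the fix is to handle
pending task work before (re)entering the guest, the same way the
user-return path would. In the toy model above that would look roughly
like the following (hypothetical, not actual KVM code):

    /* Hypothetical extension of the toy model: flush pending task
     * work before entering the guest, mirroring the idea in the
     * patches Peter linked. */
    static void enter_guest(struct task *t)
    {
            /* run whatever the user-return path would have run */
            return_to_user_space(t);

            /* ... actually enter the guest ... */
    }

With the vcpu loop calling enter_guest() on each iteration, the queued
NUMA scan would no longer starve.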
Regards,
Michael Wang