Message-ID: <20210702115235.GA238161@fuller.cnet>
Date: Fri, 2 Jul 2021 08:52:35 -0300
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Christoph Lameter <cl@...two.de>
Cc: linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Frederic Weisbecker <frederic@...nel.org>,
Juri Lelli <juri.lelli@...hat.com>,
Nitesh Lal <nilal@...hat.com>
Subject: Re: [patch 0/5] optionally sync per-CPU vmstats counter on return to
userspace
Hi Christoph,

On Fri, Jul 02, 2021 at 10:00:11AM +0200, Christoph Lameter wrote:
> On Thu, 1 Jul 2021, Marcelo Tosatti wrote:
>
> > The logic to disable vmstat worker thread, when entering
> > nohz full, does not cover all scenarios. For example, it is possible
> > for the following to happen:
> >
> > 1) enter nohz_full, which calls refresh_cpu_vm_stats, syncing the stats.
> > 2) app runs mlock, which increases counters for mlock'ed pages.
> > 3) start -RT loop
> >
> > Since refresh_cpu_vm_stats from nohz_full logic can happen _before_
> > the mlock, vmstat shepherd can restart vmstat worker thread on
> > the CPU in question.
>
> Can we enter nohz_full after the app runs mlock?
>
> > To fix this, optionally sync the vmstat counters when returning
> > from userspace, controllable by a new "vmstat_sync" isolcpus
> > flags (default off).
> >
> > See individual patches for details.
>
> Wow... This is going into some performance sensitive VM counters here and
> adds code to their primitives.

Yes, but it should all be behind a static key, so the performance
impact when isolcpus=vmstat_sync,CPULIST is not enabled should be
zero (assuming the patchset is correct! ...).

For the case where isolcpus=vmstat_sync is enabled, the most important
performance aspect is the latency spike that this patchset addresses.
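
Roughly, the fast-path guard looks like this (an illustrative sketch
only, not the patch itself; vmstat_sync_enabled, sync_vmstat() and
vmstat_user_return() are placeholder names):

	#include <linux/jump_label.h>

	/* Flipped on at boot when isolcpus=vmstat_sync,CPULIST is parsed */
	static DEFINE_STATIC_KEY_FALSE(vmstat_sync_enabled);

	/* Hooked into the return-to-userspace path */
	static inline void vmstat_user_return(void)
	{
		/*
		 * With the key disabled this is a patched-out NOP, so
		 * CPUs outside the isolated set pay nothing.  The real
		 * code would also check this CPU against the CPULIST.
		 */
		if (static_branch_unlikely(&vmstat_sync_enabled))
			sync_vmstat();	/* flush this CPU's vmstat deltas */
	}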
> Isn't there a simpler solution that does not require this amount of
> changes?

The one other change I can think of that could solve this problem
would be to allow remote access to the per-CPU vmstat counters
(which would require adding a local_lock), and that seems more
complex than this approach.
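
For reference, that alternative would look roughly like this (again
just a sketch with made-up names, to show where the complexity comes
from: every local update path grows a lock/unlock pair):

	#include <linux/local_lock.h>

	struct vmstat_pcpu {
		local_lock_t	lock;
		s8		diff[NR_VM_ZONE_STAT_ITEMS];
	};

	static DEFINE_PER_CPU(struct vmstat_pcpu, vmstat_pcpu) = {
		.lock = INIT_LOCAL_LOCK(lock),
	};

	/* Local counter update: now serialized against a remote flusher */
	local_lock(&vmstat_pcpu.lock);
	__this_cpu_add(vmstat_pcpu.diff[item], delta);
	local_unlock(&vmstat_pcpu.lock);

A remote CPU wanting to flush the counters would have to take the same
lock, and on !PREEMPT_RT local_lock only disables preemption, so making
remote access actually safe needs more surgery than the snippet shows.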