Message-ID: <alpine.DEB.2.22.394.2107051622580.278143@gentwo.de>
Date: Mon, 5 Jul 2021 16:26:48 +0200 (CEST)
From: Christoph Lameter <cl@...two.de>
To: Marcelo Tosatti <mtosatti@...hat.com>
cc: linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Frederic Weisbecker <frederic@...nel.org>,
Juri Lelli <juri.lelli@...hat.com>,
Nitesh Lal <nilal@...hat.com>
Subject: Re: [patch 0/5] optionally sync per-CPU vmstats counter on return
to userspace
On Fri, 2 Jul 2021, Marcelo Tosatti wrote:
> > > The logic to disable the vmstat worker thread, when entering
> > > nohz_full, does not cover all scenarios. For example, it is possible
> > > for the following to happen:
> > >
> > > 1) enter nohz_full, which calls refresh_cpu_vm_stats, syncing the stats.
> > > 2) app runs mlock, which increases counters for mlock'ed pages.
> > > 3) start -RT loop
> > >
> > > Since refresh_cpu_vm_stats from the nohz_full logic can happen _before_
> > > the mlock, the vmstat shepherd can restart the vmstat worker thread on
> > > the CPU in question (the sequence is sketched below).
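For illustration, a minimal userspace sketch of the sequence above
(assumptions: the task is already pinned to an isolated nohz_full CPU,
and the RT priority and loop body are placeholders):

    #include <sched.h>
    #include <sys/mman.h>

    int main(void)
    {
            struct sched_param p = { .sched_priority = 1 };

            /* Step 2: mlock current and future mappings; this bumps
             * the per-CPU NR_MLOCK vmstat counters after the sync
             * from step 1 (nohz_full entry) has already run. */
            if (mlockall(MCL_CURRENT | MCL_FUTURE))
                    return 1;

            /* Step 3: enter the latency-sensitive -RT loop; from
             * here on the task expects no kernel interruptions. */
            sched_setscheduler(0, SCHED_FIFO, &p);
            for (;;)
                    ;
    }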
> >
> > Can we enter nohz_full after the app runs mlock?
>
> Hum, I don't think it's a good idea to use that route, because
> entering or exiting nohz_full depends on a number of variables
> outside of one's control (and additional variables might be
> added in the future).
Then I do not see any need for this patch, because after a certain time
of inactivity (after the mlock) the system will enter nohz_full again.
If userspace has no direct control over nohz_full and can only wait,
then it just has to do so.
> So preparing the system to keep functioning when nohz_full is
> entered at any point seems the sane thing to do.
>
> And that sync point would be the return to userspace (since, once
> the memory is mlocked, there will be no more changes to propagate
> to the vmstat counters after that point).
>
> Or am I missing something else you can think of?
I assumed that "enter nohz_full" was an action taken by the userspace
app, because I had seen patches introducing such functionality in the
past.
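
For reference, the kernel-side proposal under discussion could look
roughly like the sketch below (hypothetical only, not the actual
patches; note that refresh_cpu_vm_stats() is currently static in
mm/vmstat.c, so a real hook would need a wrapper or an export):

    #include <linux/smp.h>
    #include <linux/tick.h>
    #include <linux/vmstat.h>

    /* Hypothetical hook on the return-to-userspace path. */
    static void vmstat_sync_on_return_to_user(void)
    {
            if (!tick_nohz_full_cpu(smp_processor_id()))
                    return;

            /*
             * Fold this CPU's per-CPU vmstat deltas into the global
             * counters, so the vmstat shepherd sees clean counters
             * and does not requeue the vmstat worker on this CPU.
             */
            refresh_cpu_vm_stats(false);
    }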