Message-ID: <ZD/xE6kR4RSOvUlR@tpad>
Date: Wed, 19 Apr 2023 10:48:03 -0300
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Frederic Weisbecker <frederic@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>,
Aaron Tomlin <atomlin@...mlin.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Russell King <linux@...linux.org.uk>,
Huacai Chen <chenhuacai@...nel.org>,
Heiko Carstens <hca@...ux.ibm.com>, x86@...nel.org,
Vlastimil Babka <vbabka@...e.cz>,
Michal Hocko <mhocko@...e.com>
Subject: Re: [PATCH v7 00/13] fold per-CPU vmstats remotely
On Wed, Apr 19, 2023 at 02:24:01PM +0200, Frederic Weisbecker wrote:
> On Wed, Apr 19, 2023 at 02:24:01PM +0200, Frederic Weisbecker wrote:
> > On Wed, Apr 19, 2023 at 08:29:47AM -0300, Marcelo Tosatti wrote:
> > > On Wed, Apr 19, 2023 at 08:14:09AM -0300, Marcelo Tosatti wrote:
> > > > This was tried before:
> > > > https://lore.kernel.org/lkml/20220127173037.318440631@fedora.localdomain/
> > > >
> > > > My conclusion from that discussion (and work) is that a special system
> > > > call:
> > > >
> > > > 1) Does not allow the benefits to be widely applied (only modified
> > > > applications will benefit), and is not portable across different
> > > > operating systems.
> > > >
> > > > Removing the vmstat_work interruption is a benefit for HPC workloads,
> > > > for example (in fact, it is a benefit for any kind of application,
> > > > since the interruption causes cache misses).
> > > >
> > > > 2) Increases the system call cost for applications which would use
> > > > the interface.
> > > >
> > > > So avoiding the vmstat_update interruption, without userspace
> > > > knowledge or modifications, is a better solution than a modified
> > > > userspace.
> > >
> > > Another important point is this: if an application dirties
> > > its own per-CPU vmstat cache, while performing a system call,
> >
> > Or while handling a VM-exit from a vCPU.
> >
> > These are, in my mind, sufficient reasons to discard the "flush per-cpu
> > caches" idea. This is also why I chose to abandon the prctl interface
> > patchset.
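To make that point concrete, here is a toy userspace model (plain C with
made-up names, not kernel code) of why a one-shot flush at a quiesce point
is not sufficient: any later system call or VM-exit that touches page state
re-dirties the per-CPU delta, so the deferred vmstat work gets re-armed anyway.

#include <stdio.h>

#define NR_ITEMS 2
static long vm_stat[NR_ITEMS];   /* global counters */
static long pcp_diff[NR_ITEMS];  /* this CPU's cached deltas */

/* roughly what a page allocation inside a syscall/VM-exit path does */
static void mod_state(int item, long delta)
{
	pcp_diff[item] += delta;     /* the per-CPU cache is dirty again */
}

/* what a quiescing prctl would do once, before the critical section */
static void fold_diffs(void)
{
	for (int i = 0; i < NR_ITEMS; i++) {
		vm_stat[i] += pcp_diff[i];
		pcp_diff[i] = 0;
	}
}

int main(void)
{
	mod_state(0, 1);
	fold_diffs();                /* "quiesce": the cache is clean ...  */
	mod_state(0, 1);             /* ... until the next syscall/VM-exit */
	printf("dirty delta after quiesce: %ld\n", pcp_diff[0]);
	return 0;
}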
>
> If you're running your isolated workloads on guests, which sounds quite
> challenging but I guess you guys managed, I'd expect that VMEXITs are
> absolutely out of the question while the task runs critical code, so I'm not
> sure why you would care. I guess not only your guests but also your hosts
> run nohz_full, right?
The answer is: there are VM-exits. For example, to write MSRs to program
the LAPIC timer.
Yes, both host and guest are nohz_full (but, for example, cyclictest
or a PLC program can call nanosleep in the guest, which translates to
an MSR write to program the LAPIC timer, which is a VM-exit).
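For reference, the guest-side pattern is just a periodic absolute-time sleep
loop, something like the sketch below (ordinary userspace C, illustrative
only). Each clock_nanosleep() arms a hrtimer, which on x86 with a
TSC-deadline LAPIC timer ends up as an MSR write, and therefore a VM-exit
when that MSR is intercepted.

#include <time.h>

int main(void)
{
	struct timespec next;

	clock_gettime(CLOCK_MONOTONIC, &next);
	for (int i = 0; i < 1000; i++) {
		next.tv_nsec += 1000000;              /* 1 ms period */
		if (next.tv_nsec >= 1000000000L) {
			next.tv_nsec -= 1000000000L;
			next.tv_sec++;
		}
		/* arming the timer is what leads to the LAPIC MSR write */
		clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
	}
	return 0;
}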
> I can't tell if the prctl solution which quiesces everything is the solution
> for you, I don't know well enough your workloads, but I would expect that
> the pattern is as follows:
>
> 1) Arrange for full isolation (no more interrupts/exceptions/VMEXITs)
Yes, this is the general scheme. Full isolation is automated by
tuned (realtime-virtual-host/realtime-virtual-guest profiles).
There are VM-exits in our use-case.
There might be use-cases where interrupts are desired.
For more details:
https://www.youtube.com/watch?v=SyhfctYqjc8
> 2) Run critical code
> 3) Optionally do something once you're done
>
> If vmstat is going to be the only thing to wait for on 1), then the remote
> solution looks good enough (although I leave that to -mm guys as I'm too
> clueless about those matters),
I am mostly clueless too, but I don't see a problem with the proposed
patch (and no one has pointed out any problem either).
> if there is more to be expected, I guess the
> quiescing prctl (or whatever syscall) is something to consider.
>
> Thanks.
I don't know of anything else to consider at the moment, and for all the
cases we have analyzed so far it has always been possible to do the work
remotely, via RCU or some other locking scheme, rather than requiring the
application to be modified (which decreases the number of userspace
applications that can benefit).
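For readers who have not followed the whole thread, the general idea of
doing the vmstat work remotely can be sketched roughly like this (userspace
C with C11 atomics and made-up names, not the actual patch code): the
isolated CPU only ever updates its own delta with atomic operations, and a
housekeeping CPU periodically grabs-and-zeroes that delta on its behalf, so
no work item ever has to run on the isolated CPU.

#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS 4
static _Atomic long pcp_diff[NR_CPUS];   /* per-CPU cached deltas */
static long vm_stat;                     /* global counter */

/* fast path, runs on the local (possibly isolated) CPU */
static void mod_state(int cpu, long delta)
{
	atomic_fetch_add_explicit(&pcp_diff[cpu], delta,
				  memory_order_relaxed);
}

/* slow path, runs on a housekeeping CPU on behalf of all other CPUs */
static void fold_remote(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		vm_stat += atomic_exchange(&pcp_diff[cpu], 0);
}

int main(void)
{
	mod_state(2, 3);     /* isolated CPU dirties its cache locally */
	fold_remote();       /* housekeeping CPU folds it, no interruption */
	printf("vm_stat = %ld\n", vm_stat);
	return 0;
}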