Date:   Wed, 1 Dec 2021 11:01:50 -0300
From:   Marcelo Tosatti <mtosatti@...hat.com>
To:     Nicolas Saenz Julienne <nsaenzju@...hat.com>
Cc:     Vlastimil Babka <vbabka@...e.cz>, akpm@...ux-foundation.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        frederic@...nel.org, tglx@...utronix.de, peterz@...radead.org,
        nilal@...hat.com, mgorman@...e.de, linux-rt-users@...r.kernel.org,
        cl@...ux.com, ppandit@...hat.com
Subject: Re: [PATCH v2 0/3] mm/page_alloc: Remote per-cpu page list drain
 support

On Tue, Nov 30, 2021 at 07:09:23PM +0100, Nicolas Saenz Julienne wrote:
> Hi Vlastimil, sorry for the late reply and thanks for your feedback. :)
> 
> On Tue, 2021-11-23 at 15:58 +0100, Vlastimil Babka wrote:
> > > [1] Other approaches can be found here:
> > > 
> > >   - Static branch conditional on nohz_full, no performance loss, the extra
> > >     config option makes it painful to maintain (v1):
> > >     https://lore.kernel.org/linux-mm/20210921161323.607817-5-nsaenzju@redhat.com/
> > > 
> > >   - RCU based approach, complex, yet a bit less taxing performance wise
> > >     (RFC):
> > >     https://lore.kernel.org/linux-mm/20211008161922.942459-4-nsaenzju@redhat.com/
> > 
> > Hm I wonder if there might still be another alternative possible. IIRC I did
> > propose at some point a local drain on the NOHZ cpu before returning to
> > userspace, and then avoiding that cpu in remote drains, but tglx didn't like
> > the idea of making entering the NOHZ full mode more expensive [1].
> > 
> > But what if we instead set pcp->high = 0 for these cpus so they would avoid
> > populating the pcplists in the first place? Then there wouldn't have to be a
> > drain at all. On the other hand page allocator operations would not benefit
> > from zone lock batching on those cpus. But perhaps that would be acceptable
> > tradeoff, as a nohz cpu is expected to run in userspace most of the time,
> > and page allocator operations are rare except maybe some initial page
> > faults? (I assume those kind of workloads pre-populate and/or mlock their
> > address space anyway).
> 
> I've looked a bit into this and it seems straightforward. Our workloads
> pre-populate everything, and a slight startup performance hit is not that tragic
> (I'll measure it nonetheless). The per-cpu nohz_full state at some point will
> be dynamic, but the feature seems simple to disable/enable. I'll have to teach
> __drain_all_pages(zone, force_all_cpus=true) to bypass this special case
> but that's all. I might have a go at this.
> 
> Thanks!
> 
> -- 
> Nicolás Sáenz

True, but a nohz cpu does not necessarily run in userspace most of the
time. For example, an application can enter nohz_full mode, return to
userspace, go idle, and come back from idle, all without ever leaving
nohz_full mode.

So it's not clear that nohz_full is an appropriate trigger for setting
pcp->high = 0. Perhaps a task isolation feature would be a more
appropriate place for this.
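
For readers following the thread, a rough sketch of the pcp->high = 0
idea quoted above, i.e. sizing the per-cpu pagesets so that nohz_full
CPUs never batch freed pages. This is illustrative only and not code
from the series; the function name and the zone->pageset_high /
zone->pageset_batch field names are assumptions:

#include <linux/mmzone.h>
#include <linux/percpu.h>
#include <linux/tick.h>

/*
 * Illustrative only -- not from this patch series. When (re)sizing a
 * zone's per-cpu pagesets, pin pcp->high to 0 on nohz_full CPUs so
 * freed pages go straight back to the buddy allocator and those CPUs
 * never hold anything a remote drain would have to flush.
 */
static void zone_update_pageset_high_and_batch_sketch(struct zone *zone)
{
        int cpu;

        for_each_possible_cpu(cpu) {
                struct per_cpu_pages *pcp =
                        per_cpu_ptr(zone->per_cpu_pageset, cpu);

                if (tick_nohz_full_cpu(cpu)) {
                        /* No pcplist batching on isolated CPUs. */
                        pcp->high = 0;
                        pcp->batch = 1;
                } else {
                        /* Field names assumed from the existing sizing logic. */
                        pcp->high = READ_ONCE(zone->pageset_high);
                        pcp->batch = READ_ONCE(zone->pageset_batch);
                }
        }
}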
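
And, under the same assumptions, a sketch of the __drain_all_pages()
special case Nicolas mentions: if pcp->high stays 0 on those CPUs they
never hold pages, so an ordinary drain can skip them, while a forced
drain (force_all_cpus == true, e.g. when pcplists are disabled for
memory offlining) must still consider every CPU. The helper below is
hypothetical and only shows where such a bypass would sit:

/*
 * Illustrative only. Hypothetical helper for the cpumask build in
 * __drain_all_pages(): nohz_full CPUs are assumed to keep pcp->high == 0
 * and thus never accumulate pages, so they can be skipped unless the
 * caller insists on draining everything.
 */
static bool cpu_needs_drain_sketch(int cpu, struct per_cpu_pages *pcp,
                                   bool force_all_cpus)
{
        /* Caller wants every CPU, e.g. before disabling pcplists. */
        if (force_all_cpus)
                return true;

        /* Assumed invariant: pcp->high is pinned to 0 on nohz_full CPUs. */
        if (tick_nohz_full_cpu(cpu))
                return false;

        return pcp->count != 0;
}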


