Message-ID: <20200401134745.GV19865@paulmck-ThinkPad-P72>
Date:   Wed, 1 Apr 2020 06:47:45 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Uladzislau Rezki <urezki@...il.com>
Cc:     Joel Fernandes <joel@...lfernandes.org>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        rcu@...r.kernel.org, willy@...radead.org, peterz@...radead.org,
        neilb@...e.com, vbabka@...e.cz, mgorman@...e.de,
        Andrew Morton <akpm@...ux-foundation.org>,
        Josh Triplett <josh@...htriplett.org>,
        Lai Jiangshan <jiangshanlai@...il.com>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH RFC] rcu/tree: Use GFP_MEMALLOC for alloc memory to free
 memory pattern

On Wed, Apr 01, 2020 at 02:25:50PM +0200, Uladzislau Rezki wrote:

[ . . . ]

> > > > Paul was concerned about the following scenario hitting synchronize_rcu():
> > > > 1. Consider a system under memory pressure.
> > > > 2. Consider some other subsystem X depending on another subsystem Y which
> > > >    uses kfree_rcu(). If Y doesn't complete the operation in time, X
> > > >    accumulates more memory.
> > > > 3. Since kfree_rcu() on Y hits synchronize_rcu() a lot, Y slows down.
> > > >    This causes X to allocate even more memory, triggering a chain
> > > >    reaction.
> > > > Paul, please correct me if I'm wrong.
> > > > 
> > > I see your point and agree that in theory it can happen. So we
> > > should tighten the rcu_head attachment logic.
> > 
> > Right. Per discussion with Paul, it is better if we pre-allocate
> > N array blocks per CPU and use them for the cache, with N defaulting
> > to 1 and tunable via a boot parameter. I agree with this.
> > 
> As discussed before, we can make use of the memory pool API for this
> purpose. But I am not sure whether it should be one pool per CPU or
> one global pool containing NR_CPUS * N pre-allocated blocks.
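
For illustration, a minimal sketch of the single-global-pool variant
built on the mempool API (krc_pool, KRC_BLOCKS_PER_CPU, and the helpers
are made-up names for this sketch, not actual kernel code):

	/*
	 * Sketch only: one global mempool holding NR_CPUS * N
	 * page-sized blocks that survive memory pressure.
	 */
	#include <linux/init.h>
	#include <linux/mempool.h>
	#include <linux/cpumask.h>
	#include <linux/gfp.h>

	#define KRC_BLOCKS_PER_CPU	1	/* the "N" above */

	static mempool_t *krc_pool;

	static int __init krc_pool_init(void)
	{
		/* Pre-allocate order-0 pages for the reserve. */
		krc_pool = mempool_create_page_pool(num_possible_cpus() *
						    KRC_BLOCKS_PER_CPU, 0);
		return krc_pool ? 0 : -ENOMEM;
	}
	early_initcall(krc_pool_init);

	/* On the kfree_rcu() path: must not block, may dip into reserve. */
	static struct page *krc_alloc_block(void)
	{
		return mempool_alloc(krc_pool, GFP_NOWAIT);
	}

	static void krc_free_block(struct page *p)
	{
		mempool_free(p, krc_pool);
	}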

There are advantages and disadvantages either way.  The advantage of the
per-CPU pool is that you don't have to worry about something like lock
contention causing even more pain during an OOM event.  One potential
problem with the per-CPU pool arises when callbacks are offloaded: the
CPUs needing the memory might never get it, because in the offloaded
case (RCU_NOCB_CPU=y) the CPU posting callbacks might never be the one
invoking them.
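
A minimal sketch of the lock-free fast path such a per-CPU pool
enables (the struct and names are invented for illustration, not the
actual tree.c code):

	#include <linux/percpu.h>

	#define KRC_MAX_CACHED	1	/* "N", default 1 */

	/*
	 * Per-CPU stash of pre-allocated blocks; the fast path touches
	 * only CPU-local data, so an OOM event adds no cross-CPU lock
	 * contention.
	 */
	struct krc_cpu_cache {
		void *blocks[KRC_MAX_CACHED];
		int nr;
	};

	static DEFINE_PER_CPU(struct krc_cpu_cache, krc_cache);

	/* Called on the kfree_rcu() path with preemption disabled. */
	static void *krc_cache_get(void)
	{
		struct krc_cpu_cache *c = this_cpu_ptr(&krc_cache);

		return c->nr ? c->blocks[--c->nr] : NULL;
	}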

But from what I know now, systems built with CONFIG_RCU_NOCB_CPU=y
either don't have heavy callback loads (HPC systems) or are carefully
configured (real-time systems).  Plus large systems would probably end
up needing something pretty close to a slab allocator to keep from dying
from lock contention, and it is hard to justify that level of complexity
at this point.

Or is there some way to mark a specific slab allocator instance as being
able to keep some amount of memory no matter what the OOM conditions are?
If not, the current per-CPU pre-allocated cache is a better choice in the
near term.
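
And, continuing the illustrative sketch above, pre-filling that per-CPU
cache at boot rather than on first use (again a sketch, not actual
kernel code):

	#include <linux/init.h>
	#include <linux/gfp.h>
	#include <linux/cpumask.h>

	static int __init krc_prefill(void)
	{
		int cpu, i;

		for_each_possible_cpu(cpu) {
			struct krc_cpu_cache *c = per_cpu_ptr(&krc_cache, cpu);

			for (i = 0; i < KRC_MAX_CACHED; i++) {
				/* GFP_KERNEL is fine: no pressure at boot. */
				c->blocks[i] = (void *)__get_free_page(GFP_KERNEL);
				if (!c->blocks[i])
					break;
				c->nr++;
			}
		}
		return 0;
	}
	early_initcall(krc_prefill);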

							Thanx, Paul

> > In the current code, we have one cache page per CPU, but it is allocated
> > only on the first kvfree_rcu() request. So we could change this behavior
> > as well and make it pre-allocated.
> > 
> > Does this all sound good to you?
> > 
> I think that makes sense :)
> 
> --
> Vlad Rezki
