Date:   Wed, 1 Apr 2020 20:37:45 +0200
From:   Uladzislau Rezki <urezki@...il.com>
To:     "Paul E. McKenney" <paulmck@...nel.org>
Cc:     Uladzislau Rezki <urezki@...il.com>,
        Joel Fernandes <joel@...lfernandes.org>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        rcu@...r.kernel.org, willy@...radead.org, peterz@...radead.org,
        neilb@...e.com, vbabka@...e.cz, mgorman@...e.de,
        Andrew Morton <akpm@...ux-foundation.org>,
        Josh Triplett <josh@...htriplett.org>,
        Lai Jiangshan <jiangshanlai@...il.com>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH RFC] rcu/tree: Use GFP_MEMALLOC for alloc memory to free
 memory pattern

On Wed, Apr 01, 2020 at 11:26:15AM -0700, Paul E. McKenney wrote:
> On Wed, Apr 01, 2020 at 08:16:01PM +0200, Uladzislau Rezki wrote:
> > > > > 
> > > > > Right. In discussion with Paul, we agreed that it is better if we
> > > > > pre-allocate N array blocks per CPU and use them for the cache.
> > > > > The default for N is 1, tunable with a boot parameter. I agree with this.
> > > > > 
> > > > As discussed before, we can make use of the memory pool API for
> > > > this purpose. But I am not sure if it should be one pool per CPU
> > > > or a single shared pool that would contain NR_CPUS * N
> > > > pre-allocated blocks.
> > > 
> > > There are advantages and disadvantages either way.  The advantage of the
> > > per-CPU pool is that you don't have to worry about something like lock
> > > contention causing even more pain during an OOM event.  One potential
> > > problem with the per-CPU pool can happen when callbacks are offloaded,
> > > in which case the CPUs needing the memory might never get it,
> > > because in the offloaded case (RCU_NOCB_CPU=y) the CPU posting callbacks
> > > might never be invoking them.
> > > 
> > > But from what I know now, systems built with CONFIG_RCU_NOCB_CPU=y
> > > either don't have heavy callback loads (HPC systems) or are carefully
> > > configured (real-time systems).  Plus large systems would probably end
> > > up needing something pretty close to a slab allocator to keep from dying
> > > from lock contention, and it is hard to justify that level of complexity
> > > at this point.
> > > 
> > > Or is there some way to mark a specific slab allocator instance as being
> > > able to keep some amount of memory no matter what the OOM conditions are?
> > > If not, the current per-CPU pre-allocated cache is a better choice in the
> > > near term.
> > > 
> > As for mempool API:
> > 
> > mempool_alloc() just tries to make a regular allocation, taking the
> > passed gfp_t bitmask into account. If that fails due to memory
> > pressure, it falls back to the reserved pool, which consists of the
> > desired number of elements pre-allocated when the pool is created.
> > 
> > mempool_free() returns an element to the pool: if it detects that the
> > number of currently reserved elements is lower than the minimum
> > allowed, it adds the element to the reserved pool, i.e. refills it.
> > Otherwise it just calls kfree() or whatever we define as the
> > "element-freeing function."
> 
> Unless I am missing something, mempool_alloc() acquires a per-mempool
> lock on each invocation under OOM conditions.  For our purposes, this
> is essentially a global lock.  This will not be at all acceptable on a
> large system.
> 
It uses pool->lock to access the reserved objects, so if we had one
memory pool per CPU, the locking would be per-CPU rather than global
(see the sketch below).
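
Below is a minimal, untested sketch of that shape, one small mempool
per CPU. The names krc_pool, KRC_POOL_MIN, krc_alloc_block() and
krc_free_block() are made up for illustration and are not from any
posted patch:

#include <linux/init.h>
#include <linux/mempool.h>
#include <linux/percpu.h>
#include <linux/slab.h>

#define KRC_POOL_MIN 2	/* reserved elements per CPU, tunable */

static DEFINE_PER_CPU(mempool_t *, krc_pool);

static int __init krc_pool_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		/* kmalloc/kfree-backed pool of page-sized blocks. */
		per_cpu(krc_pool, cpu) =
			mempool_create_kmalloc_pool(KRC_POOL_MIN, PAGE_SIZE);
		if (!per_cpu(krc_pool, cpu))
			return -ENOMEM;
	}
	return 0;
}

static void *krc_alloc_block(void)
{
	/*
	 * Tries a regular allocation first; dips into the reserve
	 * (taking pool->lock) only when that fails, and the lock is
	 * only ever contended within this CPU's pool.
	 */
	return mempool_alloc(this_cpu_read(krc_pool), GFP_NOWAIT);
}

static void krc_free_block(void *p)
{
	/* Refills the reserve if below KRC_POOL_MIN, else kfree()s. */
	mempool_free(p, this_cpu_read(krc_pool));
}

Note that a real implementation would still have to handle elements
being freed on a different CPU than they were allocated on, which is
the offloaded-callback (RCU_NOCB_CPU=y) case mentioned above.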

--
Vlad Rezki
