Date:   Tue, 12 Oct 2021 17:45:42 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Nicolas Saenz Julienne <nsaenzju@...hat.com>,
        akpm@...ux-foundation.org
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        frederic@...nel.org, tglx@...utronix.de, peterz@...radead.org,
        mtosatti@...hat.com, nilal@...hat.com, mgorman@...e.de,
        linux-rt-users@...r.kernel.org, cl@...ux.com, paulmck@...nel.org,
        ppandit@...hat.com
Subject: Re: [RFC 0/3] mm/page_alloc: Remote per-cpu lists drain support

On 10/8/21 18:19, Nicolas Saenz Julienne wrote:
> This series replaces mm/page_alloc's per-cpu lists drain mechanism so that it
> can be run remotely. Currently, only a local CPU is permitted to
> change its per-cpu lists, and it's expected to do so, on-demand, whenever a
> process demands it (by means of queueing a drain task on the local CPU). Most
> systems will handle this promptly, but it'll cause problems for NOHZ_FULL CPUs
> that can't take any sort of interruption without breaking their functional
> guarantees (latency, bandwidth, etc...). Having a way for these processes to
> remotely drain the lists themselves will make co-existing with isolated CPUs
> possible, and comes with minimal performance[1]/memory cost to other users.
> 
> The new algorithm will atomically switch the pointer to the per-cpu lists and
> use RCU to make sure it's not being used before draining them. 
> 
> I'm interested in any sort of feedback, but especially in validating that the
> approach is acceptable, and any tests/benchmarks you'd like to see run against

So let's consider the added alloc/free fast paths overhead:
- Patch 1 - __alloc_pages_bulk() used to determine pcp_list once, now it's
determined for each allocated page in __rmqueue_pcplist().
- Patch 2 - adds indirection from pcp->$foo to pcp->lp->$foo in each operation
(see the sketch after this list)
- Patch 3
  - extra irqsave/irqrestore in free_pcppages_bulk (amortized)
  - rcu_dereference_check() in free_unref_page_commit() and __rmqueue_pcplist()
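
For anyone not looking at the patches, the Patch 2 indirection is roughly the
shape below. This is only an illustrative sketch; apart from pcp->lp the
struct/field/helper names are my own, not necessarily what the series uses:

struct pcplists {
	unsigned long count;
	struct list_head lists[NR_PCP_LISTS];
};

struct per_cpu_pages {
	/* ... */
	struct pcplists *lp;		/* currently active lists */
	struct pcplists pcplists[2];	/* spare copy for the drain side to swap in */
};

/* every fast-path list access now pays one extra dereference */
static inline struct list_head *pcp_list(struct per_cpu_pages *pcp, int pindex)
{
	/* was: return &pcp->lists[pindex]; */
	return &pcp->lp->lists[pindex];
}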

BTW - I'm not sure if the RCU usage is valid here.

The "read side" (normal operations) is using:
rcu_dereference_check(pcp->lp,
		lockdep_is_held(this_cpu_ptr(&pagesets.lock)));

where, according to the comments for rcu_dereference_check(), the lockdep
condition means

"indicate to lockdep that foo->bar may only be dereferenced if either
rcu_read_lock() is held, or that the lock required to replace the bar struct
at foo->bar is held."

but you are not taking rcu_read_lock() and the "write side" (remote
draining) actually doesn't take pagesets.lock, so it's not true that the
"lock required to replace ... is held"? The write side uses
rcu_replace_pointer(...,
			mutex_is_locked(&pcpu_drain_mutex))
which is a different lock.

IOW, synchronize_rcu_expedited() AFAICS has nothing (no rcu_read_lock()
readers) to synchronize against. It might accidentally work on !RT thanks to
disabled irqs, but I'm not sure about the RT lock semantics of the local_lock...

So, back to overhead: if I'm correct above, we can assume there would also be
an rcu_read_lock() in the fast paths.
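
I.e. something like the pattern below (hand-waving sketch only; it glosses
over pagesets.lock / irq disabling, and the function names are made up, but
pcp->lp and pcpu_drain_mutex are as in the series):

/* read side - alloc/free fast paths */
static struct page *fastpath_get_page(struct per_cpu_pages *pcp, int pindex)
{
	struct pcplists *lp;
	struct page *page;

	rcu_read_lock();
	lp = rcu_dereference(pcp->lp);
	page = list_first_entry_or_null(&lp->lists[pindex], struct page, lru);
	if (page)
		list_del(&page->lru);
	rcu_read_unlock();

	return page;
}

/* write side - remote drain swapping in the spare copy */
static void remote_drain(struct per_cpu_pages *pcp, struct pcplists *spare)
{
	struct pcplists *old;

	mutex_lock(&pcpu_drain_mutex);
	old = rcu_replace_pointer(pcp->lp, spare,
				  mutex_is_locked(&pcpu_drain_mutex));
	/* only after this is it guaranteed nobody still sees 'old' */
	synchronize_rcu_expedited();
	/* ... free whatever is left on old->lists ... */
	mutex_unlock(&pcpu_drain_mutex);
}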

The alternative proposed by tglx was IIRC that there would be a spinlock on
each cpu, which would be mostly uncontended except when draining. Maybe an
uncontended spin lock/unlock would have lower overhead than all of the
above? It would certainly be simpler, so I would probably try that first and
see if it's acceptable?
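
Something along these lines (again just a sketch, assuming a spinlock_t got
added to struct per_cpu_pages; the helper names are made up):

/* local fast path: the per-cpu lock is almost always uncontended */
static void pcp_free_page(struct per_cpu_pages *pcp, struct page *page, int pindex)
{
	unsigned long flags;

	spin_lock_irqsave(&pcp->lock, flags);
	list_add(&page->lru, &pcp->lists[pindex]);
	pcp->count++;
	spin_unlock_irqrestore(&pcp->lock, flags);
}

/* remote drain: no IPI or queued work, just take the target CPU's lock */
static void drain_remote_pcp(struct per_cpu_pages *pcp)
{
	unsigned long flags;

	spin_lock_irqsave(&pcp->lock, flags);
	/* ... free everything on pcp->lists ... */
	spin_unlock_irqrestore(&pcp->lock, flags);
}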

> it. For now, I've been testing this successfully on both arm64 and x86_64
> systems while forcing high memory pressure (i.e. forcing the
> page_alloc's slow path).
> 
> Patches 1-2 serve as cleanups/preparation to make patch 3 easier to follow.
> 
> Here's my previous attempt at fixing this:
> https://lkml.org/lkml/2021/9/21/599
> 
> [1] Proper performance numbers will be provided if the approach is deemed
>     acceptable. That said, mm/page_alloc.c's fast paths only grow by an extra
>     pointer indirection and a compiler barrier, which I think is unlikely to be
>     measurable.
> 
> ---
> 
> Nicolas Saenz Julienne (3):
>   mm/page_alloc: Simplify __rmqueue_pcplist()'s arguments
>   mm/page_alloc: Access lists in 'struct per_cpu_pages' indirectly
>   mm/page_alloc: Add remote draining support to per-cpu lists
> 
>  include/linux/mmzone.h |  24 +++++-
>  mm/page_alloc.c        | 173 +++++++++++++++++++++--------------------
>  mm/vmstat.c            |   6 +-
>  3 files changed, 114 insertions(+), 89 deletions(-)
> 
