Message-Id: <20211103170512.2745765-1-nsaenzju@redhat.com>
Date:   Wed,  3 Nov 2021 18:05:09 +0100
From:   Nicolas Saenz Julienne <nsaenzju@...hat.com>
To:     akpm@...ux-foundation.org
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        frederic@...nel.org, tglx@...utronix.de, peterz@...radead.org,
        mtosatti@...hat.com, nilal@...hat.com, mgorman@...e.de,
        linux-rt-users@...r.kernel.org, vbabka@...e.cz, cl@...ux.com,
        ppandit@...hat.com, Nicolas Saenz Julienne <nsaenzju@...hat.com>
Subject: [PATCH v2 0/3] mm/page_alloc: Remote per-cpu page list drain support

This series introduces a new locking scheme around mm/page_alloc.c's per-cpu
page lists which will allow remote CPUs to drain them. Currently, only the
local CPU is permitted to change its per-cpu lists, and it's expected to do so
on-demand whenever another process requests it (by means of queueing a drain
task on the local CPU). Most systems handle this promptly, but it causes
problems for NOHZ_FULL CPUs, which can't take any sort of interruption without
breaking their functional guarantees (latency, bandwidth, etc.).
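
For reference, the mechanism being replaced works roughly as in the sketch
below (a simplified illustration with made-up names, not the real code; see
__drain_all_pages() in mm/page_alloc.c for the actual implementation): since
the lists are only safe to touch locally, a remote requester can do no better
than queueing work on the owning CPU and waiting for it to run.

/*
 * Simplified sketch of the current behaviour: the pcp lists are
 * protected by a local_lock, so draining a remote CPU's lists means
 * interrupting that CPU with a work item, which is exactly what
 * NOHZ_FULL CPUs can't afford. INIT_WORK() setup is omitted.
 */
static DEFINE_PER_CPU(struct work_struct, pcpu_drain_sketch);

static void drain_local_pages_workfn(struct work_struct *work)
{
	drain_local_pages(NULL);	/* runs on the owning CPU */
}

static void drain_all_pages_sketch(void)
{
	int cpu;

	/* The real code only targets CPUs that hold pcp pages. */
	for_each_online_cpu(cpu)
		queue_work_on(cpu, mm_percpu_wq,
			      per_cpu_ptr(&pcpu_drain_sketch, cpu));
	for_each_online_cpu(cpu)
		flush_work(per_cpu_ptr(&pcpu_drain_sketch, cpu));
}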

This new locking scheme, based on per-cpu spinlocks, is the simplest and most
maintainable approach tried so far[1], although it also has a drawback: it
comes with a small performance penalty. Depending on the page allocation code
path being micro-benchmarked, we can expect 0% to 0.6% degradation on x86_64,
and 0% to 2% on arm64[2].
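
Concretely, the idea is along the lines of the sketch below (illustrative
only, with made-up names; patches #2 and #3 do the real conversion): give
each CPU's pcp structure its own spinlock, so any CPU can lock and drain any
other CPU's lists without an IPI or a work item.

struct pcp_sketch {
	spinlock_t		lock;	/* protects lists/count, takeable from any CPU */
	struct list_head	lists[NR_PCP_LISTS];
	int			count;
};

static DEFINE_PER_CPU(struct pcp_sketch, pcp_sketch);

static void drain_cpu_pages_sketch(int cpu)
{
	struct pcp_sketch *pcp = per_cpu_ptr(&pcp_sketch, cpu);
	unsigned long flags;

	spin_lock_irqsave(&pcp->lock, flags);
	/* the real code hands the pages to free_pcppages_bulk() here */
	spin_unlock_irqrestore(&pcp->lock, flags);
}

The penalty mentioned above presumably comes from the fast paths now taking a
spinlock (an atomic operation) where the local_lock only had to disable
preemption/IRQs.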

Assuming there is nothing too horrible in the patches themselves, I believe it
all comes down to whether we prefer to take the small performance hit vs. the
maintenance burden of a more complex solution[1]. I don't have enough
experience with performance tuning, nor with maintenance, to have an
authoritative opinion here, so I'll defer to whatever consensus is hopefully
reached in this discussion. Also, I'll be happy to run any extra tests that I
might have missed.

Patch #1 could be taken independently of the rest of the series, as it only
removes dead code.

The series is based on today's linux-next. 

Changes since v1:
 - Provide performance numbers
 - Uniformly use per-cpu spinlocks

[1] Other approaches can be found here:

  - Static branch conditional on nohz_full, no performance loss, but the extra
    config option makes it painful to maintain (v1; sketched after this list):
    https://lore.kernel.org/linux-mm/20210921161323.607817-5-nsaenzju@redhat.com/

  - RCU-based approach; complex, yet a bit less taxing performance-wise
    (RFC):
    https://lore.kernel.org/linux-mm/20211008161922.942459-4-nsaenzju@redhat.com/
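
For flavor, the static-branch variant works roughly as below (a hypothetical
reconstruction with made-up names; see the v1 link for the real patches):
keep the cheap local_lock fast path, and only fall back to the
remotely-takeable spinlock when nohz_full is in use, at the cost of
maintaining both schemes side by side.

static DEFINE_STATIC_KEY_FALSE(remote_pcp_drain);	/* enabled at boot if nohz_full CPUs exist */

static void pcp_lock_sketch(struct pcp_sketch *pcp, unsigned long *flags)
{
	if (static_branch_unlikely(&remote_pcp_drain))
		spin_lock_irqsave(&pcp->lock, *flags);
	else
		/* pagesets.lock is the existing local_lock in mm/page_alloc.c */
		local_lock_irqsave(&pagesets.lock, *flags);
}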

[2] See individual patches for in-depth results

---

Nicolas Saenz Julienne (3):
  mm/page_alloc: Don't pass pfn to free_unref_page_commit()
  mm/page_alloc: Convert per-cpu lists' local locks to per-cpu spin
    locks
  mm/page_alloc: Remotely drain per-cpu lists

 include/linux/mmzone.h |   1 +
 mm/page_alloc.c        | 151 ++++++++++++++---------------------------
 2 files changed, 52 insertions(+), 100 deletions(-)

-- 
2.33.1
