Date:   Wed, 24 Mar 2021 12:06:23 -0700
From:   Roman Gushchin <guro@...com>
To:     Dennis Zhou <dennis@...nel.org>
CC:     Tejun Heo <tj@...nel.org>, Christoph Lameter <cl@...ux.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
        Roman Gushchin <guro@...com>
Subject: [PATCH rfc 1/4] percpu: implement partial chunk depopulation

This patch implements partial depopulation of percpu chunks.

As of now, a chunk can be depopulated only as part of the final
destruction, when there are no outstanding allocations left. However,
to minimize memory waste, it might be useful to depopulate a
partially filled chunk, if a small number of outstanding allocations
prevents the chunk from being reclaimed.

This patch implements the following depopulation process: scan over
the chunk's pages, look for ranges of pages which are both empty and
populated, and depopulate them. To avoid races with new allocations,
the chunk is isolated beforehand. After the depopulation the chunk is
returned to its original slot (but appended to the tail of the list,
to minimize the chances of it being populated again).

Because pcpu_lock is dropped while calling pcpu_depopulate_chunk(),
the chunk can be concurrently moved to a different slot, so it has to
be isolated again on each step. pcpu_alloc_mutex is held, so the
chunk can't be populated or depopulated asynchronously.
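
A simplified sketch of the per-range step (using the helpers from the
code below; "end" stands in for the loop index i, and the bitmap
scanning and restart logic are omitted):

	/* under pcpu_lock: isolate the chunk from its slot */
	if (!list_empty(&chunk->list))
		list_del_init(&chunk->list);

	/* depopulation might sleep, so drop the lock around it */
	spin_unlock_irq(&pcpu_lock);
	pcpu_depopulate_chunk(chunk, start, end);
	cond_resched();
	spin_lock_irq(&pcpu_lock);

	/* update the chunk's counters under the lock */
	pcpu_chunk_depopulated(chunk, start, end);

	/* return the chunk to the tail of its original slot */
	if (list_empty(&chunk->list))
		list_add_tail(&chunk->list, &pcpu_slot[slot]);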

Signed-off-by: Roman Gushchin <guro@...com>
---
 mm/percpu.c | 94 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 94 insertions(+)

diff --git a/mm/percpu.c b/mm/percpu.c
index 6596a0a4286e..78c55c73fa28 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -2055,6 +2055,100 @@ static void __pcpu_balance_workfn(enum pcpu_chunk_type type)
 	mutex_unlock(&pcpu_alloc_mutex);
 }
 
+/**
+ * pcpu_shrink_populated - scan chunks and release unused pages to the system
+ * @type: chunk type
+ *
+ * Scan over all chunks of the given type and depopulate ranges of empty
+ * but still populated pages, as long as the total number of empty populated
+ * pages stays above PCPU_EMPTY_POP_PAGES_HIGH. Chunks are isolated from
+ * their slot for the duration of the depopulation and put back afterwards.
+ */
+static void pcpu_shrink_populated(enum pcpu_chunk_type type)
+{
+	struct list_head *pcpu_slot = pcpu_chunk_list(type);
+	struct pcpu_chunk *chunk;
+	int slot, i, off, start;
+
+	spin_lock_irq(&pcpu_lock);
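+	/* Scan slots from the highest: their chunks have the most free space. */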
+	for (slot = pcpu_nr_slots - 1; slot >= 0; slot--) {
+restart:
+		list_for_each_entry(chunk, &pcpu_slot[slot], list) {
+			bool isolated = false;
+
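+			/* Stop below the high watermark of empty populated pages. */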
+			if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_HIGH)
+				break;
+
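+			/* Find ranges of empty populated pages to depopulate. */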
+			for (i = 0, start = -1; i < chunk->nr_pages; i++) {
+				if (!chunk->nr_empty_pop_pages)
+					break;
+
+				/*
+				 * If the page is empty and populated, start or
+				 * extend the [start, i) range.
+				 */
+				if (test_bit(i, chunk->populated)) {
+					off = find_first_bit(
+						pcpu_index_alloc_map(chunk, i),
+						PCPU_BITMAP_BLOCK_BITS);
+					if (off >= PCPU_BITMAP_BLOCK_BITS) {
+						if (start == -1)
+							start = i;
+						continue;
+					}
+				}
+
+				/*
+				 * Otherwise check if there is an active range,
+				 * and if yes, depopulate it.
+				 */
+				if (start == -1)
+					continue;
+
+				/*
+				 * Isolate the chunk, so that new allocations
+				 * won't be served from it.
+				 * Async releases can still happen.
+				 */
+				if (!list_empty(&chunk->list)) {
+					list_del_init(&chunk->list);
+					isolated = true;
+				}
+
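+				/* Depopulating might sleep, drop pcpu_lock for it. */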
+				spin_unlock_irq(&pcpu_lock);
+				pcpu_depopulate_chunk(chunk, start, i);
+				cond_resched();
+				spin_lock_irq(&pcpu_lock);
+
+				pcpu_chunk_depopulated(chunk, start, i);
+
+				/*
+				 * Reset the range and continue.
+				 */
+				start = -1;
+			}
+
+			if (isolated) {
+				/*
+				 * The chunk could have been moved while
+				 * pcpu_lock wasn't held. Make sure we put
+				 * the chunk back into the slot and restart
+				 * the scanning.
+				 */
+				if (list_empty(&chunk->list))
+					list_add_tail(&chunk->list,
+						      &pcpu_slot[slot]);
+				goto restart;
+			}
+		}
+	}
+	spin_unlock_irq(&pcpu_lock);
+}
+
 /**
  * pcpu_balance_workfn - manage the amount of free chunks and populated pages
  * @work: unused
-- 
2.30.2
