Message-ID: <20190906145327.32552.39455.stgit@localhost.localdomain>
Date: Fri, 06 Sep 2019 07:53:27 -0700
From: Alexander Duyck <alexander.duyck@...il.com>
To: nitesh@...hat.com, kvm@...r.kernel.org, mst@...hat.com,
david@...hat.com, dave.hansen@...el.com,
linux-kernel@...r.kernel.org, willy@...radead.org,
mhocko@...nel.org, linux-mm@...ck.org, akpm@...ux-foundation.org,
virtio-dev@...ts.oasis-open.org, osalvador@...e.de
Cc: yang.zhang.wz@...il.com, pagupta@...hat.com, riel@...riel.com,
konrad.wilk@...cle.com, lcapitulino@...hat.com,
wei.w.wang@...el.com, aarcange@...hat.com, pbonzini@...hat.com,
dan.j.williams@...el.com, alexander.h.duyck@...ux.intel.com
Subject: [PATCH v8 1/7] mm: Add per-cpu logic to page shuffling
From: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
Change the logic used to generate randomness in the shuffle path so that we
can avoid cache line bouncing. The previous logic shared the offset and
entropy word between all CPUs. As such, this can result in cache line
bouncing and will ultimately hurt performance when enabled.

To resolve this I have moved to per-cpu logic for maintaining an unsigned
long containing some number of bits, and an offset value for which bit we
can use for entropy with each call.
Signed-off-by: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
---
mm/shuffle.c | 33 +++++++++++++++++++++++----------
1 file changed, 23 insertions(+), 10 deletions(-)
diff --git a/mm/shuffle.c b/mm/shuffle.c
index 3ce12481b1dc..9ba542ecf335 100644
--- a/mm/shuffle.c
+++ b/mm/shuffle.c
@@ -183,25 +183,38 @@ void __meminit __shuffle_free_memory(pg_data_t *pgdat)
 		shuffle_zone(z);
 }
 
+struct batched_bit_entropy {
+	unsigned long entropy_bool;
+	int position;
+};
+
+static DEFINE_PER_CPU(struct batched_bit_entropy, batched_entropy_bool);
+
 void add_to_free_area_random(struct page *page, struct free_area *area,
 			     int migratetype)
 {
-	static u64 rand;
-	static u8 rand_bits;
+	struct batched_bit_entropy *batch;
+	unsigned long entropy;
+	int position;
 
 	/*
-	 * The lack of locking is deliberate. If 2 threads race to
-	 * update the rand state it just adds to the entropy.
+	 * We shouldn't need to disable IRQs as the only caller is
+	 * __free_one_page and it should only be called with the zone lock
+	 * held and either from IRQ context or with local IRQs disabled.
 	 */
-	if (rand_bits == 0) {
-		rand_bits = 64;
-		rand = get_random_u64();
+	batch = raw_cpu_ptr(&batched_entropy_bool);
+	position = batch->position;
+
+	if (--position < 0) {
+		batch->entropy_bool = get_random_long();
+		position = BITS_PER_LONG - 1;
 	}
 
-	if (rand & 1)
+	batch->position = position;
+	entropy = batch->entropy_bool;
+
+	if (1ul & (entropy >> position))
 		add_to_free_area(page, area, migratetype);
 	else
 		add_to_free_area_tail(page, area, migratetype);
-	rand_bits--;
-	rand >>= 1;
 }