Message-Id: <1456809426-19341-2-git-send-email-andi@firstfloor.org>
Date: Mon, 29 Feb 2016 21:17:05 -0800
From: Andi Kleen <andi@...stfloor.org>
To: tytso@....edu
Cc: linux-kernel@...r.kernel.org, Andi Kleen <ak@...ux.intel.com>
Subject: [PATCH 2/3] random: Make input to output pool balancing per cpu
From: Andi Kleen <ak@...ux.intel.com>
The load balancing from the input pool to the output pools was
essentially unlocked. Previously this didn't matter much, because
there were only two choices (blocking and non-blocking). But now,
with the distributed non-blocking pools, there are many more pools,
and unlocked access to the round-robin counters may systematically
deprive some nodes of their deserved entropy.

Turn the round-robin state into per-CPU variables to avoid any
possibility of races. This code already runs with preemption
disabled.
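
For illustration only, here is a minimal sketch (not the actual patch)
of the per-CPU round-robin pattern used here. The pool array pools[],
its length npools, and pick_pool() are hypothetical stand-ins for the
real nonblocking_node_pool handling in random.c:

#include <linux/percpu.h>

/* Hypothetical pool array and count, standing in for nonblocking_node_pool. */
extern struct entropy_store *pools[];
extern int npools;

/* Per-CPU round-robin index: -1 means "use the blocking pool next". */
static DEFINE_PER_CPU(int, rr_index) = -1;

/*
 * Pick the next output pool for this CPU. Because the index lives in a
 * per-CPU variable and callers run with preemption disabled, no other
 * context can race on the read-modify-write below.
 */
static struct entropy_store *pick_pool(struct entropy_store *blocking)
{
	int idx = __this_cpu_read(rr_index);
	struct entropy_store *pool;

	pool = (idx >= 0) ? pools[idx] : blocking;

	if (++idx >= npools)
		idx = -1;
	__this_cpu_write(rr_index, idx);

	return pool;
}

Since each CPU keeps its own cursor, every CPU cycles through all pools
in order, so no pool can be starved by another CPU advancing the shared
counter underneath it.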
v2: Check for uninitialized pools.
v3: Make the per-CPU variables global to avoid warnings in some
configurations (reported by 0day)
Signed-off-by: Andi Kleen <ak@...ux.intel.com>
---
drivers/char/random.c | 20 +++++++++++++-------
1 file changed, 13 insertions(+), 7 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index e7e02c0..21ae44b 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -675,6 +675,9 @@ void init_node_pools(void)
#endif
}
+static DEFINE_PER_CPU(struct entropy_store *, lastp) = &blocking_pool;
+static DEFINE_PER_CPU(int, next_pool);
+
/*
* Credit (or debit) the entropy store with n bits of entropy.
* Use credit_entropy_bits_safe() if the value comes from userspace
@@ -774,15 +777,17 @@ retry:
if (entropy_bits > random_write_wakeup_bits &&
r->initialized &&
r->entropy_total >= 2*random_read_wakeup_bits) {
- static struct entropy_store *last = &blocking_pool;
- static int next_pool = -1;
- struct entropy_store *other = &blocking_pool;
+ struct entropy_store *other = &blocking_pool, *last;
+ int np;
/* -1: use blocking pool, 0<=max_node: node nb pool */
- if (next_pool > -1)
- other = nonblocking_node_pool[next_pool];
- if (++next_pool >= num_possible_nodes())
- next_pool = -1;
+ np = __this_cpu_read(next_pool);
+ if (np > -1 && nonblocking_node_pool)
+ other = nonblocking_node_pool[np];
+ if (++np >= num_possible_nodes())
+ np = -1;
+ __this_cpu_write(next_pool, np);
+ last = __this_cpu_read(lastp);
if (other->entropy_count <=
3 * other->poolinfo->poolfracbits / 4)
last = other;
@@ -791,6 +796,7 @@ retry:
schedule_work(&last->push_work);
r->entropy_total = 0;
}
+ __this_cpu_write(lastp, last);
}
}
}
--
2.5.0