Date: Tue, 6 Oct 2015 15:05:39 -0700
From: Andi Kleen <andi@...stfloor.org>
To: tytso@....edu
Cc: linux-kernel@...r.kernel.org, Andi Kleen <ak@...ux.intel.com>
Subject: [PATCH 2/3] random: Make input to output pool balancing per cpu

From: Andi Kleen <ak@...ux.intel.com>

The load balancing from the input pool to the output pools was
essentially unlocked. Before, that didn't matter much, because there
were only two choices (blocking and non-blocking). But now, with the
distributed non-blocking pools, there are many more pools, and unlocked
access to the counters may systematically deprive some nodes of their
deserved entropy.

Turn the round-robin state into per-CPU variables to avoid any
possibility of races. This code already runs with preemption disabled.

v2: Check for non-initialized pools.

Signed-off-by: Andi Kleen <ak@...ux.intel.com>
---
 drivers/char/random.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index e7e02c0..a395f783 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -774,15 +774,20 @@ retry:
 	if (entropy_bits > random_write_wakeup_bits &&
 	    r->initialized &&
 	    r->entropy_total >= 2*random_read_wakeup_bits) {
-		static struct entropy_store *last = &blocking_pool;
-		static int next_pool = -1;
-		struct entropy_store *other = &blocking_pool;
+		static DEFINE_PER_CPU(struct entropy_store *, lastp) =
+			&blocking_pool;
+		static DEFINE_PER_CPU(int, next_pool);
+		struct entropy_store *other = &blocking_pool, *last;
+		int np;

 		/* -1: use blocking pool, 0<=max_node: node nb pool */
-		if (next_pool > -1)
-			other = nonblocking_node_pool[next_pool];
-		if (++next_pool >= num_possible_nodes())
-			next_pool = -1;
+		np = __this_cpu_read(next_pool);
+		if (np > -1 && nonblocking_node_pool)
+			other = nonblocking_node_pool[np];
+		if (++np >= num_possible_nodes())
+			np = -1;
+		__this_cpu_write(next_pool, np);
+		last = __this_cpu_read(lastp);
 		if (other->entropy_count <=
 		    3 * other->poolinfo->poolfracbits / 4)
 			last = other;
@@ -791,6 +796,7 @@ retry:
 			schedule_work(&last->push_work);
 			r->entropy_total = 0;
 		}
+		__this_cpu_write(lastp, last);
 	}
 }
 }
-- 
2.4.3

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel"
in the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/