Date:   Tue, 27 Sep 2022 00:04:57 +0200
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Cc:     "Jason A. Donenfeld" <Jason@...c4.com>,
        Sherry Yang <sherry.yang@...cle.com>,
        Paul Webb <paul.x.webb@...cle.com>,
        Phillip Goerl <phillip.goerl@...cle.com>,
        Jack Vogel <jack.vogel@...cle.com>,
        Nicky Veitch <nicky.veitch@...cle.com>,
        Colm Harrington <colm.harrington@...cle.com>,
        Ramanan Govindarajan <ramanan.govindarajan@...cle.com>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Tejun Heo <tj@...nel.org>,
        Sultan Alsawaf <sultan@...neltoast.com>, stable@...r.kernel.org
Subject: [PATCH v2] random: use immediate per-cpu timer rather than workqueue for mixing fast pool

Previously, the fast pool was dumped into the main pool periodically in
the fast pool's hard IRQ handler. This worked fine, and there were no
problems with it, until RT came around. Since RT converts spinlocks into
sleeping locks, problems cropped up. Rather than switching to raw
spinlocks, the RT developers preferred we make the transformation from
originally doing:

    do_some_stuff()
    spin_lock()
    do_some_other_stuff()
    spin_unlock()

to doing:

    do_some_stuff()
    queue_work_on(some_other_stuff_worker)
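
In real code, that second form looks roughly like the following minimal
sketch using the workqueue API (the struct and function names here are
illustrative, not the driver's actual code):

    /* Hypothetical sketch: deferring the locked section to a worker. */
    #include <linux/workqueue.h>
    #include <linux/smp.h>

    struct some_state {
            struct work_struct mix;        /* deferred portion of the work */
    };

    static void some_other_stuff_worker(struct work_struct *work)
    {
            struct some_state *s = container_of(work, struct some_state, mix);

            /* The spin_lock()/spin_unlock() section moves here, into process
             * context, where RT's sleeping spinlocks are harmless. */
            (void)s;
    }

    static void irq_path(struct some_state *s)
    {
            /* do_some_stuff() ... */

            /* Assumes INIT_WORK(&s->mix, some_other_stuff_worker) was done
             * once beforehand. */
            queue_work_on(raw_smp_processor_id(), system_highpri_wq, &s->mix);
    }
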

This is an ordinary pattern done all over the kernel. However, Sherry
noticed a 10% performance regression in qperf TCP over a 40Gbps
InfiniBand card. Quoting her message:

> MT27500 Family [ConnectX-3] cards:
> Infiniband device 'mlx4_0' port 1 status:
> default gid: fe80:0000:0000:0000:0010:e000:0178:9eb1
> base lid: 0x6
> sm lid: 0x1
> state: 4: ACTIVE
> phys state: 5: LinkUp
> rate: 40 Gb/sec (4X QDR)
> link_layer: InfiniBand
>
> Cards are configured with IP addresses on private subnet for IPoIB
> performance testing.
> Regression identified in this bug is in TCP latency in this stack as reported
> by qperf tcp_lat metric:
>
> We have one system listen as a qperf server:
> [root@...rQperfServer ~]# qperf
>
> Have the other system connect to qperf server as a client (in this
> case, it’s X7 server with Mellanox card):
> [root@...rQperfClient ~]# numactl -m0 -N0 qperf 20.20.20.101 -v -uu -ub --time 60 --wait_server 20 -oo msg_size:4K:1024K:*2 tcp_lat

Rather than incur the scheduling latency from queue_work_on, we can
instead switch to running on the next timer tick, on the same core,
deferrably so. This also batches things a bit more -- once per jiffy --
which is probably okay now that mix_interrupt_randomness() can credit
multiple bits at once. It still puts a bit of pressure on fast_mix(),
but hopefully that's acceptable.
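
Concretely, the replacement is a statically-initialized deferrable
per-CPU timer armed for the current jiffy on the local CPU. A
simplified, hypothetical sketch of that pattern (the real change is in
the diff below):

    /* Hypothetical sketch of the deferrable per-CPU timer pattern. */
    #include <linux/timer.h>
    #include <linux/percpu.h>
    #include <linux/jiffies.h>
    #include <linux/smp.h>

    struct some_pool {
            struct timer_list mix;         /* runs on the next tick, same CPU */
    };

    static void mix_timer_fn(struct timer_list *t)
    {
            struct some_pool *pool = container_of(t, struct some_pool, mix);

            /* The heavier mixing happens here, in timer (softirq) context,
             * off the hard IRQ path. */
            (void)pool;
    }

    static DEFINE_PER_CPU(struct some_pool, some_pools) = {
            .mix = __TIMER_INITIALIZER(mix_timer_fn, TIMER_DEFERRABLE),
    };

    static void from_hard_irq(void)
    {
            struct some_pool *pool = this_cpu_ptr(&some_pools);

            if (!timer_pending(&pool->mix)) {
                    pool->mix.expires = jiffies;   /* i.e. the next tick */
                    add_timer_on(&pool->mix, raw_smp_processor_id());
            }
    }

Because the timer is TIMER_DEFERRABLE, an idle CPU is not woken up just
to do the mixing; it runs whenever the CPU next services timers anyway.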

Hopefully this restores performance to what it was prior to the RT
changes.

Reported-by: Sherry Yang <sherry.yang@...cle.com>
Reported-by: Paul Webb <paul.x.webb@...cle.com>
Cc: Sherry Yang <sherry.yang@...cle.com>
Cc: Phillip Goerl <phillip.goerl@...cle.com>
Cc: Jack Vogel <jack.vogel@...cle.com>
Cc: Nicky Veitch <nicky.veitch@...cle.com>
Cc: Colm Harrington <colm.harrington@...cle.com>
Cc: Ramanan Govindarajan <ramanan.govindarajan@...cle.com>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Tejun Heo <tj@...nel.org>
Cc: Sultan Alsawaf <sultan@...neltoast.com>
Cc: stable@...r.kernel.org
Fixes: 58340f8e952b ("random: defer fast pool mixing to worker")
Signed-off-by: Jason A. Donenfeld <Jason@...c4.com>
---
 drivers/char/random.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 1cb53495e8f7..08bb46a50802 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -928,17 +928,20 @@ struct fast_pool {
 	unsigned long pool[4];
 	unsigned long last;
 	unsigned int count;
-	struct work_struct mix;
+	struct timer_list mix;
 };
 
+static void mix_interrupt_randomness(struct timer_list *work);
+
 static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
 #ifdef CONFIG_64BIT
 #define FASTMIX_PERM SIPHASH_PERMUTATION
-	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 }
+	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 },
 #else
 #define FASTMIX_PERM HSIPHASH_PERMUTATION
-	.pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 }
+	.pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 },
 #endif
+	.mix = __TIMER_INITIALIZER(mix_interrupt_randomness, TIMER_DEFERRABLE)
 };
 
 /*
@@ -980,7 +983,7 @@ int __cold random_online_cpu(unsigned int cpu)
 }
 #endif
 
-static void mix_interrupt_randomness(struct work_struct *work)
+static void mix_interrupt_randomness(struct timer_list *work)
 {
 	struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
 	/*
@@ -1034,10 +1037,11 @@ void add_interrupt_randomness(int irq)
 	if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
 		return;
 
-	if (unlikely(!fast_pool->mix.func))
-		INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
 	fast_pool->count |= MIX_INFLIGHT;
-	queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
+	if (!timer_pending(&fast_pool->mix)) {
+		fast_pool->mix.expires = jiffies;
+		add_timer_on(&fast_pool->mix, raw_smp_processor_id());
+	}
 }
 EXPORT_SYMBOL_GPL(add_interrupt_randomness);
 
-- 
2.37.3
