Date:   Tue, 1 Sep 2020 12:24:38 +0200
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Willy Tarreau <w@....eu>, linux-kernel@...r.kernel.org,
        netdev@...r.kernel.org
Cc:     Sedat Dilek <sedat.dilek@...il.com>, George Spelvin <lkml@....org>,
        Amit Klein <aksecurity@...il.com>,
        Eric Dumazet <edumazet@...gle.com>,
        "Jason A. Donenfeld" <Jason@...c4.com>,
        Andy Lutomirski <luto@...nel.org>,
        Kees Cook <keescook@...omium.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>, tytso@....edu,
        Florian Westphal <fw@...len.de>,
        Marc Plumb <lkml.mplumb@...il.com>
Subject: Re: [PATCH 2/2] random32: add noise from network and scheduling
 activity



On 8/31/20 11:43 PM, Willy Tarreau wrote:
> With the removal of the interrupt perturbations in previous random32
> change (random32: make prandom_u32() output unpredictable), the PRNG
> has become 100% deterministic again. While SipHash is expected to be
> way more robust against brute force than the previous Tausworthe LFSR,
> there's still the risk that whoever has even one temporary access to
> the PRNG's internal state is able to predict all subsequent draws till
> the next reseed (roughly every minute). This may happen through a side
> channel attack or any data leak.
> 
> This patch restores the spirit of commit f227e3ec3b5c ("random32: update
> the net random state on interrupt and activity") in that it will perturb
> the internal PRNG's state using externally collected noise, except that
> it will not pick that noise from the random pool's bits nor upon
> interrupt, but will rather combine a few elements along the Tx path
> that are collectively hard to predict, such as dev, skb and txq
> pointers, packet length and jiffies values. These ones are combined
> using a single round of SipHash into a single long variable that is
> mixed with the net_rand_state upon each invocation.
> 
> The operation was inlined because it produces very small and efficient
> code, typically 3 xor, 2 add and 2 rol. The performance was measured
> to be the same (even very slightly better) than before the switch to
> SipHash; on a 6-core 12-thread Core i7-8700k equipped with a 40G NIC
> (i40e), the connection rate dropped from 556k/s to 555k/s while the
> SYN cookie rate grew from 5.38 Mpps to 5.45 Mpps.
> 

> diff --git a/net/core/dev.c b/net/core/dev.c
> index b9c6f31ae96e..e075f7e0785a 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -144,6 +144,7 @@
>  #include <linux/indirect_call_wrapper.h>
>  #include <net/devlink.h>
>  #include <linux/pm_runtime.h>
> +#include <linux/prandom.h>
>  
>  #include "net-sysfs.h"
>  
> @@ -3557,6 +3558,7 @@ static int xmit_one(struct sk_buff *skb, struct net_device *dev,
>  		dev_queue_xmit_nit(skb, dev);
>  
>  	len = skb->len;
> +	PRANDOM_ADD_NOISE(skb, dev, txq, len + jiffies);
>  	trace_net_dev_start_xmit(skb, dev);
>  	rc = netdev_start_xmit(skb, dev, txq, more);
>  	trace_net_dev_xmit(skb, rc, dev, len);
> @@ -4129,6 +4131,7 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
>  			if (!skb)
>  				goto out;
>  
> +			PRANDOM_ADD_NOISE(skb, dev, txq, jiffies);
>  			HARD_TX_LOCK(dev, txq, cpu);
>  
>  			if (!netif_xmit_stopped(txq)) {
> @@ -4194,6 +4197,7 @@ int dev_direct_xmit(struct sk_buff *skb, u16 queue_id)
>  
>  	skb_set_queue_mapping(skb, queue_id);
>  	txq = skb_get_tx_queue(dev, skb);
> +	PRANDOM_ADD_NOISE(skb, dev, txq, jiffies);
>  
>  	local_bh_disable();
>  
> 
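
For readers skimming the quoted patch: the mixing step its commit message describes is a single SipHash permutation round folding four hard-to-predict words into one accumulator. Below is a minimal standalone sketch of that idea; SIPROUND follows the public SipHash round function, but the identifiers (noise, mix_noise) and the sample inputs are illustrative, not the kernel's actual code.

#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

#define ROL64(x, b) (((x) << (b)) | ((x) >> (64 - (b))))

/* One SipHash permutation round over four 64-bit words. */
#define SIPROUND(v0, v1, v2, v3) do {                           \
        (v0) += (v1); (v1) = ROL64((v1), 13); (v1) ^= (v0);     \
        (v0) = ROL64((v0), 32);                                 \
        (v2) += (v3); (v3) = ROL64((v3), 16); (v3) ^= (v2);     \
        (v0) += (v3); (v3) = ROL64((v3), 21); (v3) ^= (v0);     \
        (v2) += (v1); (v1) = ROL64((v1), 17); (v1) ^= (v2);     \
        (v2) = ROL64((v2), 32);                                 \
} while (0)

static uint64_t noise;  /* stands in for the kernel's per-cpu noise word */

static void mix_noise(uint64_t a, uint64_t b, uint64_t c, uint64_t d)
{
        a ^= noise;             /* chain the previous state into this round */
        SIPROUND(a, b, c, d);
        noise = d;              /* keep one output word, drop the rest */
}

int main(void)
{
        /* e.g. skb pointer, dev pointer, txq pointer, len + jiffies */
        mix_noise(0xffff888100abc000ULL, 0xffff888100def000ULL,
                  0xffff888100123400ULL, 1500 + 4294937296ULL);
        printf("noise = %016" PRIx64 "\n", noise);
        return 0;
}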

Hi Willy

There is not much entropy here, really:

1) dev & txq are mostly constant on a typical host (at least the kind of host targeted by
Amit Klein and others in their attacks).

2) len is also known to an attacker targeting an idle host.

3) skbs are also allocated from a slab cache, which tends to recycle the same pointers (on idle hosts)

4) jiffies might be incremented every 4 ms (if HZ=250).
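
(For scale: with HZ=250 a jiffy is 1000 / 250 = 4 ms, so the roughly one-minute reseed interval mentioned in the commit message covers only about 60000 / 4 = 15000 distinct jiffies values — under 14 bits of uncertainty at best.)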

Maybe we could feed per-cpu prandom noise with nanosecond-resolution timestamp samples,
lazily cached from ktime_get() or similar functions.

Updating the cache would cost a single instruction on x86, and the noise would be more generic.

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 4c47f388a83f17860fdafa3229bba0cc605ec25a..a3e026cbbb6e8c5499ed780e57de5fa09bc010b6 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -751,7 +751,7 @@ ktime_t ktime_get(void)
 {
        struct timekeeper *tk = &tk_core.timekeeper;
        unsigned int seq;
-       ktime_t base;
+       ktime_t res, base;
        u64 nsecs;
 
        WARN_ON(timekeeping_suspended);
@@ -763,7 +763,9 @@ ktime_t ktime_get(void)
 
        } while (read_seqcount_retry(&tk_core.seq, seq));
 
-       return ktime_add_ns(base, nsecs);
+       res = ktime_add_ns(base, nsecs);
+       __this_cpu_add(prandom_noise, (unsigned long)ktime_to_ns(res));
+       return res;
 }
 EXPORT_SYMBOL_GPL(ktime_get);
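
The hunk above assumes a per-cpu prandom_noise word declared elsewhere. A hypothetical sketch of what that declaration and a consumer could look like — prandom_take_noise is an illustrative name, not an existing kernel function; only the __this_cpu_add() line in ktime_get() comes from the diff itself:

#include <linux/percpu.h>

/* per-cpu accumulator cheaply fed from ktime_get() */
DEFINE_PER_CPU(unsigned long, prandom_noise);
EXPORT_PER_CPU_SYMBOL(prandom_noise);

/* one way a PRNG draw or reseed could fold in the accumulated noise */
static inline unsigned long prandom_take_noise(void)
{
        /* racy by design: a lost update merely loses some noise */
        return __this_cpu_read(prandom_noise);
}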
