Date:   Tue, 11 Aug 2020 03:47:24 +0000
From:   George Spelvin <lkml@....ORG>
To:     Willy Tarreau <w@....eu>
Cc:     netdev@...r.kernel.org, aksecurity@...il.com,
        torvalds@...ux-foundation.org, edumazet@...gle.com,
        Jason@...c4.com, luto@...nel.org, keescook@...omium.org,
        tglx@...utronix.de, peterz@...radead.org, tytso@....edu,
        lkml.mplumb@...il.com, stephen@...workplumber.org, fw@...len.de,
        George Spelvin <lkml@....ORG>
Subject: Re: [DRAFT PATCH] random32: make prandom_u32() output unpredictable

On Mon, Aug 10, 2020 at 01:47:00PM +0200, Willy Tarreau wrote:
> except that I retrieve it only on 1/8 calls
> and use the previous noise in this case.

Er... that's quite different.  I was saying you measure them all, and do:

 struct siprand_state {
 	...
+	uint32_t noise[8];	/* one timer sample per call */
+	unsigned counter;	/* counts down from 8; initialized to 8 */
 };
 	...
+	s->noise[--s->counter] = random_get_entropy();
+
+	if (!s->counter) {
+		for (i = 0; i < 4; i++)
+			s->v[i] += s->noise[2*i] +
+				((unsigned long)s->noise[2*i+1] << BITS_PER_LONG/2);
+		s->counter = 8;
+	}
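Or, written out as a helper (just a sketch, not tested; the name
siprand_mix_noise is mine, and it assumes s->counter is initialized
to 8 at seed time):

	static void siprand_mix_noise(struct siprand_state *s)
	{
		int i;

		/* Cheap path: record one timer sample per call. */
		s->noise[--s->counter] = random_get_entropy();
		if (s->counter)
			return;

		/* Every 8th call, fold all 8 samples into the state. */
		for (i = 0; i < 4; i++)
			s->v[i] += s->noise[2*i] +
				((unsigned long)s->noise[2*i+1] << BITS_PER_LONG/2);
		s->counter = 8;
	}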

What you're doing is just decreasing the amount of seeding by a factor
of 8.  (Roughly.  You do gain log2(8)/2 = 1.5 bits because the sum of
8 random values has a standard deviation sqrt(8) times as large as
the inputs.)
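(Spelling that out: for a roughly Gaussian sum of independent samples
with per-sample standard deviation sigma, entropy grows with the log
of the width, so

	H(sum of 8) - H(one sample) = log2(sqrt(8)*sigma) - log2(sigma)
	                            = log2(8)/2
	                            = 1.5 bits.)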

> diff --git a/lib/random32.c b/lib/random32.c
> index 2b048e2ea99f..a12d63028106 100644
> --- a/lib/random32.c
> +++ b/lib/random32.c
> @@ -317,6 +317,8 @@ static void __init prandom_state_selftest(void)
>  
>  struct siprand_state {
>  	unsigned long v[4];
> +	unsigned long noise;
> +	unsigned long count;
>  };
>  
>  static DEFINE_PER_CPU(struct siprand_state, net_rand_state) __latent_entropy;
> @@ -334,7 +336,7 @@ static DEFINE_PER_CPU(struct siprand_state, net_rand_state) __latent_entropy;
>  #define K0 (0x736f6d6570736575 ^ 0x6c7967656e657261 )
>  #define K1 (0x646f72616e646f6d ^ 0x7465646279746573 )
>  
> -#elif BITS_PER_LONG == 23
> +#elif BITS_PER_LONG == 32
>  /*
>   * On 32-bit machines, we use HSipHash, a reduced-width version of SipHash.
>   * This is weaker, but 32-bit machines are not used for high-traffic
> @@ -375,6 +377,12 @@ static u32 siprand_u32(struct siprand_state *s)
>  {
>  	unsigned long v0 = s->v[0], v1 = s->v[1], v2 = s->v[2], v3 = s->v[3];
>  
> +	if (++s->count >= 8) {
> +		v3 ^= s->noise;
> +		s->noise += random_get_entropy();
> +		s->count = 0;
> +	}
> +

- Can you explain why you save the "noise" until next time?  Is this meant to
  make it harder for an attacker to observe the time?
- How about doing away with s->count and making it statistical:

+	if ((v3 & 7) == 0)
+		v3 ^= random_get_entropy();

That still reseeds 1/8 of the time, but in a much less regular pattern.
(Admittedly, it will totally break the branch predictor.  An unlikely()
might help.)
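I.e. something like:

+	if (unlikely((v3 & 7) == 0))
+		v3 ^= random_get_entropy();

(Untested; whether the hint actually buys anything would need measuring.)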
