Message-ID: <1341515412.4020.1230.camel@calx>
Date:	Thu, 05 Jul 2012 14:10:12 -0500
From:	Matt Mackall <mpm@...enic.com>
To:	Theodore Ts'o <tytso@....edu>
Cc:	Linux Kernel Developers List <linux-kernel@...r.kernel.org>,
	torvalds@...ux-foundation.org, w@....eu, ewust@...ch.edu,
	zakir@...ch.edu, greg@...ah.com, nadiah@...ucsd.edu,
	jhalderm@...ch.edu, tglx@...utronix.de, davem@...emloft.net
Subject: Re: [PATCH 02/10] random: use lockless techniques when mixing
 entropy pools

On Thu, 2012-07-05 at 14:12 -0400, Theodore Ts'o wrote:
> The real-time Linux folks didn't like add_interrupt_randomness()
> taking a spinlock since it is called in the low-level interrupt
> routine.  Using atomic_t's and cmpxchg is also too expensive on some
> of the older architectures.  So we'll bite the bullet and use
> ACCESS_ONCE() and smp_rmb()/smp_wmb() to minimize the race windows
> when mixing in the entropy pool.

I don't think this will work correctly. It's important that simultaneous
_readers_ of the state get different results. Otherwise, you can get
things like duplicate UUIDs generated on different cores, something
that's been observed in the field(!). I thought I added a comment to
that effect some years back, but I guess not.

This means that, at a bare minimum, you need an atomic operation like a
cmpxchg on some component like input_rotate. Per-cpu mix pointers also
won't work, as they can accidentally align. Per-cpu secret pads would
probably work, though they create an interesting initialization
problem.

On the other hand, you don't care about any of this when not extracting
and you can be as fast and loose as you'd like.

> +	input_rotate = ACCESS_ONCE(r->input_rotate);
> +	i = ACCESS_ONCE(r->add_ptr);
>  
>  	/* mix one byte at a time to simplify size handling and churn faster */
>  	while (nbytes--) {
> @@ -514,19 +514,19 @@ static void mix_pool_bytes_extract(struct entropy_store *r, const void *in,
>  		input_rotate += i ? 7 : 14;
>  	}
>  
> -	r->input_rotate = input_rotate;
> -	r->add_ptr = i;
> +	ACCESS_ONCE(r->input_rotate) = input_rotate;
> +	ACCESS_ONCE(r->add_ptr) = i;
> +	local_irq_restore(flags);
> +	smp_wmb();
>  
>  	if (out)
>  		for (j = 0; j < 16; j++)
>  			((__u32 *)out)[j] = r->pool[(i - j) & wordmask];

-- 
Mathematics is the supreme nostalgia of our time.


