Date:	13 Oct 2015 03:50:21 -0400
From:	"George Spelvin" <linux@...izon.com>
To:	ahferroin7@...il.com, andi@...stfloor.org, jepler@...ythonic.net,
	linux-kernel@...r.kernel.org, linux@...izon.com,
	linux@...musvillemoes.dk, shentino@...il.com, tytso@....edu
Subject: Re: Updated scalable urandom patchkit

> This might be stupid, but could something asynchronous work?  Perhaps 
> have the entropy generators dump their entropy into a central pool via 
> a cycbuf, and have a background kthread manage the per-cpu or 
> per-process entropy pools?

No, for two reasons:

(Minor): One of the functions of the mixback is to ensure that the next
	reader hashes a *different* pool state.  If the mixback is
	delayed, the next reader might hash the *same* pool state and
	return the same numbers.  (There are easy workarounds for this.)
(Major): What do you do when the circular buffer is full?  If it's not safe
	to skip the mixback, then we have to block and get into the same
	lock-contention problem.


But... this suggestion of having a separate thread do the mixback gives
me an idea.  In fact, I think it's a good idea.

Ted (or anyone else listening), what do you think of the following?
I think it would solve Andi's problem and be a smaller code change than
the abuse mitigation mode.  (Which is still a good idea, but is off the
critical path.)

- Associated with a pool is an atomic "mixback needed" flag.
- Also an optional mixback buffer.  (Optional because the mixback
  could just recompute it.)

- Dropping the lock requires the following operations (C sketch after the list):
  - Test the mixback needed flag.  If set,
    - Copy out and wipe the buffer,
    - smp_wmb()
    - Clear the flag
    - smp_wmb()
    - Do the mixback, and
    - Re-check before dropping the lock.
    (This check before dropping the lock is technically an optional
    optimization.)
  - Drop the lock.
  - smp_mb()	(Since it's write-to-read, we can't use _rmb or _wmb.)
  - Test the mixback needed flag again.
  - If it's set, trylock().  If that succeeds, go do the mixback as above.
  - If it fails, return.
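
A rough sketch in C of what I mean.  (All the names here -- struct
entropy_pool, mixback_needed, mixback_buf, __mix_pool_bytes(),
MIXBACK_BYTES -- are placeholders for illustration, not the current
driver's identifiers.)

#include <linux/atomic.h>
#include <linux/spinlock.h>
#include <linux/string.h>	/* memzero_explicit() */
#include <linux/types.h>

#define MIXBACK_BYTES	32	/* illustrative size */

struct entropy_pool {
	spinlock_t	lock;
	atomic_t	mixback_needed;			/* "mixback needed" flag */
	u8		mixback_buf[MIXBACK_BYTES];	/* optional mixback buffer */
	/* ... pool contents ... */
};

/* Assumed to exist: the usual "hash this into the pool" primitive. */
static void __mix_pool_bytes(struct entropy_pool *pool, const void *in, int nbytes);

/* Called with pool->lock held: absorb any mixback a reader has posted. */
static void pool_absorb_pending(struct entropy_pool *pool)
{
	u8 buf[MIXBACK_BYTES];

	while (atomic_read(&pool->mixback_needed)) {
		/* Copy out and wipe the shared buffer... */
		memcpy(buf, pool->mixback_buf, sizeof(buf));
		memzero_explicit(pool->mixback_buf, sizeof(pool->mixback_buf));
		smp_wmb();
		/* ...clear the flag... */
		atomic_set(&pool->mixback_needed, 0);
		smp_wmb();
		/* ...and do the mixback.  The loop re-checks the flag
		 * before we drop the lock (the optional optimization). */
		__mix_pool_bytes(pool, buf, sizeof(buf));
	}
	memzero_explicit(buf, sizeof(buf));
}

/* The "dropping the lock" sequence from the list above. */
static void pool_drop_lock(struct entropy_pool *pool)
{
	for (;;) {
		pool_absorb_pending(pool);
		spin_unlock(&pool->lock);

		/* Write-to-read ordering between the unlock and the
		 * re-read of the flag, so _rmb/_wmb won't do. */
		smp_mb();

		if (!atomic_read(&pool->mixback_needed))
			return;
		if (!spin_trylock(&pool->lock))
			return;	/* whoever holds the lock will see the flag */
	}
}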

Each reader uses already-discussed nonce techniques to safely do concurrent
reads from the same pool.  Then, at the end (again, C sketch after the list):
- (Optional) trylock() and, if it succeeds,
  do mixback directly.
- Copy our mixback data to the buffer (race conditions be damned)
- smp_wmb()
- set the mixback needed flag
- smp_mb()	(Since it's write-to-read; or use smp_store_mb())
- trylock()
  - If that fails, return
  - If that succeeds (and the flag is still set) do the mixback
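
The reader's end, with the same made-up names (pool_drop_lock() is the
exit path sketched above):

/* "mixback" is whatever output this reader wants folded back in. */
static void pool_post_mixback(struct entropy_pool *pool,
			      const u8 mixback[MIXBACK_BYTES])
{
	/* Optional fast path: if the lock happens to be free, do the
	 * mixback directly. */
	if (spin_trylock(&pool->lock)) {
		__mix_pool_bytes(pool, mixback, MIXBACK_BYTES);
		pool_drop_lock(pool);
		return;
	}

	/* Post our data for whoever holds the lock.  Concurrent readers
	 * may overwrite each other here; only one mixback is needed. */
	memcpy(pool->mixback_buf, mixback, MIXBACK_BYTES);
	smp_wmb();
	atomic_set(&pool->mixback_needed, 1);

	/* Write-to-read: the flag store must be visible before we test
	 * the lock (smp_store_mb() could fuse the two steps above). */
	smp_mb();

	if (spin_trylock(&pool->lock))
		pool_drop_lock(pool);	/* flag still set, so this mixes back */
	/* If the trylock fails, the current holder is guaranteed to see
	 * the flag before it finishes dropping the lock. */
}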

This is based on the fact that if there are multiple concurrent reads,
we only need one mixback (thus, only one buffer/flag), but the "last one
out the door" has to do it.

Also, we don't care if we mis-count and end up doing it twice.

Each reader sets the flag and *then* does a trylock.  If the trylock fails,
it's guaranteed that the lock-holder will see the flag and take care of
the mixback for us.

The writers drop the lock and *then* test the flag.


The result is that readers *never* do a blocking acquire of the pool
lock.  Which should solve all the contention problems.  Andi's stupid
application will still be stupid, but won't fall off a locking cliff.


(We could also use w[5] as the "mixback needed" flag and just
force it to 1 on the off chance it's zero with negligible loss
of entropy and zero security loss.)


The one thing I worry about is livelock: a single thread could get stuck
in the mixback code indefinitely.  That can be mitigated by dropping the
lock and waiting before re-testing and re-acquiring if we loop too often.
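
E.g. a bounded variant of pool_drop_lock() from the first sketch (the
pass limit and delay are arbitrary; needs <linux/delay.h> for udelay()):

static void pool_drop_lock_bounded(struct entropy_pool *pool)
{
	int passes = 0;

	for (;;) {
		pool_absorb_pending(pool);
		spin_unlock(&pool->lock);

		/* If we keep finding more work, back off a bit before
		 * re-testing and re-acquiring. */
		if (++passes >= 4) {
			passes = 0;
			udelay(1);	/* or cond_resched() in process context */
		}

		smp_mb();
		if (!atomic_read(&pool->mixback_needed))
			return;
		if (!spin_trylock(&pool->lock))
			return;
	}
}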