Message-ID: <20140614072312.27656.qmail@ns.horizon.com>
Date:	14 Jun 2014 03:23:12 -0400
From:	"George Spelvin" <linux@...izon.com>
To:	linux@...izon.com, tytso@....edu
Cc:	hpa@...ux.intel.com, linux-kernel@...r.kernel.org,
	mingo@...nel.org, price@....edu
Subject: Re: [RFC] random: is the IRQF_TIMER test working as intended?

> In general, yes.  It's intended this way.  I'm trying to be extremely
> conservative with my entropy measurements, and part of it is because
> there is generally a huge amount of interrupts available, at least on
> desktop systems, and I'd much rather be very conservative than not.

To be absolutely clear: being more aggressive is not the point.

Using 1/8 of a bit per sample was simply for convenience, to keep the
patch smaller.  It can be easily adapted to be strictly more conservative.

Consider the changes that would make it more conservative (a rough code
sketch follows the list):
- Allow credit of less than 1 bit
- If we get interrupts very rarely, credit *less* entropy.
- Only allow credit for one side of a timer interrupt, not both.  (Here
  t1 and t2 are consecutive timer ticks, and x is a non-timer interrupt
  between them.)  If t2-t1 is too predictable, then x-t1 already carries
  all of the entropy that's available; t2-x = (t2-t1) - (x-t1) provides
  no new information.
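
To make the first and third items concrete, here's a minimal sketch in
plain C (invented names throughout; this is not the real
drivers/char/random.c code).  Credit is tracked in 1/8-bit units, and an
interval earns credit only when it ends at a non-timer interrupt, so the
t2-x side of a timer tick gets nothing:

#include <stdbool.h>
#include <stdint.h>

/* Illustration only -- invented structure, not the kernel's fast_pool. */
struct pool_sketch {
	uint32_t credit_eighths;	/* entropy credit, 1/8-bit units */
};

static void account_interrupt(struct pool_sketch *p, bool is_timer)
{
	/*
	 * The sampled interval ends at this interrupt.  If this is a
	 * timer tick t2, then t2-x = (t2-t1) - (x-t1) adds nothing
	 * beyond the already-credited x-t1, so only intervals ending
	 * at a non-timer interrupt earn the 1/8 bit.
	 */
	if (!is_timer)
		p->credit_eighths += 1;
}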

> What I'd probably do instead is to count the number of timer
> interrupts, and if it's more than 50% timer interrupts, give 0 bits of
> credit, else give 1 bit of credit each time we push from the fast pool
> to the input pool.  Yes, that's being super conservative.

If we're down in the 0/1 range, I really like the idea of allowing
fractional credit.  How about crediting 1/64 of a bit per non-timer
interrupt?  Equivalent result, but more linear.
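
In code, the two rules might look like this (a sketch; only the 50%
threshold and the 1/64-bit rate come from this thread, the names and
interface are made up):

/* All-or-nothing per spill, based on the timer fraction. */
static unsigned int credit_bits_threshold(unsigned int total,
					  unsigned int timer)
{
	return (2 * timer > total) ? 0 : 1;
}

/* 1/64 bit per non-timer interrupt, kept in 1/64-bit units so the
 * arithmetic stays integral. */
static unsigned int credit_64ths_linear(unsigned int total,
					unsigned int timer)
{
	return total - timer;
}

A 64-sample spill with no timer ticks earns 1 bit either way; a mixed
spill degrades smoothly under the linear rule instead of snapping
between 0 and 1 at the 50% mark, and the same counter accumulates
naturally when the trylock lets more than 64 samples pile up.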

(Sorry if my digression about the sanity of 1/8 bit per sample confused
things.  I was just trying to say "it's not totally crazy", not "you should
do this".)

>> 1) Since the number of samples between spills to the input pool is
>>    variable (with > 64 samples now possible due to the trylock), wouldn't
>>    it make more sense to accumulate an entropy estimate?

> In general, we probably will only retry a few times, so it's not
> worth it.

I'm not actually worried about the "too many samples" case, but the
"too few".  The worrisome case is when someone on an energy-saving quest
succeeds in tuning the kernel (or just this particular processor) so it
gets less than 1 interrupt per second.  Because the fast pool also spills
on a one-second timeout, every interrupt then triggers a spill and
credits 1 bit of entropy.  Is *that* super-conservative?
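
For reference, the spill test in the random.c of this era is roughly the
following (simplified paraphrase, locking and mixing elided):

#include <stdbool.h>

/* Spill when 64 samples have accumulated OR one second (hz ticks) has
 * passed since the last spill -- a paraphrase of the check in
 * add_interrupt_randomness(). */
static bool should_spill(unsigned int count, unsigned long now,
			 unsigned long last_spill, unsigned long hz)
{
	return count >= 64 || now - last_spill > hz;
}

At under one interrupt per second the time test fires on every sample,
so each interrupt spills and claims the full per-spill credit.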

I agree that longer delays have more jitter, so such a sample is worth a
little more, but shouldn't we try to get a curve the same shape as
reality and *then* apply the safety factors?  Surely the presence or
absence of intermediate samples makes *some* difference?
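
One way to read "a curve the same shape as reality": let the raw
estimate grow with the measured gap, since longer gaps carry more
jitter, and apply the safety factor afterwards as a separate, explicit
step.  Purely illustrative; nothing below was proposed verbatim in this
thread:

#include <stdint.h>

#define SAFETY_DIVISOR	8	/* arbitrary margin, tuned separately */

static unsigned int credit_64ths(uint64_t gap_cycles)
{
	unsigned int shape = 0;		/* floor(log2(gap)) */
	unsigned int credit;

	/* The shape: jitter grows roughly with the log of the gap. */
	while (gap_cycles >>= 1)
		shape++;

	/* The safety factor, applied after the shape is fixed. */
	credit = shape * 64 / SAFETY_DIVISOR;	/* 1/64-bit units */
	return credit > 64 ? 64 : credit;	/* never claim over 1 bit */
}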
