Message-ID: <200904090024.42951.rgetz@blackfin.uclinux.org>
Date:	Thu, 9 Apr 2009 00:24:42 -0400
From:	Robin Getz <rgetz@...ckfin.uclinux.org>
To:	"Chris Friesen" <cfriesen@...tel.com>
CC:	"Gilles Espinasse" <g.esp@...e.fr>,
	"Chris Peterson" <cpeterso@...terso.com>,
	"Matt Mackall" <mpm@...enic.com>, netdev@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: IRQF_SAMPLE_RANDOM question...

On Wed 8 Apr 2009 19:16, Chris Friesen pondered:
> Gilles Espinasse wrote:
> 
> > The README says:
> > "This daemon attempts to collect real randomness from fluctuations of
> > high-frequency clocks on a PC's mainboard. The basic assumption is that
> > mainboard and CPU are clocked by two separate physical clocks."
> 
> > To what extent does this basic assumption hold, on x86 and on other architectures?
> 
> Isn't the cpu frequency normally a phase-locked multiple of the 
> mainboard bus frequency?

Yes - typically the CPU clock is a phase-locked multiple of the bus clock, so in effect they are the same clock.

However - I have tested clrngd on a Blackfin, and found it imposed an 
excessively high load - but it did give OK results. 77% of the time (659/848 
runs) it produced output that passed its built-in FIPS test. It did die a 
few times (clrngd aborts if the FIPS test fails 5 times in a row).
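For reference, here is a minimal userspace sketch of the general technique (my
reading of how such a daemon works, not clrngd's actual code): busy-sample a
high-resolution clock, keep the low bit of each successive delta as jitter, and
run a FIPS 140-2 style monobit sanity check over each 20000-bit block before
trusting it. All names below are illustrative.

/*
 * Sketch only: clock-jitter gathering with a monobit sanity check.
 * Not clrngd's code - just the shape of the technique.
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(void)
{
	unsigned char block[2500];	/* 20000 bits, as in FIPS 140-2 */
	uint64_t prev = now_ns();
	unsigned ones = 0;

	for (size_t i = 0; i < sizeof(block); i++) {
		unsigned char b = 0;
		for (int bit = 0; bit < 8; bit++) {
			uint64_t t = now_ns();
			b = (b << 1) | ((t - prev) & 1);  /* keep the jittery LSB */
			prev = t;
		}
		block[i] = b;
		ones += __builtin_popcount(b);
	}

	/* FIPS 140-2 monobit test: 9725 < ones < 10275 over 20000 bits */
	printf("ones = %u -> %s\n", ones,
	       (ones > 9725 && ones < 10275) ? "pass" : "fail");
	return 0;
}

The busy loop is also where the high load comes from - the sampling only works
if the process keeps spinning on the clock.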

I was going to write my own (based on a similar architecture) - but using the 
RTC clock and the main clock, since those actually are driven by different 
physical crystals - and the accuracy of low-cost 32kHz crystals is poor 
(the drift is typically measurable with a high enough core clock).

But I think the delays from cache misses/flushes will dominate things anyway - 
which is why clrngd works today on systems that use the same clock source. 
(And since my version would be RTC-interrupt driven, rather than a while(1){} 
loop like clrngd, the load would be much lower.)
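A rough userspace analogue of that RTC-interrupt-driven idea (hypothetical, 
not an existing driver): block on RTC update interrupts via /dev/rtc and read 
the CPU cycle counter on each tick. Because the 32kHz RTC crystal and the core 
clock are separate parts, the low bits of the cycle count at each tick carry 
jitter, and the process sleeps between ticks instead of spinning. __rdtsc() is 
x86-specific here; on Blackfin you would read the CYCLES register instead.

/*
 * Sketch only: sample the core cycle counter on each RTC update interrupt.
 * Low load, because we sleep in read() between ticks.
 */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/rtc.h>
#include <x86intrin.h>		/* __rdtsc(); substitute the arch's cycle counter */

int main(void)
{
	int fd = open("/dev/rtc", O_RDONLY);
	if (fd < 0) { perror("open /dev/rtc"); return 1; }

	if (ioctl(fd, RTC_UIE_ON, 0) < 0) { perror("RTC_UIE_ON"); return 1; }

	for (int i = 0; i < 64; i++) {
		unsigned long data;
		if (read(fd, &data, sizeof(data)) < 0)	/* sleep until next RTC tick */
			break;
		uint64_t cycles = __rdtsc();
		/* keep only the jittery low bits of the cycle count */
		printf("%d\n", (int)(cycles & 0xff));
	}

	ioctl(fd, RTC_UIE_OFF, 0);
	close(fd);
	return 0;
}

The raw samples would still need whitening and the same sort of FIPS checks 
before being fed anywhere.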

-Robin
