Date:	Mon, 30 Nov 2009 12:44:16 -0600
From:	Matt Mackall <mpm@...enic.com>
To:	Ian Molton <ian.molton@...labora.co.uk>
Cc:	Rusty Russell <rusty@...tcorp.com.au>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] hw_random: core updates to allow more efficient
 drivers

On Mon, 2009-11-30 at 10:28 +0000, Ian Molton wrote:
> Rusty Russell wrote:
> 
> > And might as well just #define RNG_BUFFSIZE SMP_CACHE_BYTES (or use
> > SMP_CACHE_BYTES here and sizeof() elsewhere).
> 
> This can lead to a rather small (4-byte) buffer on some systems;
> however, I don't know whether a tiny buffer or a big one would be
> better for performance on those machines in practice. I guess if it's
> a problem, someone can patch the code to allocate a minimum of (say)
> 16 bytes in future...

Hmmm, I think this was bad advice from Rusty.

The goal is to size and align the buffer so that we know it will always
work. Thus 64 bytes (always big enough, but not so big that anyone will
complain) and cache-aligned (which makes stupid things like VIA PadLock
happy -on VIAs-).
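
Concretely, it would look something like this (a sketch only, not the
actual patch; RNG_BUFFSIZE is the name from Rusty's mail):

	#define RNG_BUFFSIZE 64	/* covers any driver's largest word */
	static u8 rng_buffer[RNG_BUFFSIZE] __cacheline_aligned;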

Rusty's suggestion could easily get us into trouble if some driver
wants to hand us a mere 64 bits on an architecture with 4-byte cache
alignment but is otherwise perfectly happy with 64-bit stores.
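
To make that failure mode concrete (hypothetical driver code;
hw_read_64() is made up for illustration):

	/* RNG_BUFFSIZE == SMP_CACHE_BYTES, on an arch where
	 * SMP_CACHE_BYTES is 4: */
	static u8 rng_buffer[SMP_CACHE_BYTES];	/* only 4 bytes */

	*(u64 *)rng_buffer = hw_read_64();	/* 8-byte store overruns the buffer */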

-- 
http://selenic.com : development and support for Mercurial and Linux

