[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20170121062432.kf7xnu56g7vvh4ky@thunk.org>
Date: Sat, 21 Jan 2017 01:24:32 -0500
From: Theodore Ts'o <tytso@....edu>
To: "Jason A. Donenfeld" <Jason@...c4.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
Andy Lutomirski <luto@...capital.net>
Subject: Re: [PATCH 1/2] random: use chacha20 for get_random_int/long
On Sat, Jan 21, 2017 at 01:16:56AM +0100, Jason A. Donenfeld wrote:
> On Sat, Jan 21, 2017 at 1:15 AM, Theodore Ts'o <tytso@....edu> wrote:
> > But there is a shared pointer, which is used both for the dedicated
> > u32 array and the dedicated u64 array. So when you increment the
> > pointer for the get_random_u32, the corresponding entry in the u64
> > array is wasted, no?
>
> No, it is not a shared pointer. It is a different pointer with a
> different batch. The idea is that each function gets its own batch.
> That way there's always perfect alignment. This is why I'm suggesting
> that my approach is faster.
Oh, I see. What was confusing me was that you used the same data
structure for both, and but you were using different instances of the
structure for get_random_u32 and get_random_u64. I thought you were
using the same batched_entropy structure for both. My bad.
I probably would have used different structure definitions for both,
but that's probably because I really am not fund of unions at all if
they can be avoided. I thought you were using a union because you
were deliberately trying to use one instance of the structure as a per
cpu variable for u32 and u64.
So that's not how I would do things, but it's fine.
- Ted
Powered by blists - more mailing lists