[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20150923215447.GJ1747@two.firstfloor.org>
Date: Wed, 23 Sep 2015 23:54:47 +0200
From: Andi Kleen <andi@...stfloor.org>
To: Rasmus Villemoes <linux@...musvillemoes.dk>
Cc: Andi Kleen <andi@...stfloor.org>, tytso@....edu,
linux-kernel@...r.kernel.org, kirill.shutemov@...ux.intel.com,
herbert@...dor.apana.org.au, Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH 1/3] Make /dev/urandom scalable
> > +{
> > + struct entropy_store *pool = &nonblocking_pool;
> > +
> > + /*
> > + * Non node 0 pools may take longer to initialize. Keep using
> > + * the boot nonblocking pool while this happens.
> > + */
> > + if (nonblocking_node_pool)
> > + pool = nonblocking_node_pool[numa_node_id()];
> > + if (!pool->initialized)
> > + pool = &nonblocking_pool;
> > + return pool;
> > +}
>
> I assume this can't get called concurrently with rand_initialize
> (otherwise pool may be NULL even if nonblocking_node_pool is non-NULL).
Yes. I can move the assignment to the global last and add a memory
barrier.
> > + char name[40];
> > +
> > + nonblocking_node_pool = kzalloc(num_nodes * sizeof(void *),
> > + GFP_KERNEL|__GFP_NOFAIL);
> > +
>
> Why kzalloc, when you immediately initialize all elements? New uses of
> __GFP_NOFAIL seem to be frowned upon. How hard would it be to just fall
> back to only using the single statically allocated pool?
It's already doing that.
>
> Does rand_initialize get called before or after other initialization
> code updates node_possible_map to reflect the actual possible number of
> nodes? If before, won't we be wasting a lot of memory (not to mention
> that we then might as well allocate all the nonblocking pools statically
> based on MAX_NUMNODES).
I'll check.
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists