Message-ID: <alpine.DEB.2.11.1501271054310.25124@gentwo.org>
Date: Tue, 27 Jan 2015 10:57:36 -0600 (CST)
From: Christoph Lameter <cl@...ux.com>
To: Joonsoo Kim <iamjoonsoo.kim@....com>
cc: akpm@...uxfoundation.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, penberg@...nel.org, iamjoonsoo@....com,
Jesper Dangaard Brouer <brouer@...hat.com>
Subject: Re: [RFC 1/3] Slab infrastructure for array operations
On Tue, 27 Jan 2015, Joonsoo Kim wrote:
> IMHO, exposing these options is not a good idea. They are really
> implementation specific, and these flags will not give consistent
> performance across slab implementations. For example, to get the best
> performance with SLAB, GFP_SLAB_ARRAY_LOCAL would be the best option,
> but for the same purpose with SLUB, GFP_SLAB_ARRAY_NEW would be.
> Performance could also depend on the number and size of the objects.
Why would SLAB show better performance? SLUB can also keep partially
allocated pages per cpu and could likewise deliver objects quite fast if
only a minimal number of objects is desired. SLAB is slightly better
because the number of cachelines touched stays small, due to the
arrangement of the freelist on the slab page and a queueing approach that
does not involve linked lists.
GFP_SLAB_ARRAY_NEW is best for large quantities in either allocator, since
SLAB also has to construct local metadata structures.
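[A minimal userspace sketch of the bulk interface under discussion. The
function names and the partial-success convention are hypothetical mirrors
of the RFC's array API, with malloc() standing in for the slab allocator;
this is not the kernel implementation.]

```c
#include <stdlib.h>

/* Hypothetical mock of a slab array allocation: fill the caller-supplied
 * array p with nr objects of objsize bytes.  Returns the number of
 * objects actually allocated; a short count signals partial success. */
static size_t mock_alloc_array(size_t objsize, size_t nr, void **p)
{
	size_t i;

	for (i = 0; i < nr; i++) {
		p[i] = malloc(objsize);	/* stand-in for kmem_cache_alloc() */
		if (!p[i])
			break;		/* stop on failure, keep what we got */
	}
	return i;
}

/* Release the first nr objects previously filled in by the mock above. */
static void mock_free_array(size_t nr, void **p)
{
	size_t i;

	for (i = 0; i < nr; i++)
		free(p[i]);
}
```

The point of the array form is that one call can amortize per-object
overhead (locking, per-cpu bookkeeping) across the whole batch, which is
where pulling objects from fresh pages wins for large quantities.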
> And overriding gfp flags isn't a good idea. Someday the gfp layer could
> start using these bit values, and its developers would not notice that
> the slab subsystem already uses them with a different meaning.
We can put a BUILD_BUG_ON() in there to ensure that the GFP flags do not
grow too high. The upper portion of the GFP flag space is also used
elsewhere, and this is an allocation option, so it fits in there
naturally.
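[A compile-time check along the lines suggested above, sketched for
userspace with C11 _Static_assert in place of the kernel's
BUILD_BUG_ON(). The bit count and flag values are invented for
illustration, not taken from gfp.h.]

```c
/* Assumed layout: the core GFP bits occupy the low GFP_BITS_USED bits,
 * and the slab-array options are packed above them.  The values below
 * are hypothetical. */
#define GFP_BITS_USED		25u
#define GFP_SLAB_ARRAY_NEW	(1u << 26)	/* allocate from new pages */
#define GFP_SLAB_ARRAY_LOCAL	(1u << 27)	/* prefer per-cpu objects */

/* Equivalent of BUILD_BUG_ON(): break the build if a slab-array flag
 * ever overlaps the core GFP flag space. */
_Static_assert((GFP_SLAB_ARRAY_NEW & ((1u << GFP_BITS_USED) - 1)) == 0,
	       "GFP_SLAB_ARRAY_NEW collides with core GFP bits");
_Static_assert((GFP_SLAB_ARRAY_LOCAL & ((1u << GFP_BITS_USED) - 1)) == 0,
	       "GFP_SLAB_ARRAY_LOCAL collides with core GFP bits");
```

If new core GFP bits are later added and GFP_BITS_USED grows into the
slab-array range, compilation fails at this assertion instead of the two
subsystems silently reinterpreting each other's bits.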