Date:	Thu, 5 Dec 2013 01:02:15 +0900
From:	Joonsoo Kim <js1304@...il.com>
To:	Christoph Lameter <cl@...ux.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Johannes Weiner <hannes@...xchg.org>,
	Michal Hocko <mhocko@...e.cz>, azurIt <azurit@...ox.sk>,
	Linux Memory Management List <linux-mm@...ck.org>,
	cgroups@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
	Christian Casteyde <casteyde.christian@...e.fr>,
	Pekka Enberg <penberg@...nel.org>
Subject: Re: [patch 2/2] fs: buffer: move allocation failure loop into the allocator

2013/12/5 Christoph Lameter <cl@...ux.com>:
> On Tue, 3 Dec 2013, Andrew Morton wrote:
>
>> >     page = alloc_slab_page(alloc_gfp, node, oo);
>> >     if (unlikely(!page)) {
>> >             oo = s->min;
>>
>> What is the value of s->min?  Please tell me it's zero.
>
> It usually is.
>
>> > @@ -1349,7 +1350,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>> >             && !(s->flags & (SLAB_NOTRACK | DEBUG_DEFAULT_FLAGS))) {
>> >             int pages = 1 << oo_order(oo);
>> >
>> > -           kmemcheck_alloc_shadow(page, oo_order(oo), flags, node);
>> > +           kmemcheck_alloc_shadow(page, oo_order(oo), alloc_gfp, node);
>>
>> That seems reasonable, assuming kmemcheck can handle the allocation
>> failure.
>>
>>
>> Still I dislike this practice of using unnecessarily large allocations.
>> What does it gain us?  Slightly improved object packing density.
>> Anything else?
>
> The fastpath for slub works only within the bounds of a single slab page.
> Therefore a larger frame increases the number of allocations possible from
> the fastpath without having to use the slowpath, and also reduces the
> management overhead in the partial lists.

Hello Christoph.

Now that we have the cpu partial slabs facility, I think the slowpath isn't
really slow anymore. And, because of cpu partial slabs, it doesn't increase
the management overhead in the node partial lists much either.
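
(To put rough numbers on the trade-off, here is an illustrative userspace
snippet -- not kernel code -- assuming a hypothetical 256-byte object on
4 KiB pages.)

/* Illustrative only -- not mm/slub.c.  Objects per slab for a hypothetical
 * 256-byte object on 4 KiB pages at different page orders. */
#include <stdio.h>

int main(void)
{
	unsigned int page_size = 4096;
	unsigned int object_size = 256;	/* hypothetical object size */
	unsigned int order;

	for (order = 0; order <= 3; order++) {
		unsigned int slab_bytes = page_size << order;

		printf("order %u: %5u bytes, %3u objects per slab\n",
		       order, slab_bytes, slab_bytes / object_size);
	}
	return 0;
}

So an order-3 slab holds 8x the objects of an order-0 slab, which is the
fastpath benefit you describe; my question is whether that still matters
much with cpu partial slabs.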

And a larger frame may cause more slab_lock or cmpxchg contention if there
are parallel frees.

But I don't know which one is better. Is a larger frame still better? :)
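
(For reference, the pattern the patch touches is basically "try the higher
order opportunistically, fall back to s->min if that fails". A userspace
sketch of the idea -- certainly not the real allocate_slab() -- could look
like this; in the kernel the optimistic attempt uses the relaxed alloc_gfp
flags so a failure is cheap and quiet.)

/*
 * Userspace analogy only -- not mm/slub.c.  Try a larger "slab" first and
 * fall back to the minimum size if the larger allocation fails.
 */
#include <stdio.h>
#include <sys/mman.h>

static void *alloc_slab_like(size_t preferred, size_t minimum, size_t *got)
{
	/* Optimistic attempt at the preferred (higher-order) size. */
	void *p = mmap(NULL, preferred, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p != MAP_FAILED) {
		*got = preferred;
		return p;
	}

	/* Fall back to the minimum size, keeping the original semantics. */
	p = mmap(NULL, minimum, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return NULL;
	*got = minimum;
	return p;
}

int main(void)
{
	size_t got = 0;
	void *slab = alloc_slab_like(8 * 4096, 4096, &got);

	if (!slab)
		return 1;
	printf("got a %zu byte slab\n", got);
	munmap(slab, got);
	return 0;
}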

Thanks.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
