Message-ID: <492E3DEF.8030602@cosmosbay.com>
Date: Thu, 27 Nov 2008 07:27:59 +0100
From: Eric Dumazet <dada1@...mosbay.com>
To: Christoph Lameter <cl@...ux-foundation.org>
CC: Ingo Molnar <mingo@...e.hu>, David Miller <davem@...emloft.net>,
"Rafael J. Wysocki" <rjw@...k.pl>, linux-kernel@...r.kernel.org,
kernel-testers@...r.kernel.org, Mike Galbraith <efault@....de>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Linux Netdev List <netdev@...r.kernel.org>,
Christoph Hellwig <hch@...radead.org>
Subject: Re: [PATCH 0/6] fs: Scalability of sockets/pipes allocation/deallocation on SMP
Christoph Lameter wrote:
> On Thu, 27 Nov 2008, Eric Dumazet wrote:
>
>> The last point is about SLUB being hit hard, unless we
>> use slub_min_order=3 at boot, or we use Christoph Lameter
>> patch (struct file RCU optimizations)
>> http://thread.gmane.org/gmane.linux.kernel/418615
>>
>> If we boot machine with slub_min_order=3, SLUB overhead disappears.
>
>
> I'd rather not be that drastic. Did you try increasing slub_min_objects
> instead? Try 40-100. If we find the right number then we should update
> the tuning to make sure that it picks the right slab page sizes.
>
>
4096 / 192 = 21 objects fit in an order-0 page for the 192-byte filp cache, so:

With slub_min_objects=22:
# cat /sys/kernel/slab/filp/order
1
# time ./socket8
real 0m1.725s
user 0m0.685s
sys 0m12.955s

With slub_min_objects=45:
# cat /sys/kernel/slab/filp/order
2
# time ./socket8
real 0m1.652s
user 0m0.694s
sys 0m12.367s

With slub_min_objects=80:
# cat /sys/kernel/slab/filp/order
3
# time ./socket8
real 0m1.642s
user 0m0.719s
sys 0m12.315s
I would say slub_min_objects=45 is the optimal value on 32-bit arches to
get acceptable performance on this workload (order=2 for the filp kmem_cache).
Note: SLAB here is disastrous, but you already knew that :)
real 0m8.128s
user 0m0.748s
sys 1m3.467s
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/