Message-ID: <CAAmzW4M8drwRPy_qWxnkG3-GKGPq+m24me+pGOWNtPzA15iVfg@mail.gmail.com>
Date: Tue, 16 Oct 2012 10:28:39 +0900
From: JoonSoo Kim <js1304@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: David Rientjes <rientjes@...gle.com>,
Andi Kleen <andi@...stfloor.org>,
Ezequiel Garcia <elezegarcia@...il.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-mm@...ck.org, Tim Bird <tim.bird@...sony.com>,
celinux-dev@...ts.celinuxforum.org
Subject: Re: [Q] Default SLAB allocator
Hello, Eric.
2012/10/14 Eric Dumazet <eric.dumazet@...il.com>:
> SLUB was really bad in the common workload you describe (allocations
> done by one cpu, freeing done by other cpus), because all kfree() hit
> the slow path and cpus contend in __slab_free() in the loop guarded by
> cmpxchg_double_slab(). SLAB has a cache for this, while SLUB directly
> hit the main "struct page" to add the freed object to freelist.
Could you elaborate more on how 'netperf RR' makes the kernel do "allocations
done by one cpu, freeing done by other cpus", please?
I don't have enough background in the network subsystem, so I'm just curious.
> I played some months ago adding a percpu associative cache to SLUB, then
> just moved on other strategy.
>
> (Idea for this per cpu cache was to build a temporary free list of
> objects to batch accesses to struct page)
Is this implemented and submitted?
If it is, could you tell me the link for the patches?
Thanks!