Message-ID: <2f11576a0808130714k2cd031c4nd6eea3506831cac9@mail.gmail.com>
Date: Wed, 13 Aug 2008 23:14:30 +0900
From: "KOSAKI Motohiro" <kosaki.motohiro@...fujitsu.com>
To: "Christoph Lameter" <cl@...ux-foundation.org>
Cc: "Matthew Wilcox" <matthew@....cx>,
"Pekka Enberg" <penberg@...helsinki.fi>, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
"Mel Gorman" <mel@...net.ie>, andi@...stfloor.org,
"Rik van Riel" <riel@...hat.com>
Subject: Re: No, really, stop trying to delete slab until you've finished making slub perform as well
>> :t-0000128 28739 128 1.3G 20984/20984/8 512 0 99 0 *
>
> Argh. Most slabs contain a single object. Probably due to the conflict resolution.
Agreed, the issue exists in the lock contention code.
> The obvious fix is to avoid allocating another slab on conflict but how will
> this impact performance?
>
>
> Index: linux-2.6/mm/slub.c
> ===================================================================
> --- linux-2.6.orig/mm/slub.c 2008-08-13 08:06:00.000000000 -0500
> +++ linux-2.6/mm/slub.c 2008-08-13 08:07:59.000000000 -0500
> @@ -1253,13 +1253,11 @@
> static inline int lock_and_freeze_slab(struct kmem_cache_node *n,
> struct page *page)
> {
> - if (slab_trylock(page)) {
> - list_del(&page->lru);
> - n->nr_partial--;
> - __SetPageSlubFrozen(page);
> - return 1;
> - }
> - return 0;
> + slab_lock(page);
> + list_del(&page->lru);
> + n->nr_partial--;
> + __SetPageSlubFrozen(page);
> + return 1;
> }
I haven't measured it yet, but I don't like this patch;
it may regress other typical benchmarks.
So I think a better way is (a rough sketch follows below):
1. slab_trylock(); if it succeeds, goto 10.
2. check the fragmentation ratio; if it is low, goto 10.
3. slab_lock()
10. return from the function
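
In C it could look roughly like this, against the lock_and_freeze_slab()
quoted above. node_fragmentation_high() is only a placeholder for whatever
fragmentation test we settle on, not an existing SLUB helper:

static inline int lock_and_freeze_slab(struct kmem_cache_node *n,
						struct page *page)
{
	if (!slab_trylock(page)) {
		/*
		 * Trylock failed: another CPU holds the slab lock.
		 * Only pay for a blocking lock when the node looks
		 * badly fragmented; otherwise return 0 and allocate
		 * a new slab as we do today.
		 */
		if (!node_fragmentation_high(n))	/* step 2 */
			return 0;
		slab_lock(page);			/* step 3 */
	}
	/* step 10: freeze the slab and hand it to the caller */
	list_del(&page->lru);
	n->nr_partial--;
	__SetPageSlubFrozen(page);
	return 1;
}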
I don't think this way causes a performance regression,
because high fragmentation forces defragmentation and compaction later on.
So preventing fragmentation often improves performance overall.
Thoughts?