Message-ID: <462C2EDE.4090805@yahoo.com.au>
Date: Mon, 23 Apr 2007 13:58:22 +1000
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Rik van Riel <riel@...hat.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        linux-mm <linux-mm@...ck.org>, shak <dshaks@...hat.com>,
        jakub@...hat.com, drepper@...hat.com
Subject: Re: [PATCH] lazy freeing of memory through MADV_FREE
Rik van Riel wrote:
> I've added a 5th column, with just your mmap_sem patch and
> without my madv_free patch. It is run with the glibc patch,
> which should make it fall back to MADV_DONTNEED after the
> first MADV_FREE call fails.
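
(For anyone skimming: the fallback described above amounts to probing
MADV_FREE once and switching to MADV_DONTNEED for good if the kernel
rejects it.  A rough sketch -- not the actual glibc patch; the
MADV_FREE value and the release_arena() helper are made up for the
illustration:)

#include <sys/mman.h>
#include <errno.h>
#include <stddef.h>

/* Older headers lack MADV_FREE; the value here is only a placeholder. */
#ifndef MADV_FREE
#define MADV_FREE 8
#endif

static int madv_free_works = 1;         /* assume yes until the kernel says no */

static void release_arena(void *addr, size_t len)
{
        if (madv_free_works) {
                if (madvise(addr, len, MADV_FREE) == 0)
                        return;
                if (errno == EINVAL)    /* kernel without MADV_FREE */
                        madv_free_works = 0;
        }
        madvise(addr, len, MADV_DONTNEED);
}
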
Thanks! (I edited slightly so it doesn't wrap)
> threads  vanilla  new glibc  madv_free  mmap_sem  both
>
>       1      610        609        596       534   545
>       2     1032       1136       1196      1180  1200
>       4     1070       1128       2014      2027  2024
>       8     1000       1088       1665      2089  2087
>      16      779       1073       1310      2012  1999
>
>
> Not doing the mprotect calls is the big one I guess, especially
> the fact that we don't need to take the mmap_sem for writing.
Yes.
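
(For anyone skimming: mprotect() always takes mmap_sem for writing,
which stalls page faults in every other thread of the process, while
madvise(MADV_DONTNEED) / madvise(MADV_FREE) with the mmap_sem patch
only take it for reading.  A rough sketch of the difference -- the
function names are made up and this is not the actual glibc code:)

#include <sys/mman.h>
#include <stddef.h>

/*
 * Old-style trim: make the unused tail inaccessible, and make it
 * accessible again before reuse.  Both mprotect() calls need the
 * write side of mmap_sem, so they serialize against concurrent
 * page faults in the whole process.
 */
static void trim_with_mprotect(char *tail, size_t len)
{
        mprotect(tail, len, PROT_NONE);
}

static void grow_with_mprotect(char *tail, size_t len)
{
        mprotect(tail, len, PROT_READ | PROT_WRITE);
}

/*
 * madvise-style trim: the mapping stays read-write, the kernel may
 * drop (MADV_DONTNEED) or lazily reclaim (MADV_FREE) the pages, and
 * the next store simply faults a page back in.  With the mmap_sem
 * patch this only needs the read side of mmap_sem, so other threads
 * keep faulting concurrently.
 */
static void trim_with_madvise(char *tail, size_t len)
{
        madvise(tail, len, MADV_DONTNEED);
}
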
> With both our patches, single and two thread performance with
> MySQL sysbench is somewhat better than with just your patch,
> 4 and 8 thread performance is basically the same, and just
> your patch gives a slight benefit with 16 threads.
>
> I guess I should benchmark up to 64 or 128 threads tomorrow,
> to see if this is just luck, or if taking the page faults and
> reusing cache-hot pages really is faster than not having
> page faults at all.
>
> I should run some benchmarks on other systems, too. Some of
> these results could be an artifact of my quad core CPU. The
> results could be very different on other systems...
I'm getting the 16 core box out of retirement as we speak :)
--
SUSE Labs, Novell Inc.