Message-ID: <462C8560.7060703@yahoo.com.au>
Date: Mon, 23 Apr 2007 20:07:28 +1000
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Nick Piggin <nickpiggin@...oo.com.au>
CC: Rik van Riel <riel@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>, shak <dshaks@...hat.com>,
jakub@...hat.com, drepper@...hat.com
Subject: Re: [PATCH] lazy freeing of memory through MADV_FREE
Nick Piggin wrote:
> Rik van Riel wrote:
>
>> I've added a 5th column, with just your mmap_sem patch and
>> without my madv_free patch. It is run with the glibc patch,
>> which should make it fall back to MADV_DONTNEED after the
>> first MADV_FREE call fails.
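(The glibc patch itself isn't quoted in this thread, but the fallback Rik
describes would look roughly like the sketch below -- purely illustrative,
with the MADV_FREE value and the allocator hook name made up.)

/*
 * Illustrative sketch only (not the actual glibc patch): try MADV_FREE
 * once, and if the running kernel rejects it, fall back to MADV_DONTNEED
 * for all later calls.
 */
#include <stddef.h>
#include <sys/mman.h>

#ifndef MADV_FREE
#define MADV_FREE 8	/* value in later kernels; the patched 2.6.21 may differ */
#endif

static int madv_free_works = 1;		/* assume it works until the first failure */

static void trim_range(void *addr, size_t len)	/* hypothetical allocator hook */
{
	if (madv_free_works && madvise(addr, len, MADV_FREE) == 0)
		return;
	madv_free_works = 0;			/* old kernel: remember and stop trying */
	madvise(addr, len, MADV_DONTNEED);
}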
>
>
> Thanks! (I edited slightly so it doesn't wrap)
>
>
>> threads  vanilla  new glibc  madv_free  mmap_sem  both
>>
>>     1       610       609        596       534     545
>>     2      1032      1136       1196      1180    1200
>>     4      1070      1128       2014      2027    2024
>>     8      1000      1088       1665      2089    2087
>>    16       779      1073       1310      2012    1999
>>
>>
>> Not doing the mprotect calls is the big one I guess, especially
>> the fact that we don't need to take the mmap_sem for writing.
>
>
> Yes.
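To spell out why that matters, here is a rough before/after sketch -- not
the actual glibc code, and the exact prot flags are my guess: mprotect()
changes vma attributes and has to take mmap_sem for writing, while
madvise(MADV_DONTNEED) with the mmap_sem patch only takes it for reading,
so other threads can keep faulting pages in parallel.

#include <stddef.h>
#include <sys/mman.h>

static void decommit_old(void *addr, size_t len)
{
	/* old scheme: protection change, write-locks mmap_sem */
	mprotect(addr, len, PROT_NONE);
}

static void decommit_new(void *addr, size_t len)
{
	/* new scheme: just drop the pages; read-locks mmap_sem with the patch */
	madvise(addr, len, MADV_DONTNEED);
}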
>
>
>> With both our patches, single- and two-thread performance with
>> MySQL sysbench is somewhat better than with just your patch,
>> 4- and 8-thread performance is basically the same, and your
>> patch alone gives a slight benefit at 16 threads.
>>
>> I guess I should benchmark up to 64 or 128 threads tomorrow,
>> to see if this is just luck or if the cache benefit of taking
>> the page faults and reusing hot pages outweighs not taking
>> any page faults at all.
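For anyone following along, the behavioural difference being weighed here
can be seen with a tiny test program (a sketch only; it needs a kernel with
the MADV_FREE patch applied, and the MADV_FREE value below is just a guess):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_FREE
#define MADV_FREE 8	/* guess; use whatever the patched kernel defines */
#endif

int main(void)
{
	size_t len = 4096;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	memset(p, 'x', len);
	madvise(p, len, MADV_DONTNEED);
	printf("after MADV_DONTNEED: %d\n", p[0]);	/* 0: fresh zero page, took a fault */

	memset(p, 'x', len);
	if (madvise(p, len, MADV_FREE) == 0)
		printf("after MADV_FREE: %d\n", p[0]);	/* likely 'x': page kept, no fault */

	munmap(p, len);
	return 0;
}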
>>
>> I should run some benchmarks on other systems, too. Some of
>> these results could be an artifact of my quad core CPU. The
>> results could be very different on other systems...
>
>
> I'm getting the 16 core box out of retirement as we speak :)
>
OK, 10 runs at 1 client, 2.6.21-rc6, MySQL version 5.33, and Jakub's
new glibc give a 99.9% confidence interval of:
vanilla: 467.2 +/- 7.9 (tps)
mmap_sem: 470.5 +/- 9.3 (tps)
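(For reference, the interval is just mean +/- t * s/sqrt(n), with the
Student t value for 9 degrees of freedom; a throwaway sketch with made-up
sample numbers:)

#include <math.h>
#include <stdio.h>

int main(void)
{
	/* made-up tps samples, just to show the arithmetic */
	double tps[10] = { 465, 470, 468, 463, 471, 466, 469, 464, 472, 467 };
	int n = 10, i;
	double sum = 0.0, sq = 0.0, mean, se;
	double t999 = 4.781;	/* Student t, 9 d.f., two-sided 99.9% */

	for (i = 0; i < n; i++)
		sum += tps[i];
	mean = sum / n;

	for (i = 0; i < n; i++)
		sq += (tps[i] - mean) * (tps[i] - mean);
	se = sqrt(sq / (n - 1)) / sqrt(n);

	printf("%.1f +/- %.1f tps (99.9%% confidence)\n", mean, t999 * se);
	return 0;
}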
However, those means seem to jump around a bit from boot to boot,
so there could be some memory placement luck for cache and/or
NUMA goodness involved.
So, from looking at the numbers and the patch, I think it is safe
to say that the mmap_sem patch doesn't hurt single-threaded
performance. And that's the most important thing for that patch.
I'll post some scalability results tomorrow. From my first round
of tests, after the new glibc and the mmap_sem patch, it doesn't
seem like rwsem improvements, private futexes, or avoiding
zero_page make any significant difference.
I haven't tested your MADV_FREE patch yet.
--
SUSE Labs, Novell Inc.