Message-ID: <461ED00E.1090808@yahoo.com.au>
Date: Fri, 13 Apr 2007 10:34:22 +1000
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Rik van Riel <riel@...hat.com>
CC: Eric Dumazet <dada1@...mosbay.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Ulrich Drepper <drepper@...hat.com>
Subject: Re: [PATCH] make MADV_FREE lazily free memory

Rik van Riel wrote:
> Nick Piggin wrote:
>
>>> The lazy freeing is aimed at avoiding page faults on memory
>>> that is freed and later realloced, which is quite a common
>>> thing in many workloads.
>>
>>
>> I would be interested to see how it performs and what these
>> workloads look like, although we do need to fix the basic glibc and
>> madvise locking problems first.
>
>
> The attached graph shows the results of running the MySQL sysbench
> workload on my quad core system. As you can see, performance
> with #threads == #cpus (4) almost doubles from 1070 transactions
> per second to 2014 transactions/second.
>
> On the high end (16 threads on 4 cpus), performance increases
> from 778 transactions/second on vanilla to 1310 transactions/second.
>
> I have also benchmarked running Ulrich's changed glibc on a vanilla
> kernel, which gives results somewhere in-between, but much closer to
> just the vanilla kernel.
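
For concreteness, here is a minimal userspace sketch of the free-then-realloc
pattern described in the quoted text above. MADV_FREE is the flag from the
patch under discussion; on a kernel without it the sketch falls back to
MADV_DONTNEED, which drops the pages immediately and forces a fault on every
re-touch.

#define _DEFAULT_SOURCE
#include <sys/mman.h>
#include <string.h>

#define ARENA_SIZE (16 * 1024 * 1024)

int main(void)
{
	char *arena = mmap(NULL, ARENA_SIZE, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (arena == MAP_FAILED)
		return 1;

	/* "Allocate": touch the memory, faulting every page in. */
	memset(arena, 0xaa, ARENA_SIZE);

	/* "Free": let the kernel reclaim these pages lazily if it needs to. */
#ifdef MADV_FREE
	madvise(arena, ARENA_SIZE, MADV_FREE);
#else
	madvise(arena, ARENA_SIZE, MADV_DONTNEED);
#endif

	/*
	 * "Realloc": with lazy freeing the pages are normally still mapped,
	 * so this second pass does not fault unless the kernel actually
	 * reclaimed them under memory pressure.
	 */
	memset(arena, 0x55, ARENA_SIZE);

	munmap(arena, ARENA_SIZE);
	return 0;
}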
Looks like the idle time issue is still biting for those guys.

Hmm, maybe MySQL is actually _touching_ the memory inside a more
critical lock, so the faults get tangled up on mmap_sem there. I
wonder if making malloc call memset right afterwards would hide
that ;) So would the madvise change that avoids taking mmap_sem
exclusively.
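
A minimal illustration of that pre-faulting idea (purely a sketch; the
wrapper name is made up and this is not from any posted patch):

#include <stdlib.h>
#include <string.h>

/*
 * Sketch only: a malloc wrapper that touches the new block right away,
 * so the page faults (and any mmap_sem contention they cause) happen
 * here rather than later, inside a hotter application lock.
 */
static void *prefaulting_malloc(size_t size)
{
	void *p = malloc(size);

	if (p)
		memset(p, 0, size);	/* fault every page in up front */
	return p;
}

int main(void)
{
	char *buf = prefaulting_malloc(1 << 20);	/* 1 MB example */
	free(buf);
	return 0;
}
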
Seems like with perfect scaling we should get to the 2400 mark.
It would be nice to avoid degrading under load, though of course
some of that will be down to MySQL's own scaling issues.
--
SUSE Labs, Novell Inc.