Message-ID: <4615A22A.7040909@yahoo.com.au>
Date: Fri, 06 Apr 2007 11:28:10 +1000
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Rik van Riel <riel@...hat.com>
CC: Ulrich Drepper <drepper@...hat.com>,
    Andrew Morton <akpm@...ux-foundation.org>,
    Linux Kernel <linux-kernel@...r.kernel.org>,
    Jakub Jelinek <jakub@...hat.com>,
    Linux Memory Management <linux-mm@...ck.org>
Subject: Re: missing madvise functionality

Rik van Riel wrote:
> Nick Piggin wrote:
>
>> Oh, also: something like this patch would help out MADV_DONTNEED, as it
>> means it can run concurrently with page faults. I think the locking will
>> work (but needs forward porting).
>
>
> Ironically, your patch decreases throughput on my quad core
> test system, with Jakub's test case.
>
> MADV_DONTNEED, my patch, 10000 loops (14k context switches/second)
>
> real    0m34.890s
> user    0m17.256s
> sys     0m29.797s
>
>
> MADV_DONTNEED, my patch & your patch, 10000 loops (50 context
> switches/second)
>
> real    1m8.321s
> user    0m20.840s
> sys     1m55.677s
>
> I suspect it's moving the contention onto the page table lock,
> in zap_pte_range(). I guess that the thread-private memory
> areas must be living right next to each other, in the same
> page table lock regions :)
>
> For more real world workloads, like the MySQL sysbench one,
> I still suspect that your patch would improve things.

I think it definitely would, because the app will be wanting to
do other things with mmap_sem as well (like futexes *grumble*).
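
To make the locking difference concrete, here is a toy userspace
model (mine, not the actual patch), with a pthread rwlock standing
in for mmap_sem; the function names are invented:

/*
 * Toy model only -- not kernel code, and not the actual patch.
 * A pthread rwlock stands in for mmap_sem; the function names
 * are invented for the illustration.
 */
#include <pthread.h>

static pthread_rwlock_t mmap_sem = PTHREAD_RWLOCK_INITIALIZER;

/* Today: MADV_DONTNEED takes mmap_sem for write, so it excludes
 * page faults, futex lookups, everything else. */
static void madvise_dontneed_now(void)
{
	pthread_rwlock_wrlock(&mmap_sem);
	/* ... zap the range ... */
	pthread_rwlock_unlock(&mmap_sem);
}

/* With the patch: take it for read, so faults and futex ops can
 * run concurrently; per-pte consistency comes from the page
 * table lock (ptl) instead. */
static void madvise_dontneed_patched(void)
{
	pthread_rwlock_rdlock(&mmap_sem);
	/* ... zap the range, taking the ptl per pte page ... */
	pthread_rwlock_unlock(&mmap_sem);
}
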
Also, the test case is allocating and freeing 512K chunks, which
I think would be on the high side of typical.
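
(Jakub's test case isn't quoted in this thread, but from the
description -- 32 threads, 10000 loops, 512K chunks -- the
pattern is presumably something like the reconstruction below;
treat it as a sketch, not the real test.)

/*
 * Hypothetical reconstruction of the benchmark pattern, not
 * Jakub's actual test case.  Each thread repeatedly dirties a
 * private 512K mapping, then drops it with MADV_DONTNEED.
 * Build with: cc -O2 -pthread dontneed.c
 */
#include <pthread.h>
#include <string.h>
#include <sys/mman.h>

#define NTHREADS 32
#define CHUNK    (512 * 1024)
#define LOOPS    10000

static void *worker(void *arg)
{
	char *buf;
	int i;

	(void)arg;
	buf = mmap(NULL, CHUNK, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return NULL;
	for (i = 0; i < LOOPS; i++) {
		memset(buf, 1, CHUNK);              /* fault pages in */
		madvise(buf, CHUNK, MADV_DONTNEED); /* drop them again */
	}
	munmap(buf, CHUNK);
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	int i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}

The point of MADV_DONTNEED in this pattern is to give the pages
back without giving up the mapping, which is what a malloc arena
wants to do on free.
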
You have 32 threads for 4 CPUs, so it would actually make sense
to context switch on the mmap_sem write lock rather than spin on
the ptl. But the kernel doesn't know that.
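
Rough numbers, assuming x86-64 with 4K pages and split ptlocks:
one ptl covers a page of ptes, i.e. 512 * 4K = 2M of virtual
address space, while each chunk is only 512K, so up to four
adjacent chunks sit under one lock. Quick illustration (the base
address is made up):

/* Which ptl "region" does each 512K chunk land in?  Assumes
 * x86-64 with 4K pages and one split ptlock per pte page. */
#include <stdio.h>

#define PAGE_SIZE    4096UL
#define PTRS_PER_PTE 512UL                       /* pte entries per pte page */
#define PTL_COVER    (PTRS_PER_PTE * PAGE_SIZE)  /* 2M of VA per ptl */
#define CHUNK        (512UL * 1024)

int main(void)
{
	unsigned long base = 0x2aaaaac00000UL;   /* made-up, 2M-aligned */
	int i;

	for (i = 0; i < 8; i++) {
		unsigned long addr = base + i * CHUNK;
		printf("chunk %d at %#lx -> ptl region %lu\n",
		       i, addr, addr / PTL_COVER);
	}
	return 0;
}

Chunks 0-3 print the same region, 4-7 the next; with 32 threads
packing their chunks side by side, each ptl ends up shared four
ways, and that's where the spinning goes.
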
Testing with a smaller chunk size, or with threads == CPUs, would
I think show a swing toward my patch.

--
SUSE Labs, Novell Inc.