Message-ID: <461492A5.1030905@cosmosbay.com>
Date: Thu, 05 Apr 2007 08:09:41 +0200
From: Eric Dumazet <dada1@...mosbay.com>
To: Nick Piggin <nickpiggin@...oo.com.au>
CC: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Andrew Morton <akpm@...ux-foundation.org>,
Jakub Jelinek <jakub@...hat.com>,
Ulrich Drepper <drepper@...hat.com>,
Andi Kleen <andi@...stfloor.org>,
Rik van Riel <riel@...hat.com>,
Linux Kernel <linux-kernel@...r.kernel.org>,
linux-mm@...ck.org, Hugh Dickins <hugh@...itas.com>
Subject: Re: missing madvise functionality
Nick Piggin wrote:
> Eric Dumazet wrote:
>> This was not a working patch, just a way to throw the idea out, since the
>> answers I got showed I was not being understood.
>>
>> In this case, find_extend_vma() should of course take a struct
>> vm_area_cache * argument, like find_vma() does.
>>
>> A single cache per mm is not scalable; oprofile hits it badly even on
>> a dual-CPU config.
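
To be clearer about what I meant, here is a minimal user-space sketch of the
idea, not a kernel patch: the struct vm_area_cache layout and the
vma_lookup_cached() helper are illustrative names only. The point is that the
lookup takes a caller-owned one-entry cache instead of the single cache shared
by the whole mm, so each path refills only its own entry.

/*
 * User-space sketch only, not a kernel patch: hypothetical names
 * (struct vm_area_cache fields, vma_lookup_cached()) illustrating
 * a per-caller one-entry cache instead of a single cache per mm.
 */
#include <stddef.h>
#include <stdio.h>

struct vma {                        /* stand-in for vm_area_struct */
        unsigned long start, end;   /* [start, end) */
};

struct vm_area_cache {              /* one entry, owned by the caller */
        const struct vma *last;
};

/* a sorted array stands in for the mm's rbtree */
static const struct vma *vma_lookup_cached(const struct vma *tree, size_t n,
                                           unsigned long addr,
                                           struct vm_area_cache *cache)
{
        const struct vma *v = cache->last;
        size_t i;

        if (v && addr >= v->start && addr < v->end)
                return v;           /* hit in the caller's own cache */

        for (i = 0; i < n; i++) {   /* linear scan keeps the sketch short */
                if (addr >= tree[i].start && addr < tree[i].end) {
                        cache->last = &tree[i];  /* refill only this caller's cache */
                        return &tree[i];
                }
        }
        return NULL;
}

int main(void)
{
        const struct vma vmas[] = { { 0x1000, 0x2000 }, { 0x8000, 0x9000 } };
        struct vm_area_cache thread_cache = { NULL };  /* e.g. futex path */
        struct vm_area_cache nmi_cache    = { NULL };  /* e.g. oprofile path */

        vma_lookup_cached(vmas, 2, 0x8100, &thread_cache); /* futex vma */
        vma_lookup_cached(vmas, 2, 0x1100, &nmi_cache);    /* text vma (EIP) */
        /* second futex lookup still hits: the NMI lookup did not evict it */
        printf("thread hit: %d\n",
               vma_lookup_cached(vmas, 2, 0x8100, &thread_cache) == &vmas[1]);
        return 0;
}
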
>
> Oh, what sort of workload are you using to show this? The only reason
> that I
> didn't submit my thread cache patches was that I didn't show a big enough
> improvement.
>
A database workload, where the multi-threaded user application is constantly
accessing gigabytes of data, so the L2 cache hit rate is very low. If you
oprofile it, with say a CPU_CLK_UNHALTED:5000 event, find_vma() shows up in
the top 5. Each time oprofile takes an NMI, it calls find_vma() on EIP/RIP and
blows away the target process's cache, which is usually pointing at the data
vma containing the userland futexes. Even with private futexes, it will
probably be pointing at the brk() vma.
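
To show the effect in numbers, here is a rough user-space model of the current
behaviour (simplified stand-ins, not the real mm code): every miss overwrites
the single shared entry, so each profiling lookup of EIP/RIP costs the next
futex lookup its warm cache.

/*
 * Rough model of a single shared cache per mm (simplified names,
 * not the real mm code): every miss overwrites the one shared slot.
 */
#include <stddef.h>
#include <stdio.h>

struct vma { unsigned long start, end; };

struct mm {
        const struct vma *vmas;
        size_t nr;
        const struct vma *mmap_cache;   /* the single shared entry */
};

static const struct vma *find_vma_shared(struct mm *mm, unsigned long addr)
{
        const struct vma *v = mm->mmap_cache;
        size_t i;

        if (v && addr >= v->start && addr < v->end)
                return v;
        for (i = 0; i < mm->nr; i++) {
                if (addr >= mm->vmas[i].start && addr < mm->vmas[i].end) {
                        mm->mmap_cache = &mm->vmas[i];  /* evicts whatever was there */
                        return &mm->vmas[i];
                }
        }
        return NULL;
}

int main(void)
{
        const struct vma vmas[] = { { 0x1000, 0x2000 },   /* text (EIP/RIP) */
                                    { 0x8000, 0x9000 } }; /* data with futexes */
        struct mm mm = { vmas, 2, NULL };
        int hits = 0, i;

        for (i = 0; i < 1000; i++) {
                /* profiling NMI: look up the instruction pointer */
                find_vma_shared(&mm, 0x1100);
                /* thread: the futex lookup now always misses the shared cache */
                hits += (mm.mmap_cache && 0x8100 >= mm.mmap_cache->start &&
                         0x8100 < mm.mmap_cache->end);
                find_vma_shared(&mm, 0x8100);
        }
        printf("futex lookups that found a warm cache: %d / 1000\n", hits);
        return 0;
}

With a per-caller cache as sketched above, the same loop keeps the futex vma
warm for all 1000 iterations, since the profiling path refills its own entry
instead of the thread's.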