Message-ID: <492EF0C2.4090108@google.com>
Date: Thu, 27 Nov 2008 11:10:58 -0800
From: Mike Waychison <mikew@...gle.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
CC: Nick Piggin <npiggin@...e.de>, Ying Han <yinghan@...gle.com>,
Ingo Molnar <mingo@...e.hu>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, akpm <akpm@...ux-foundation.org>,
David Rientjes <rientjes@...gle.com>,
Rohit Seth <rohitseth@...gle.com>,
Hugh Dickins <hugh@...itas.com>,
"H. Peter Anvin" <hpa@...or.com>, edwintorok@...il.com
Subject: Re: [RFC v1][PATCH]page_fault retry with NOPAGE_RETRY
Peter Zijlstra wrote:
> On Thu, 2008-11-27 at 01:28 -0800, Mike Waychison wrote:
>
>> Correct. I don't recall the numbers from the pathological cases we were
>> seeing, but IIRC it was on the order of tens of seconds, likely
>> exacerbated by slower-than-usual disks. I've been digging through my
>> inbox to find numbers without much success -- we've been using a variant
>> of this patch since 2.6.11.
>
>> We generally try to avoid such things, but sometimes a) it can't be
>> easily avoided (third-party libraries, for instance), and b) when it
>> hits us, it affects the overall health of the machine/cluster (the
>> monitoring daemons get blocked, which isn't very healthy).
>
> If it's only monitoring, there might be another solution: keep the
> required data in a separate (approximate) copy so that you don't need
> mmap_sem at all to show it.
>
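Something like this, I take it -- a quick userspace sketch of that idea
(structure and names are mine, not from any patch): mirror the stats the
monitor needs into a seqlock-style shadow copy, updated by whoever already
holds the real lock, so /proc-style readers never touch mmap_sem at all.

#include <stdatomic.h>
#include <stdio.h>

struct vm_shadow {
	atomic_uint seq;	/* odd while an update is in flight */
	unsigned long total_vm;	/* approximate copies of the hot stats */
	unsigned long rss;
};

static struct vm_shadow shadow;

/* Writer side: call where the stats change, with the real lock held. */
static void shadow_update(unsigned long total_vm, unsigned long rss)
{
	atomic_fetch_add_explicit(&shadow.seq, 1, memory_order_acq_rel);
	shadow.total_vm = total_vm;
	shadow.rss = rss;
	atomic_fetch_add_explicit(&shadow.seq, 1, memory_order_release);
}

/*
 * Reader side: lock-free, retries if it raced an update.  (Simplified:
 * a production seqlock needs atomic accesses to the payload too, or
 * this is formally a data race.)
 */
static void shadow_read(unsigned long *total_vm, unsigned long *rss)
{
	unsigned int s1, s2;

	do {
		s1 = atomic_load_explicit(&shadow.seq, memory_order_acquire);
		*total_vm = shadow.total_vm;
		*rss = shadow.rss;
		atomic_thread_fence(memory_order_acquire);
		s2 = atomic_load_explicit(&shadow.seq, memory_order_relaxed);
	} while ((s1 & 1) || s1 != s2);
}

int main(void)
{
	unsigned long tv, rss;

	shadow_update(1234, 56);	/* pretend mmap() just ran */
	shadow_read(&tv, &rss);
	printf("total_vm=%lu rss=%lu\n", tv, rss);
	return 0;
}
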
> If your mmap_sem is so contended that your latencies are unacceptable,
> adding more users to it - even statistics gathering - just isn't going
> to cure the situation.
>
> Furthermore, /proc code usually isn't written with performance in mind,
> so it's usually simple and robust code. Adding it to a 'hot' path like
> you're doing doesn't seem advisable.
>
> Also, releasing and re-acquiring mmap_sem can significantly add to the
> cacheline bouncing that thing already has.
>
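For reference, the release/re-acquire pattern in question looks roughly
like this -- a userspace analogue I've sketched, not the actual patch: if
servicing the "fault" would block on disk, drop the read lock, do the slow
I/O unlocked, then re-take the lock and retry.  The re-take is the extra
cacheline traffic being pointed at.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t map_lock = PTHREAD_RWLOCK_INITIALIZER;
static int map_generation;	/* bumped by "mmap/munmap" writers */
static int page_resident;	/* set once the "page" has been read */

static void slow_disk_read(void)	/* stand-in for real I/O */
{
	usleep(1000);
	page_resident = 1;
}

static void fault_with_retry(void)
{
retry:
	pthread_rwlock_rdlock(&map_lock);
	if (page_resident) {		/* fast path: no I/O needed */
		pthread_rwlock_unlock(&map_lock);
		return;
	}
	int gen = map_generation;
	pthread_rwlock_unlock(&map_lock); /* don't hold it across I/O */

	slow_disk_read();

	pthread_rwlock_rdlock(&map_lock); /* the extra re-acquire */
	int stale = (map_generation != gen);
	pthread_rwlock_unlock(&map_lock);
	if (stale)
		goto retry;		/* mapping changed; redo the fault */
}

int main(void)
{
	fault_with_retry();
	puts("fault serviced");
	return 0;
}
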
The cacheline bouncing is much less of a worry. We expect to look at
these things on the order of 1 Hz, so the bouncing becomes negligible.
The latency to acquire the lock, however, hurts, and it is silly
considering the /proc reader is just another reader. Our monitoring
software here is acting as a litmus test; the real pain is felt by other
threads in the same process that are also blocked trying to acquire the
read lock.
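
To make that convoy concrete, here's a runnable toy (a hypothetical
scenario, not measurements from our clusters): one reader holds the lock
across slow disk I/O, an mmap() writer queues behind it, and with writer
preference every later reader - the monitoring daemon, sibling faulting
threads - now waits out the full I/O behind the writer.

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t mmap_sem;	/* stand-in for mm->mmap_sem */

static void *faulting_reader(void *arg)
{
	(void)arg;
	pthread_rwlock_rdlock(&mmap_sem);
	puts("fault: read lock held across 'disk I/O'...");
	sleep(2);				/* the slow disk */
	pthread_rwlock_unlock(&mmap_sem);
	return NULL;
}

static void *mmap_writer(void *arg)
{
	(void)arg;
	usleep(100 * 1000);			/* arrive second */
	puts("mmap: queued for the write lock");
	pthread_rwlock_wrlock(&mmap_sem);
	pthread_rwlock_unlock(&mmap_sem);
	return NULL;
}

static void *proc_reader(void *arg)
{
	(void)arg;
	usleep(200 * 1000);			/* arrive third */
	puts("/proc: just another reader, waiting...");
	pthread_rwlock_rdlock(&mmap_sem);
	puts("/proc: got the read lock (only after the writer)");
	pthread_rwlock_unlock(&mmap_sem);
	return NULL;
}

int main(void)
{
	pthread_rwlockattr_t attr;
	pthread_t t[3];
	int i;

	pthread_rwlockattr_init(&attr);
	/* glibc knob: queued writers block later readers, roughly the
	 * kernel rwsem's fairness; without it the toy may not convoy. */
	pthread_rwlockattr_setkind_np(&attr,
			PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
	pthread_rwlock_init(&mmap_sem, &attr);

	pthread_create(&t[0], NULL, faulting_reader, NULL);
	pthread_create(&t[1], NULL, mmap_writer, NULL);
	pthread_create(&t[2], NULL, proc_reader, NULL);
	for (i = 0; i < 3; i++)
		pthread_join(t[i], NULL);
	return 0;
}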