lists.openwall.net: Open Source and information security mailing list archives
Date:	Thu, 27 Nov 2008 11:22:57 -0800
From:	Mike Waychison <mikew@...gle.com>
To:	Nick Piggin <npiggin@...e.de>
CC:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Ying Han <yinghan@...gle.com>, Ingo Molnar <mingo@...e.hu>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	akpm <akpm@...ux-foundation.org>,
	David Rientjes <rientjes@...gle.com>,
	Rohit Seth <rohitseth@...gle.com>,
	Hugh Dickins <hugh@...itas.com>,
	"H. Peter Anvin" <hpa@...or.com>, edwintorok@...il.com
Subject: Re: [RFC v1][PATCH]page_fault retry with NOPAGE_RETRY

Nick Piggin wrote:
> On Thu, Nov 27, 2008 at 11:00:07AM +0100, Peter Zijlstra wrote:
>> On Thu, 2008-11-27 at 01:28 -0800, Mike Waychison wrote:
>>
>>> Correct.  I don't recall the numbers from the pathological cases we were 
>>> seeing, but IIRC, it was on the order of 10s of seconds, likely 
>>> exacerbated by slower-than-usual disks.  I've been digging through my 
>>> inbox to find numbers without much success -- we've been using a variant 
>>> of this patch since 2.6.11.
>>> We generally try to avoid such things, but sometimes a) it can't be 
>>> easily avoided (third-party libraries, for instance) and b) when it hits 
>>> us, it affects the overall health of the machine/cluster (the monitoring 
>>> daemons get blocked, which isn't very healthy).
>> If it's only monitoring, there might be another solution: keep the
>> required data in a separate (approximate) copy so that you don't need
>> mmap_sem at all to show it.
>>
>> If your mmap_sem is so contended your latencies are unacceptable, adding
>> more users to it - even statistics gathering, just isn't going to cure
>> the situation.
>>
>> Furthermore, /proc code usually isn't written with performance in mind,
>> so it's usually simple and robust code. Adding it to a 'hot' path like
>> you're doing doesn't seem advisable.
>>
>> Also, releasing and re-acquiring mmap_sem can significantly add to the
>> cacheline bouncing that thing already has.
> 
> Yes, it would be nice to reduce mmap_sem load regardless of any other
> fixes or problems. I guess they're not very worried about cacheline
> bouncing but more about hold time (how many sockets in these systems?
> 4 at most?)
> 
> I guess it is the pagemap stuff that they use most heavily?
> 

We aren't using pagemap yet.  Reading /proc/pid/maps alone hurts.
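Peter's "separate (approximate) copy" suggestion above could look something like a seqcount-protected stats block: the fault path updates the copy, and monitoring reads it without ever touching mmap_sem. A minimal userspace sketch (all names here are illustrative, not actual kernel API):

```c
#include <stdatomic.h>

/* Sketch: an approximate stats copy protected by a seqcount, so a
 * monitoring reader never needs mmap_sem.  Illustrative names only. */
struct mm_stats {
    atomic_uint seq;          /* even = stable, odd = write in flight */
    unsigned long total_vm;   /* example statistic */
    unsigned long rss;        /* example statistic */
};

/* Writer side: called from paths that already hold mmap_sem. */
static void stats_update(struct mm_stats *s,
                         unsigned long vm, unsigned long rss)
{
    atomic_fetch_add_explicit(&s->seq, 1, memory_order_acq_rel); /* odd */
    s->total_vm = vm;
    s->rss = rss;
    atomic_fetch_add_explicit(&s->seq, 1, memory_order_acq_rel); /* even */
}

/* Reader side: lock-free; retries if a write raced with the read. */
static void stats_read(struct mm_stats *s,
                       unsigned long *vm, unsigned long *rss)
{
    unsigned int a, b;
    do {
        a = atomic_load_explicit(&s->seq, memory_order_acquire);
        *vm = s->total_vm;
        *rss = s->rss;
        b = atomic_load_explicit(&s->seq, memory_order_acquire);
    } while (a != b || (a & 1));
}
```

The reader may spin briefly during a concurrent update, but it never queues on mmap_sem, which is the whole point of keeping the copy approximate.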

> pagemap_read looks like it can use get_user_pages_fast. The smaps and
> clear_refs stuff might have been nicer if they could work on ranges
> like pagemap. Then they could avoid mmap_sem as well (although maps
> would need to be sampled and take mmap_sem I guess).
> 
> One problem with dropping mmap_sem is that it hurts priority/fairness.
> And it opens a bit of a (maybe theoretical, but not something to completely
> ignore) forward-progress hole AFAIKS. If mmap_sem is very heavily
> contended, then the refault is going to take a while to get through,
> and then the page might get reclaimed, etc.

Right, this can be an issue.  The way around it is to minimize the 
length of time any single lock holder can sit on it.  Compared to what 
we have today:

   - sleep in a major fault with the read lock held,
   - enqueue a writer behind it,
   - and make all other faults wait on the rwsem,

the retry logic seems a lot better for forward progress.
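The shape of that retry logic can be sketched in userspace C: when the page isn't resident, the handler drops the read lock before sleeping on I/O and returns a retry indication, and the fault entry point re-takes the lock and loops. This is a toy model of the NOPAGE_RETRY idea, not the actual kernel interface; all names are illustrative:

```c
#include <stdbool.h>

/* Toy model of fault retry: never sleep on I/O with the read lock
 * held.  Illustrative names only -- not the real kernel interface. */
enum fault_result { FAULT_OK, FAULT_RETRY };

struct fake_mm {
    bool sem_held;       /* stands in for mmap_sem (read side) */
    bool page_resident;  /* is the "page" already in memory?   */
};

static void down_read(struct fake_mm *mm) { mm->sem_held = true;  }
static void up_read(struct fake_mm *mm)   { mm->sem_held = false; }

/* Slow path: pretend to wait for disk, making the page resident. */
static void wait_for_io(struct fake_mm *mm) { mm->page_resident = true; }

static enum fault_result handle_fault(struct fake_mm *mm)
{
    if (mm->page_resident)
        return FAULT_OK;      /* minor fault: finish under the lock */
    up_read(mm);              /* don't block other faulters...      */
    wait_for_io(mm);          /* ...while we sleep on I/O           */
    return FAULT_RETRY;       /* caller re-takes the lock, retries  */
}

/* Fault entry point: loop until the handler succeeds.
 * Returns how many retries were needed. */
static int do_fault(struct fake_mm *mm)
{
    int retries = 0;
    for (;;) {
        down_read(mm);
        if (handle_fault(mm) == FAULT_OK) {
            up_read(mm);
            return retries;
        }
        retries++;            /* lock was dropped inside the handler */
    }
}
```

In this toy model forward progress is guaranteed because the page stays resident after the I/O completes; Nick's point above is that in the real kernel it might be reclaimed again before the refault wins the lock, which is why the retry window has to stay short.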