Message-ID: <20081127120330.GM28285@wotan.suse.de>
Date:	Thu, 27 Nov 2008 13:03:30 +0100
From:	Nick Piggin <npiggin@...e.de>
To:	Török Edwin <edwintorok@...il.com>
Cc:	Mike Waychison <mikew@...gle.com>, Ying Han <yinghan@...gle.com>,
	Ingo Molnar <mingo@...e.hu>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, akpm <akpm@...ux-foundation.org>,
	David Rientjes <rientjes@...gle.com>,
	Rohit Seth <rohitseth@...gle.com>,
	Hugh Dickins <hugh@...itas.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [RFC v1][PATCH]page_fault retry with NOPAGE_RETRY

On Thu, Nov 27, 2008 at 01:39:52PM +0200, Török Edwin wrote:
> On 2008-11-27 11:28, Mike Waychison wrote:
> > Correct.  I don't recall the numbers from the pathological cases we
> > were seeing, but iirc, it was on the order of 10s of seconds, likely
> > exacerbated by slower-than-usual disks.  I've been digging through my
> > inbox to find numbers without much success -- we've been using a
> > variant of this patch since 2.6.11.
> >
> > Török however identified mmap taking on the order of several
> > milliseconds due to this exact problem:
> >
> > http://lkml.org/lkml/2008/9/12/185
> 
> 
> Hi,
> 
> Thanks for the patch. I just tested it on top of 2.6.28-rc6-tip, see
> /proc/lock_stat output at the end.
> 
> Running my testcase shows no significant performance difference. What am
> I doing wrong?
 
The software may just be doing a lot of mmap/munmap activity. Threads +
mmap is never going to be pretty, because unmapping always involves
broadcasting TLB flushes to the other cores sharing the address space...
Software writers shouldn't be scared of using processes (possibly with
some shared memory). Actually, a lot of things get faster (like malloc,
or file descriptor operations) because no locks are needed when the
address space and file table aren't shared.

Despite common perception, processes are actually much *faster* than
threads when doing common operations like these. They are sometimes
slightly slower for things like creation and exit, or context switching,
but if you're doing huge numbers of those operations, then it is unlikely
to be a performance-critical app... :)

(End of rant; sorry, that may not have been helpful to your immediate
problem, but we need to be realistic about what complexity we are going
to add where in the kernel in order to speed things up. And we need to
steer userspace away from problems that are fundamentally hard and not
going to get easier with hardware trends -- like heavy virtual address
activity across multiple threads.)


> ...............................................................................................................................................................................................
> 
>     &sem->wait_lock:  122700  126641  0.42  77.94  125372.37  1779026  7368894  0.27  1099.42  3085559.16
>     ---------------
>     &sem->wait_lock    5943  [<ffffffff8043a768>] __up_write+0x28/0x170
>     &sem->wait_lock    8615  [<ffffffff805ce3ac>] __down_write_nested+0x1c/0xc0
>     &sem->wait_lock   13568  [<ffffffff8043a5a0>] __down_write_trylock+0x20/0x60
>     &sem->wait_lock   49377  [<ffffffff8043a600>] __down_read_trylock+0x20/0x60
>     ---------------
>     &sem->wait_lock    8097  [<ffffffff8043a5a0>] __down_write_trylock+0x20/0x60
>     &sem->wait_lock   31540  [<ffffffff8043a768>] __up_write+0x28/0x170
>     &sem->wait_lock    5501  [<ffffffff805ce3ac>] __down_write_nested+0x1c/0xc0
>     &sem->wait_lock   33342  [<ffffffff8043a600>] __down_read_trylock+0x20/0x60
> 

Interesting. I have some (ancient) patches to make rwsems more scalable
under heavy load by reducing contention on this lock. They should really
have been merged... Not sure how much it would help, but if you're
interested in testing, I could dust them off.
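For anyone wanting to reproduce the measurement above: the quoted numbers come from /proc/lock_stat, which requires a kernel built with CONFIG_LOCK_STAT (a sketch of the usual procedure, run as root; this is a config fragment, not output from the thread):

```shell
# Clear the accumulated statistics before the test run
echo 0 > /proc/lock_stat

# ... run the workload ...

# Show the contended rwsem spinlock entries, as quoted above
grep -A 10 'wait_lock' /proc/lock_stat
```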

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
