Message-ID: <492E9C3C.9050507@gmail.com>
Date:	Thu, 27 Nov 2008 15:10:20 +0200
From:	Török Edwin <edwintorok@...il.com>
To:	Nick Piggin <npiggin@...e.de>
CC:	Mike Waychison <mikew@...gle.com>, Ying Han <yinghan@...gle.com>,
	Ingo Molnar <mingo@...e.hu>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, akpm <akpm@...ux-foundation.org>,
	David Rientjes <rientjes@...gle.com>,
	Rohit Seth <rohitseth@...gle.com>,
	Hugh Dickins <hugh@...itas.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [RFC v1][PATCH]page_fault retry with NOPAGE_RETRY

On 2008-11-27 15:05, Nick Piggin wrote:
> On Thu, Nov 27, 2008 at 02:52:10PM +0200, Török Edwin wrote:
>   
>> On 2008-11-27 14:39, Nick Piggin wrote:
>>     
>>> And then you also get the advantages of reduced contention on other
>>> shared locks and resources.
>>>   
>>>       
>> Thanks for the tips, but let's get back to the original question:
>> why don't I see any performance improvement with the fault-retry patches?
>>     
>
> Because, as you said, your app is CPU bound and page faults don't need to
> sleep very much. There is too much contention on the write side, rather
> than too much contention/hold time on the read side.
>
>  
>   
>> My testcase only compares reading files with mmap vs. reading files with
>> read, with different numbers of threads.
>> Leaving aside other reasons why mmap is slower, there should be some
>> speedup when running 4 threads vs. 1 thread, but:
>>
>> 1 thread: read: 27.18, 28.76
>> 1 thread: mmap: 25.45, 25.24
>> 2 thread: read: 16.03, 15.66
>> 2 thread: mmap: 22.20, 20.99
>> 4 thread: read: 9.15, 9.12
>> 4 thread: mmap: 20.38, 20.47
>>
>> With mmap, the speed with 4 threads is about the same as with 2 threads,
>> yet with read it scales nicely.
>> And the patch doesn't seem to improve scalability.
>> How can I find out if the patch works as expected? [i.e. verify that
>> faults are actually retried, and that they don't keep the semaphore locked]
>>     
>
> Yeah, that workload will be completely contended on the mmap_sem write-side
> if the files are in cache. The google patch won't help at all in that
> case.
>   
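
For reference, here is a minimal sketch of the two per-thread loops being
compared (not the original testcase; the file path, thread count, iteration
count and error handling are simplified placeholders). Each mmap iteration
takes mmap_sem for writing in mmap() and munmap(), while the faults that pull
the pages in take it only for reading and, with the file already in the page
cache, never sleep; that is the write-side contention described above.

/*
 * Minimal sketch of the mmap vs. read comparison, not the original
 * testcase.  File path, thread count and iteration count are placeholders.
 * Build with: gcc -O2 -pthread sketch.c
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define NTHREADS   4        /* placeholder: 1, 2 or 4 in the numbers above */
#define ITERATIONS 100      /* placeholder */

static const char *path = "/tmp/testfile";   /* placeholder input file */

static void *mmap_worker(void *arg)
{
	unsigned long sum = 0;

	for (int i = 0; i < ITERATIONS; i++) {
		struct stat st;
		int fd = open(path, O_RDONLY);

		if (fd < 0 || fstat(fd, &st) < 0)
			exit(1);
		/* mmap() and munmap() both take mmap_sem for writing */
		unsigned char *p = mmap(NULL, st.st_size, PROT_READ,
					MAP_PRIVATE, fd, 0);
		if (p == MAP_FAILED)
			exit(1);
		/* each fault takes mmap_sem for reading; with the file in
		   the page cache the fault never needs to sleep */
		for (off_t j = 0; j < st.st_size; j++)
			sum += p[j];
		munmap(p, st.st_size);
		close(fd);
	}
	return (void *)sum;
}

static void *read_worker(void *arg)
{
	unsigned long sum = 0;
	char buf[65536];

	for (int i = 0; i < ITERATIONS; i++) {
		int fd = open(path, O_RDONLY);
		ssize_t n;

		if (fd < 0)
			exit(1);
		/* read() copies from the page cache and does not normally
		   take mmap_sem at all */
		while ((n = read(fd, buf, sizeof(buf))) > 0)
			for (ssize_t j = 0; j < n; j++)
				sum += buf[j];
		close(fd);
	}
	return (void *)sum;
}

int main(int argc, char **argv)
{
	void *(*worker)(void *) = (argc > 1 && !strcmp(argv[1], "read"))
				  ? read_worker : mmap_worker;
	pthread_t tid[NTHREADS];

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}

Running it with no argument exercises the mmap path and with "read" the
read path; varying NTHREADS mirrors the shape of the comparison above.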

Ok. Sorry for hijacking the thread; my testcase is not a good one for
what this patch tries to solve.

Best regards,
--Edwin
