Message-ID: <4B2A22C0.8080001@redhat.com>
Date:	Thu, 17 Dec 2009 07:23:28 -0500
From:	Larry Woodman <lwoodman@...hat.com>
To:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
CC:	Rik van Riel <riel@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>, akpm@...ux-foundation.org,
	linux-mm <linux-mm@...ck.org>
Subject: FWD:  [PATCH v2] vmscan: limit concurrent reclaimers in shrink_zone

KOSAKI Motohiro wrote:
> (offlist)
>
> Larry, may I ask the current status of your issue below?  I can't
> reproduce it, and I don't want to keep a lot of patches up in the air.
>

Yes, sorry for the delay, but I don't have direct or exclusive access to
these large systems and workloads.  As far as I can tell this patch series
does help prevent total system hangs running AIM7.  I did have trouble with
the early postings, mostly due to the use of sleep_on() and wake_up(), but
those appear to be fixed.

However, I did add more debug code and see ~10000 processes blocked in
shrink_zone_begin().  This is expected but bothersome: practically all of
the processes remain runnable for the entire duration of these AIM runs,
and collectively all these runnable processes overwhelm the VM system.
There are many more runnable processes now than were ever seen before,
~10000 now versus ~100 on RHEL5 (2.6.18 based).  So we have also been
experimenting with some of the CFS scheduler tunables to see if this is
responsible...
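
For reference, the throttling shrink_zone_begin() is doing can be sketched
roughly as below.  This is only an illustration of the idea; the field names
(zone->concurrent_reclaimers as an atomic_t, zone->reclaim_wait as a
wait_queue_head_t), the helper bodies, and the default are my assumptions,
not the posted patches:

/*
 * Illustrative sketch only -- not the posted patch.  The idea: cap the
 * number of tasks allowed into shrink_zone() per zone, and make the rest
 * sleep on a wait queue instead of piling into the reclaim path.
 */
static unsigned int max_zone_concurrent_reclaimers;	/* default: num_online_cpus() */

static void shrink_zone_begin(struct zone *zone)
{
	/* Sleep until we can grab one of the per-zone reclaim slots. */
	wait_event(zone->reclaim_wait,
		   atomic_add_unless(&zone->concurrent_reclaimers, 1,
				     max_zone_concurrent_reclaimers));
}

static void shrink_zone_end(struct zone *zone)
{
	/* Release our slot and wake one of the waiters. */
	atomic_dec(&zone->concurrent_reclaimers);
	wake_up(&zone->reclaim_wait);
}

Using wait_event() with an explicit condition, rather than sleep_on(), is
what avoids the lost-wakeup problem I hit with the early postings.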
> Also, I have privately integrated the page_referenced() improvement
> patch series and the limit-concurrent-reclaimers patch series.  I plan
> to post them to LKML this weekend; comments are welcome.
>

The only problem I noticed with the page_referenced() patch was an increase
in try_to_unmap() failures, which causes more re-activations.  This is very
obvious with the tracepoints I have posted over the past few months, but
they were never included.  I didn't get a chance to figure out the exact
cause because of limited access to the hardware and workload.  This patch
series also seems to help with the overall stalls in the VM system.
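
(For context on why the unmap failures show up directly as re-activations:
in shrink_page_list() a failed try_to_unmap() sends the page straight back
to the active list.  Quoting the relevant switch roughly from memory of
current mainline, so treat it as approximate:)

	/* mm/vmscan.c, shrink_page_list(), approximately: */
	if (page_mapped(page) && mapping) {
		switch (try_to_unmap(page, TTU_UNMAP)) {
		case SWAP_FAIL:
			goto activate_locked;	/* counted as a re-activation */
		case SWAP_AGAIN:
			goto keep_locked;
		case SWAP_MLOCK:
			goto cull_mlocked;
		case SWAP_SUCCESS:
			; /* try to free the page below */
		}
	}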
>
> Changelog from the last post:
>  - Remade the limit-concurrent-reclaimers series and sorted out its
>    patch order.
>  - Changed the default maximum number of concurrent reclaimers from 8 to
>    num_online_cpus().  The only negative comment on the last post was
>    from Andi: he dislikes a constant default value.  Also, allowing more
>    than num_online_cpus() is really silly, so this is a low-risk change.
>    (We may still change the default; as far as I have measured, a small
>    value gives better benchmark results, but I'm not sure a small value
>    won't cause a regression.)
>  - Improved OOM and SIGKILL behavior.
>    (RHEL5's vmscan has TIF_MEMDIE recovery logic, but current mainline
>     doesn't.  I don't want RHEL6 to have a regression.)
>
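
(On the OOM/SIGKILL item above: I am only guessing at the shape of it, but
the obvious approach is to let an OOM-killed or fatally signalled task
bypass the reclaimer throttle so it can exit and free its memory.  The
helper below is purely illustrative, not code from the series:)

static bool reclaimer_may_skip_throttle(void)
{
	/* Hypothetical helper, not from the posted series: a task that has
	 * been OOM-killed (TIF_MEMDIE) or has a fatal signal pending should
	 * not be made to wait behind other reclaimers. */
	return test_thread_flag(TIF_MEMDIE) || fatal_signal_pending(current);
}

A task in that state is about to release its memory anyway, so throttling
it only delays OOM recovery.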
>> On Fri, 2009-12-11 at 16:46 -0500, Rik van Riel wrote:
>>
>> Rik, the latest patch appears to have a problem, although I don't know
>> what the problem is yet.  When the system ran out of memory we saw
>> thousands of runnable processes and 100% system time:
>>
>>
>>  9420  2  29824  79856  62676  19564    0    0     0     0 8054  379  0 100  0  0  0
>>  9420  2  29824  79368  62292  19564    0    0     0     0 8691  413  0 100  0  0  0
>>  9421  1  29824  79780  61780  19820    0    0     0     0 8928  408  0 100  0  0  0
>>
>> The system would not respond, so I don't know what's going on yet.
>> I'll add debug code to figure out why it's in that state as soon as I
>> get access to the hardware.
>>

This was in response to Rik's first patch and seems to be fixed by the
latest patch set.

Finally, having said all that, the system still struggles to reclaim memory
with ~10000 processes trying at the same time: you fix one bottleneck and it
moves somewhere else.  The latest run showed all but one running process
spinning in page_lock_anon_vma(), trying for the anon_vma lock.  I noticed
that there are ~5000 VMAs linked to one anon_vma, which seems excessive!

I changed the anon_vma->lock to an rwlock_t and page_lock_anon_vma() to use
read_lock(), so multiple callers can execute the page_referenced_anon() code
concurrently.  This seems to help quite a bit.
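
For concreteness, the read-side change looks roughly like the following.
This is based on my reading of mm/rmap.c with the lock type swapped; the
exact diff I tested may differ slightly:

/* struct anon_vma's lock converted from spinlock_t to rwlock_t. */

struct anon_vma *page_lock_anon_vma(struct page *page)
{
	struct anon_vma *anon_vma;
	unsigned long anon_mapping;

	rcu_read_lock();
	anon_mapping = (unsigned long)page->mapping;
	if (!(anon_mapping & PAGE_MAPPING_ANON))
		goto out;
	if (!page_mapped(page))
		goto out;

	anon_vma = (struct anon_vma *)(anon_mapping - PAGE_MAPPING_ANON);
	/* read_lock() instead of spin_lock(): concurrent
	 * page_referenced_anon() callers no longer serialize on a single,
	 * heavily shared anon_vma. */
	read_lock(&anon_vma->lock);
	return anon_vma;
out:
	rcu_read_unlock();
	return NULL;
}

void page_unlock_anon_vma(struct anon_vma *anon_vma)
{
	read_unlock(&anon_vma->lock);
	rcu_read_unlock();
}

The write-side users of the lock (the vma link/unlink paths) take
write_lock() on the same lock, so the structure is still protected; only
the read-mostly reclaim path gains concurrency.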


>> Larry


View attachment "aim.patch" of type "text/x-patch" (4799 bytes)
