Message-ID: <4B2B8D2A.1020804@redhat.com>
Date: Fri, 18 Dec 2009 09:09:46 -0500
From: Rik van Riel <riel@...hat.com>
To: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
CC: lwoodman@...hat.com, LKML <linux-kernel@...r.kernel.org>,
akpm@...ux-foundation.org, linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH v2] vmscan: limit concurrent reclaimers in shrink_zone
On 12/18/2009 05:27 AM, KOSAKI Motohiro wrote:
>> KOSAKI Motohiro wrote:
>> Finally, having said all that, the system still struggles reclaiming
>> memory with ~10000 processes trying at the same time; you fix one
>> bottleneck and it moves somewhere else. The latest run showed all but
>> one running process spinning in page_lock_anon_vma() trying for the
>> anon_vma lock. I noticed that there are ~5000 vmas linked to one
>> anon_vma; this seems excessive!!!
>>
>> I changed the anon_vma->lock to a rwlock_t and page_lock_anon_vma() to use
>> read_lock() so multiple callers could execute the page_reference_anon code.
>> This seems to help quite a bit.
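
(For concreteness, the change described above would presumably look
something like the sketch below; this is reconstructed from the
description and the 2.6.32-era page_lock_anon_vma(), not the actual
patch. page_unlock_anon_vma() would need the matching read_unlock().)

/*
 * Sketch only: anon_vma->lock becomes an rwlock_t so that multiple
 * page_lock_anon_vma() callers can walk the same anon_vma
 * concurrently; modifications of the vma list would take the
 * write side.
 */
struct anon_vma {
	rwlock_t lock;			/* was: spinlock_t lock */
	struct list_head head;		/* list of private "related" vmas */
};

struct anon_vma *page_lock_anon_vma(struct page *page)
{
	struct anon_vma *anon_vma;
	unsigned long anon_mapping;

	rcu_read_lock();
	anon_mapping = (unsigned long) page->mapping;
	if (!(anon_mapping & PAGE_MAPPING_ANON))
		goto out;
	if (!page_mapped(page))
		goto out;

	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
	read_lock(&anon_vma->lock);	/* was: spin_lock() */
	return anon_vma;
out:
	rcu_read_unlock();
	return NULL;
}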
>
> Ugh, no. rw-spinlocks are evil, please don't use them. They have bad
> performance characteristics: a steady stream of read_lock callers can
> block a write_lock for a very long time.
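
(To make that concrete: a classic rw-spinlock writer has to spin until
the reader count drops to zero, and new readers can keep taking the
lock in the meantime. A minimal sketch of the pattern, with
hypothetical function names:)

/*
 * Thousands of CPUs in the reclaim path doing this keep the reader
 * count nonzero almost continuously...
 */
void reclaim_side(struct anon_vma *anon_vma)
{
	read_lock(&anon_vma->lock);	/* succeeds while other readers hold it */
	/* ... walk the list of vmas ... */
	read_unlock(&anon_vma->lock);
}

/*
 * ...so a writer, e.g. linking a new vma at fork() time, can spin
 * here for a very long time:
 */
void update_side(struct anon_vma *anon_vma, struct vm_area_struct *vma)
{
	write_lock(&anon_vma->lock);	/* waits for reader count == 0 */
	list_add_tail(&vma->anon_vma_node, &anon_vma->head);
	write_unlock(&anon_vma->lock);
}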
>
> And I would like to confirm one thing: the anon_vma design hasn't
> changed in many years. Is this really a performance regression? Are
> we looking at the right regression point?
In 2.6.9 and 2.6.18 the system would hit different contention
points before getting to the anon_vma lock. Now that we've
gotten the other contention points out of the way, this one
has finally been exposed.
--
All rights reversed.