Message-ID: <CAHGf_=qhKbVCeUe+y8Hmb=ke-f417K5EYFo=j4ZODVGwewgh6A@mail.gmail.com>
Date: Fri, 6 Jan 2012 01:18:02 -0500
From: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
To: Tao Ma <tm@....ma>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
David Rientjes <rientjes@...gle.com>,
Minchan Kim <minchan.kim@...il.com>,
Mel Gorman <mel@....ul.ie>,
Johannes Weiner <jweiner@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] mm: do not drain pagevecs for mlock

2012/1/6 Tao Ma <tm@....ma>:
> Hi Kosaki,
> On 12/30/2011 06:07 PM, KOSAKI Motohiro wrote:
>>>> Because your test program is too artificial. 20sec/100000times =
>>>> 200usec. And your program repeats mlock and munlock on the exact
>>>> same address. So, yes, if lru_add_drain_all() is removed, it becomes
>>>> nearly a no-op. But it's a worthless comparison: no practical
>>>> program uses mlock in such a strange way.
>>> Yes, I should say it is artificial. But mlock did cause a problem in
>>> our production system, and perf shows that mlock uses much more system
>>> time than anything else. That's why we created this program: to test
>>> whether mlock really is that slow. And we compared the result with
>>> RHEL 5 (2.6.18), which runs much, much faster.
>>>
>>> And from the commit log you described, we can remove lru_add_drain_all
>>> safely here, so why add it? At least removing it makes mlock much
>>> faster than the vanilla kernel.
>>
>> If we remove it, we lose a way to test mlock: the "Mlocked" field of
>> /proc/meminfo will very easily show an inaccurate number. So, if the
>> 200usec is unavoidable, I'll ack you. But I'm not convinced yet.
> Did you find anything new on this?
No.
Or more precisely, 200usec was my calculation mistake: your program calls
mlock 3 times per iteration, so the correct cost is 66usec.
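
For reference, here is a minimal sketch of the shape of such a test
program. The original was not posted in this thread, so the buffer size
and the pattern of three mlock() calls per iteration are assumptions
inferred from the 20s / (100000 * 3) ~= 66usec arithmetic above; running
it may also require raising RLIMIT_MEMLOCK (ulimit -l).

/*
 * Sketch of the mlock/munlock microbenchmark discussed above.
 * Buffer size and call pattern are assumed, not taken from the
 * original test program.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define ITERATIONS 100000
#define BUF_SIZE   (4096 * 16)   /* assumed: 16 pages */

int main(void)
{
	/* One fixed buffer, locked and unlocked at the same address. */
	void *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 0, BUF_SIZE);   /* fault the pages in once */

	struct timespec start, end;
	clock_gettime(CLOCK_MONOTONIC, &start);

	for (int i = 0; i < ITERATIONS; i++) {
		/* Three mlock() calls per iteration, matching the
		 * 66usec-per-call arithmetic, then one munlock(). */
		mlock(buf, BUF_SIZE);
		mlock(buf, BUF_SIZE);
		mlock(buf, BUF_SIZE);
		munlock(buf, BUF_SIZE);
	}

	clock_gettime(CLOCK_MONOTONIC, &end);
	double secs = (end.tv_sec - start.tv_sec) +
		      (end.tv_nsec - start.tv_nsec) / 1e9;
	printf("%.1f s total, %.1f usec per mlock call\n",
	       secs, secs * 1e6 / (ITERATIONS * 3.0));

	munmap(buf, BUF_SIZE);
	return 0;
}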