Message-ID: <4F0B7F1E.40504@gmail.com>
Date: Mon, 09 Jan 2012 18:58:22 -0500
From: KOSAKI Motohiro <kosaki.motohiro@...il.com>
To: Tao Ma <tm@....ma>
CC: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
David Rientjes <rientjes@...gle.com>,
Minchan Kim <minchan.kim@...il.com>,
Mel Gorman <mel@....ul.ie>,
Johannes Weiner <jweiner@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] mm: do not drain pagevecs for mlock
(1/6/12 1:46 AM), Tao Ma wrote:
> On 01/06/2012 02:33 PM, KOSAKI Motohiro wrote:
>> (1/6/12 1:30 AM), Tao Ma wrote:
>>> On 01/06/2012 02:18 PM, KOSAKI Motohiro wrote:
>>>> 2012/1/6 Tao Ma <tm@....ma>:
>>>>> Hi Kosaki,
>>>>> On 12/30/2011 06:07 PM, KOSAKI Motohiro wrote:
>>>>>>>> Because your test program is too artificial. 20sec/100000times =
>>>>>>>> 200usec. And your program repeats mlock and munlock on the exact
>>>>>>>> same address. So, yes, if lru_add_drain_all() is removed, it
>>>>>>>> becomes nearly a no-op, but that is a worthless comparison. No
>>>>>>>> practical program uses mlock in such a strange way.
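(The test program itself is not quoted in this excerpt. A minimal sketch
of the pattern described above -- mlock and munlock repeated on the exact
same address -- might look like the following; the mapping size and
iteration count are assumptions, and per a later message the real program
apparently issues three mlock calls per iteration:)

    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 4096;      /* one page; actual size unknown */
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED)
                    return 1;

            for (int i = 0; i < 100000; i++) {
                    /* Lock and unlock the very same address each time;
                     * each mlock() syscall runs lru_add_drain_all(). */
                    mlock(p, len);
                    munlock(p, len);
            }

            munmap(p, len);
            return 0;
    }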
>>>>>>> Yes, I should say it is artificial. But mlock did cause a problem
>>>>>>> in our production system, and perf shows that mlock uses much more
>>>>>>> system time than anything else. That's why we created this program:
>>>>>>> to test whether mlock really is that slow. And we compared the
>>>>>>> result with rhel5 (2.6.18), which runs much, much faster.
>>>>>>>
>>>>>>> And according to the commit log you described, we can remove
>>>>>>> lru_add_drain_all safely here, so why add it? At the very least,
>>>>>>> removing it makes mlock much faster compared to the vanilla kernel.
>>>>>>
>>>>>> If we remove it, we lose a way to test mlock: the "Mlocked" field
>>>>>> of /proc/meminfo very easily shows an inaccurate number. So, if the
>>>>>> 200usec is unavoidable, I'll ack you. But I'm not convinced yet.
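(A minimal sketch of the kind of accounting check referred to here --
watching the "Mlocked" counter before and after an mlock() call. The
one-page anonymous mapping is an assumption:)

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Print the Mlocked line from /proc/meminfo. Per the discussion,
     * pages left sitting on per-CPU pagevecs make this counter lag. */
    static void print_mlocked(void)
    {
            char line[128];
            FILE *f = fopen("/proc/meminfo", "r");

            if (!f)
                    return;
            while (fgets(line, sizeof(line), f))
                    if (!strncmp(line, "Mlocked:", 8))
                            fputs(line, stdout);
            fclose(f);
    }

    int main(void)
    {
            void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (p == MAP_FAILED)
                    return 1;
            print_mlocked();        /* before mlock */
            mlock(p, 4096);
            print_mlocked();        /* after: should grow by 4 kB */
            munlock(p, 4096);
            munmap(p, 4096);
            return 0;
    }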
>>>>> Did you find anything new on this?
>>>>
>>>> No.
>>>>
>>>> Or, more exactly, 200usec was my calculation mistake. Your program
>>>> calls mlock 3 times per iteration, so the correct cost is
>>>> 20sec / (100000 * 3) = 66usec per call.
>>> Yes, so mlock can only do about 15000 calls/s (1 / 66usec). That is
>>> even slower than the whole I/O time of some not-very-fast SSDs, and I
>>> don't think it is tolerable. I guess we should remove it, right? Or do
>>> you have any other suggestion that I can try?
>>
>> Read the whole thread.
> I have read the whole thread, and you only said that the test case is
> artificial; there was no suggestion or patch for how to resolve it. As
> I have said, it is very time-consuming, the penalty grows with the
> number of CPU cores, and an I/O to an SSD can complete faster than it.
> So do you think 66usec is OK for a memory operation?
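(For context on the per-CPU penalty: in kernels of this era,
lru_add_drain_all() schedules a drain work item on every CPU and waits
for all of them, so each mlock() call pays a cost proportional to the
CPU count. Roughly, from mm/swap.c of that period:)

    static void lru_add_drain_per_cpu(struct work_struct *dummy)
    {
            lru_add_drain();
    }

    /* Queue the drain on each CPU and wait for completion; the cost
     * grows with the number of CPUs. */
    int lru_add_drain_all(void)
    {
            return schedule_on_each_cpu(lru_add_drain_per_cpu);
    }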
I don't think you've read the thread at all. Please read akpm's comment.
http://www.spinics.net/lists/linux-mm/msg28290.html