Message-ID: <4F0A9685.6060103@tao.ma>
Date: Mon, 09 Jan 2012 15:25:57 +0800
From: Tao Ma <tm@....ma>
To: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
CC: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
David Rientjes <rientjes@...gle.com>,
Minchan Kim <minchan.kim@...il.com>,
Mel Gorman <mel@....ul.ie>,
Johannes Weiner <jweiner@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] mm: do not drain pagevecs for mlock
Hi KOSAKI,
On 12/30/2011 06:07 PM, KOSAKI Motohiro wrote:
>>> Because your test program is too artificial. 20sec/100000 times =
>>> 200usec. And your program repeatedly mlocks and munlocks the exact
>>> same address, so yes, if lru_add_drain_all() is removed, it becomes
>>> nearly a no-op. But that is a worthless comparison; no practical
>>> program uses mlock in such a strange way.
>> Yes, I admit it is artificial. But mlock did cause a problem in our
>> production system, and perf showed that mlock consumed far more
>> system time than anything else. That's why we wrote this program: to
>> test whether mlock is really that slow. And we compared the result
>> with rhel5 (2.6.18), which runs much, much faster.
>>
>> And according to the commit log you described, we can safely remove
>> lru_add_drain_all here, so why add it? At the very least, removing it
>> makes mlock much faster than the vanilla kernel.
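
(For reference, the test program is essentially a loop like the sketch
below. The mapping size and the reporting are illustrative, not the
exact program we ran, but the shape is the same: mlock and munlock the
same mapping 100000 times and report the per-iteration cost.)

/* artificial mlock/munlock microbenchmark -- illustrative sketch */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/time.h>

#define ITERATIONS	100000
#define LEN		(128 * 1024)	/* size is arbitrary */

int main(void)
{
	struct timeval start, end;
	long usec;
	int i;

	void *buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 0, LEN);		/* touch the pages once */

	gettimeofday(&start, NULL);
	for (i = 0; i < ITERATIONS; i++) {
		if (mlock(buf, LEN) || munlock(buf, LEN)) {
			perror("mlock/munlock");
			return 1;
		}
	}
	gettimeofday(&end, NULL);

	usec = (end.tv_sec - start.tv_sec) * 1000000L +
	       (end.tv_usec - start.tv_usec);
	printf("%d iterations, %ld usec each\n",
	       ITERATIONS, usec / ITERATIONS);
	munmap(buf, LEN);
	return 0;
}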
>
> If we remove it, we lose a way to verify mlock: the "Mlocked" field
> of /proc/meminfo can very easily show an inaccurate number. So if the
> 200usec is unavoidable, I'll ack you. But I'm not convinced yet.
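
That is a fair point. The kind of check you mean can be written as a
small program that mlocks a region and compares the growth of the
"Mlocked:" counter in /proc/meminfo against the expected size (an
illustrative sketch only; the mapping size is arbitrary):

/* does Mlocked in /proc/meminfo track an mlock()? -- sketch */
#include <stdio.h>
#include <sys/mman.h>

static long read_mlocked_kb(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[128];
	long kb = -1;

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "Mlocked: %ld kB", &kb) == 1)
			break;
	fclose(f);
	return kb;
}

int main(void)
{
	const size_t len = 8 * 1024 * 1024;	/* 8 MB, arbitrary */
	long before, after;

	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	before = read_mlocked_kb();
	mlock(buf, len);
	after = read_mlocked_kb();

	printf("Mlocked grew by %ld kB, expected about %zu kB\n",
	       after - before, len / 1024);
	return 0;
}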
Since you don't think removing lru_add_drain_all is appropriate, I have
created another patch set to resolve the problem. It adds a new per-cpu
counter that records the number of pages currently sitting in that
cpu's pagevecs, so if the counter is 0 we skip draining the
corresponding cpu. Does that make sense to you? A rough sketch of the
idea is below.
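
(Conceptual sketch only, not the actual patch set -- the names here are
invented for illustration, and the real patches have to hook the
counter into the existing pagevec add and drain paths.)

#include <linux/percpu.h>

/* per-cpu count of pages parked in this cpu's lru pagevecs */
static DEFINE_PER_CPU(unsigned int, lru_pagevec_count);

/* bump whenever a page is added to one of this cpu's pagevecs */
static inline void lru_pagevec_count_inc(void)
{
	this_cpu_inc(lru_pagevec_count);
}

/* reset after this cpu's pagevecs have been drained */
static inline void lru_pagevec_count_reset(void)
{
	this_cpu_write(lru_pagevec_count, 0);
}

/*
 * lru_add_drain_all() would then schedule drain work only on the cpus
 * for which this returns true, instead of on every online cpu.
 */
static inline bool lru_drain_needed(int cpu)
{
	return per_cpu(lru_pagevec_count, cpu) != 0;
}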
Thanks
Tao