Date:	Fri, 30 Dec 2011 16:48:35 +0800
From:	Tao Ma <>
To:	KOSAKI Motohiro <>
Cc:	David Rientjes <>,
	Minchan Kim <>,
	Mel Gorman <>,
	Johannes Weiner <>,
	Andrew Morton <>
Subject: Re: [PATCH] mm: do not drain pagevecs for mlock

On 12/30/2011 04:11 PM, KOSAKI Motohiro wrote:
> 2011/12/30 Tao Ma <>:
>> In our testing of mlock, we have found a severe performance regression.
>> Further investigation shows that mlock is blocked heavily by
>> lru_add_drain_all, which calls schedule_on_each_cpu and flushes the
>> work queue on every cpu; that is very slow when there are many cpus.
>> So we have tried 2 ways to solve it:
>> 1. Add a per-cpu counter for all the pagevecs, so that we don't
>>   schedule and flush the lru_drain work if a cpu doesn't have any
>>   pagevecs (I have already finished this code; a sketch of the idea
>>   follows below).
>> 2. Remove the lru_add_drain_all call.
>> The first one has a problem: in our production system all the cpus
>> are busy, so there is very little chance for a cpu to have 0 pagevecs,
>> except when several consecutive mlocks are run.
>> From the commit log that added this call (8891d6da), it seems we
>> don't have to make it. So the 2nd way seems both easy and workable,
>> and hence this patch.
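
For concreteness, the per-cpu check of approach 1 could look roughly
like the sketch below. This is illustrative only, not the finished code
mentioned above: pagevec_count(), lru_add_pvecs, lru_rotate_pvecs and
lru_add_drain_per_cpu are existing symbols in mm/swap.c of this era,
while cpu_has_pagevecs(), the per-cpu work items and the cpumask
bookkeeping are assumptions (locking around the shared cpumask is
elided).

/* Sketch: would sit in mm/swap.c next to the real lru_add_drain_all(). */
static bool cpu_has_pagevecs(int cpu)
{
	struct pagevec *pvecs = per_cpu(lru_add_pvecs, cpu);
	int lru;

	for (lru = 0; lru < NR_LRU_LISTS; lru++)
		if (pagevec_count(&pvecs[lru]))
			return true;
	return pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) != 0;
}

int lru_add_drain_all(void)
{
	static DEFINE_PER_CPU(struct work_struct, drain_work);
	static struct cpumask has_work;	/* sketch: needs a lock in real code */
	int cpu;

	get_online_cpus();
	cpumask_clear(&has_work);

	for_each_online_cpu(cpu) {
		struct work_struct *work = &per_cpu(drain_work, cpu);

		/* Skip cpus with empty pagevecs: nothing to drain or flush. */
		if (!cpu_has_pagevecs(cpu))
			continue;
		INIT_WORK(work, lru_add_drain_per_cpu);
		schedule_work_on(cpu, work);
		cpumask_set_cpu(cpu, &has_work);
	}

	/* Only wait for the cpus we actually scheduled work on. */
	for_each_cpu(cpu, &has_work)
		flush_work(&per_cpu(drain_work, cpu));

	put_online_cpus();
	return 0;
}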
> Could you please show us your system environment and benchmark programs?
> Usually lru_add_drain_*() is much faster than the mlock() body itself,
> because mlock() does plenty of memset(page) work.
The system environment is a 16-core Xeon E5620 machine with 24G of memory.

I have attached the program. It is very simple and just uses mlock/munlock.


Attachment: "test_mlock.c" (text/x-csrc, 982 bytes)
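
The attached file itself is not reproduced in the archive. A
microbenchmark along the lines described above (the buffer size and loop
count here are assumptions, not the contents of the actual test_mlock.c)
would exercise the same lru_add_drain_all path:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/time.h>

#define LEN	(128 * 1024 * 1024)	/* assumed buffer size */
#define LOOPS	100			/* assumed iteration count */

int main(void)
{
	/* Needs root or a raised RLIMIT_MEMLOCK to lock this much memory. */
	char *buf = malloc(LEN);
	struct timeval start, end;
	long usec;
	int i;

	if (!buf)
		return 1;
	memset(buf, 0, LEN);	/* fault all pages in up front */

	gettimeofday(&start, NULL);
	for (i = 0; i < LOOPS; i++) {
		if (mlock(buf, LEN) || munlock(buf, LEN)) {
			perror("mlock/munlock");
			return 1;
		}
	}
	gettimeofday(&end, NULL);

	usec = (end.tv_sec - start.tv_sec) * 1000000L +
	       (end.tv_usec - start.tv_usec);
	printf("%ld us per mlock/munlock pair\n", usec / LOOPS);
	return 0;
}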
