Date:   Wed, 24 May 2017 19:38:07 +0800
From:   Xishi Qiu <qiuxishi@...wei.com>
To:     Vlastimil Babka <vbabka@...e.cz>
CC:     Yisheng Xie <xieyisheng1@...wei.com>,
        Kefeng Wang <wangkefeng.wang@...wei.com>, <linux-mm@...ck.org>,
        <linux-kernel@...r.kernel.org>, zhongjiang <zhongjiang@...wei.com>
Subject: Re: [Question] Mlocked count will not be decreased

On 2017/5/24 18:32, Vlastimil Babka wrote:

> On 05/24/2017 10:32 AM, Yisheng Xie wrote:
>> Hi Kefeng,
>> Could you please try this patch.
>>
>> Thanks
>> Yisheng Xie
>> -------------
>> From a70ae975756e8e97a28d49117ab25684da631689 Mon Sep 17 00:00:00 2001
>> From: Yisheng Xie <xieyisheng1@...wei.com>
>> Date: Wed, 24 May 2017 16:01:24 +0800
>> Subject: [PATCH] mlock: fix mlock count can not decrease in race condition
>>
>> Kefeng reported that when running the following test, the Mlocked count in
>> /proc/meminfo does not decrease afterwards:
>>  [1] testcase
>>  linux:~ # cat test_mlockal
>>  grep Mlocked /proc/meminfo
>>   for j in `seq 0 10`
>>   do
>>  	for i in `seq 4 15`
>>  	do
>>  		./p_mlockall >> log &
>>  	done
>>  	sleep 0.2
>>  done
>>  sleep 5 # wait some time to let mlock decrease
>>  grep Mlocked /proc/meminfo
>>
>>  linux:~ # cat p_mlockall.c
>>  #include <sys/mman.h>
>>  #include <stdlib.h>
>>  #include <stdio.h>
>>
>>  #define SPACE_LEN	4096
>>
>>  int main(int argc, char ** argv)
>>  {
>>  	int ret;
>>  	void *adr = malloc(SPACE_LEN);
>>  	if (!adr)
>>  		return -1;
>>
>>  	ret = mlockall(MCL_CURRENT | MCL_FUTURE);
>>  	printf("mlockall ret = %d\n", ret);
>>
>>  	ret = munlockall();
>>  	printf("munlockall ret = %d\n", ret);
>>
>>  	free(adr);
>>  	return 0;
>>  }
>>
>> When a page is munlocked via __munlock_pagevec(), we clear PageMlocked but the isolation
>> can fail in a race condition, and we do not count these pages into delta_munlocked, which
>> causes the mlock count to never decrease.
> 
> Race condition with what? Who else would isolate our pages?
> 

Hi Vlastimil,

I found the root cause: if the page is still sitting on another CPU's pagevec (i.e. it was
not cached on the current CPU), lru_add_drain() will not push it to the LRU. So we should
handle the failure case in mlock_vma_page(), as sketched below.

follow_page_pte()
		...
		if (page->mapping && trylock_page(page)) {
			lru_add_drain();  /* push cached pages to LRU */
			/*
			 * Because we lock page here, and migration is
			 * blocked by the pte's page reference, and we
			 * know the page is still mapped, we don't even
			 * need to check for file-cache page truncation.
			 */
			mlock_vma_page(page);
			unlock_page(page);
		}
		...
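
To make that concrete, here is a rough sketch of the interleaving as I read the code
(not a tested trace; the function names are the ones used in mm/swap.c and mm/mlock.c):

  CPU 1 (page fault path):
	__lru_cache_add(page);		/* page goes onto CPU 1's lru_add pagevec,
					 * it is not on the LRU yet */

  CPU 0 (mlock path, follow_page_pte() above):
	lru_add_drain();		/* drains CPU 0's pagevecs only */
	mlock_vma_page(page);
		TestSetPageMlocked(page);	/* NR_MLOCK += hpage_nr_pages(page) */
		isolate_lru_page(page);		/* fails: the page is still on CPU 1's
						 * pagevec, not on the LRU, so it cannot
						 * be moved to the unevictable list here */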

I think we should take Yisheng's patch, and also add the following change. I think this is
better than using lru_add_drain_all().

diff --git a/mm/mlock.c b/mm/mlock.c
index 3d3ee6c..ca2aeb9 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -88,6 +88,11 @@ void mlock_vma_page(struct page *page)
 		count_vm_event(UNEVICTABLE_PGMLOCKED);
 		if (!isolate_lru_page(page))
 			putback_lru_page(page);
+		else {
+			ClearPageMlocked(page);
+			mod_zone_page_state(page_zone(page), NR_MLOCK,
+					-hpage_nr_pages(page));
+		}
 	}
 }
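
The idea of the new else branch: when isolate_lru_page() fails because the page is still on
some other CPU's pagevec, we undo the PG_mlocked flag and the NR_MLOCK increment, so the
counter stays consistent. The page is still mapped into a VM_LOCKED vma, so I expect it can
be mlocked again lazily when reclaim finds it later. Please correct me if I misread this.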

Thanks,
Xishi Qiu
