Message-ID: <54F80965.6010204@gmail.com>
Date:	Thu, 05 Mar 2015 01:44:37 -0600
From:	Kazutomo Yoshii <kazutomo.yoshii@...il.com>
To:	David Rientjes <rientjes@...gle.com>
CC:	Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: fix do_mbind return value

On 03/05/2015 12:53 AM, David Rientjes wrote:
> On Wed, 4 Mar 2015, Kazutomo Yoshii wrote:
>
>> I noticed that numa_alloc_onnode() failed to allocate memory on the
>> specified node in v4.0-rc1. I added code to check the return value
>> of walk_page_range() in queue_pages_range() so that do_mbind() only
>> returns an error number or zero.
>>
> I assume this is libnuma-2.0.10?
I used libnuma-2.0.9.  Here is the strace output related to
numa_alloc_onnode():

mmap(NULL, 4194304, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, 0, 0) = 0x7fe9b8334000
mbind(0x7fe9b8334000, 4194304, MPOL_BIND, 0x1b43bf0, 1025, 0) = 1

I believe mbind() returning a positive number is simply wrong behavior.
The tricky part is that libnuma only checks for a negative error, so
numa_alloc_onnode() itself didn't fail.  I first noticed this when
checking memory placement via /proc/<pid>/pagemap.
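
For reference, here is the sequence from the strace above as a standalone
test program (my own sketch, not the libnuma code; assumes <numaif.h> and
linking with -lnuma, and node 0 is just an example):

#include <numaif.h>		/* mbind(), MPOL_BIND (libnuma) */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4UL << 20;			/* 4 MiB, as in the strace */
	unsigned long nodemask = 1UL << 0;	/* ask for node 0 */
	void *p;
	long ret;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* libnuma only treats a negative return as failure, so print the
	 * raw value; on v4.0-rc1 this comes back as 1 instead of 0. */
	ret = mbind(p, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0);
	printf("mbind() returned %ld\n", ret);

	munmap(p, len);
	return 0;
}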

>> Signed-off-by: Kazutomo Yoshii <kazutomo.yoshii@...il.com>
>> ---
>>   mm/mempolicy.c | 6 +++++-
>>   1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>> index 4721046..ea79171 100644
>> --- a/mm/mempolicy.c
>> +++ b/mm/mempolicy.c
>> @@ -644,6 +644,7 @@ queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end,
>>   		.nmask = nodes,
>>   		.prev = NULL,
>>   	};
>> +	int err;
>>   	struct mm_walk queue_pages_walk = {
>>   		.hugetlb_entry = queue_pages_hugetlb,
>>   		.pmd_entry = queue_pages_pte_range,
>> @@ -652,7 +653,10 @@ queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end,
>>   		.private = &qp,
>>   	};
>>
>> -	return walk_page_range(start, end, &queue_pages_walk);
>> +	err = walk_page_range(start, end, &queue_pages_walk);
>> +	if (err < 0)
>> +		return err;
>> +	return 0;
>>  }
>>
>>  /*
> I'm afraid I don't think this is the right fix; if walk_page_range()
> returns a positive value, then it should have been supplied by one of
> the callbacks in the struct mm_walk, which none of these happen to do.
> I think this may be a problem with commit 6f4576e3687b ("mempolicy:
> apply page table walker on queue_pages_range()"), so let's add Naoya
> to the thread.
Thank you for the pointer!
I think queue_pages_test_walk() returns 1.
My fix may not be in the right place, but someone needs to fix this.
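
Until the kernel side is sorted out, callers that invoke mbind() directly
can defend against this by treating any nonzero return as failure instead
of only negative values; a hypothetical wrapper (illustrative only, not
the libnuma code):

#include <numaif.h>
#include <stddef.h>
#include <sys/mman.h>

/* Allocate len bytes bound to "node"; any mbind() result other than 0 is
 * treated as failure, so the bogus positive return is caught as well. */
static void *alloc_on_node_checked(size_t len, int node)
{
	unsigned long nodemask = 1UL << node;	/* assumes node < 64 */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return NULL;
	if (mbind(p, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0) != 0) {
		munmap(p, len);
		return NULL;
	}
	return p;
}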

- kaz


