Date:	Mon, 15 Jun 2015 11:42:25 -0700
From:	Mike Kravetz <mike.kravetz@...cle.com>
To:	Naoya Horiguchi <n-horiguchi@...jp.nec.com>
CC:	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	David Rientjes <rientjes@...gle.com>,
	Hugh Dickins <hughd@...gle.com>,
	Davidlohr Bueso <dave@...olabs.net>,
	Aneesh Kumar <aneesh.kumar@...ux.vnet.ibm.com>,
	Hillf Danton <hillf.zj@...baba-inc.com>,
	Christoph Hellwig <hch@...radead.org>
Subject: Re: [RFC v4 PATCH 6/9] mm/hugetlb: alloc_huge_page handle areas hole
 punched by fallocate

On 06/14/2015 11:34 PM, Naoya Horiguchi wrote:
> On Thu, Jun 11, 2015 at 02:01:37PM -0700, Mike Kravetz wrote:
>> Areas hole punched by fallocate will not have entries in the
>> region/reserve map.  However, shared mappings with min_size subpool
>> reservations may still have reserved pages.  alloc_huge_page needs
>> to handle this special case and do the proper accounting.
>>
>> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
>> ---
>>   mm/hugetlb.c | 48 +++++++++++++++++++++++++++---------------------
>>   1 file changed, 27 insertions(+), 21 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index ecbaffe..9c295c9 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -692,19 +692,9 @@ static int vma_has_reserves(struct vm_area_struct *vma, long chg)
>>   			return 0;
>>   	}
>>   
>> -	if (vma->vm_flags & VM_MAYSHARE) {
>> -		/*
>> -		 * We know VM_NORESERVE is not set.  Therefore, there SHOULD
>> -		 * be a region map for all pages.  The only situation where
>> -		 * there is no region map is if a hole was punched via
>> -		 * fallocate.  In this case, there really are no reverves to
>> -		 * use.  This situation is indicated if chg != 0.
>> -		 */
>> -		if (chg)
>> -			return 0;
>> -		else
>> -			return 1;
>> -	}
>> +	/* Shared mappings always use reserves */
>> +	if (vma->vm_flags & VM_MAYSHARE)
>> +		return 1;
> 
> This change completely reverts 5/9, so can you omit 5/9?

That was a mistake; this change should not be in the patch.  The
change from 5/9 needs to remain.  Sorry for the confusion, and thanks
for catching it.
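
To be explicit about what stays: vma_has_reserves() keeps roughly the
5/9 version of the shared-mapping check, i.e. the hunk removed above,
with the chg test rather than an unconditional return 1:

	if (vma->vm_flags & VM_MAYSHARE) {
		/*
		 * chg != 0 means there is no region map entry because a
		 * hole was punched via fallocate, so there are no
		 * reserves to use.
		 */
		if (chg)
			return 0;
		else
			return 1;
	}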

>>   	/*
>>   	 * Only the process that called mmap() has reserves for
>> @@ -1601,6 +1591,7 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
>>   	struct hstate *h = hstate_vma(vma);
>>   	struct page *page;
>>   	long chg, commit;
>> +	long gbl_chg;
>>   	int ret, idx;
>>   	struct hugetlb_cgroup *h_cg;
>>   
>> @@ -1608,24 +1599,39 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
>>   	/*
>>   	 * Processes that did not create the mapping will have no
>>   	 * reserves and will not have accounted against subpool
>> -	 * limit. Check that the subpool limit can be made before
>> -	 * satisfying the allocation MAP_NORESERVE mappings may also
>> -	 * need pages and subpool limit allocated allocated if no reserve
>> -	 * mapping overlaps.
>> +	 * limit. Check that the subpool limit will not be exceeded
>> +	 * before performing the allocation.  Allocations for
>> +	 * MAP_NORESERVE mappings also need to be checked against
>> +	 * any subpool limit.
>> +	 *
>> +	 * NOTE: Shared mappings with holes punched via fallocate
>> +	 * may still have reservations, even without entries in the
>> +	 * reserve map as indicated by vma_needs_reservation.  This
>> +	 * would be the case if hugepage_subpool_get_pages returns
>> +	 * zero to indicate no changes to the global reservation count
>> +	 * are necessary.  In this case, pass the output of
>> +	 * hugepage_subpool_get_pages (zero) to dequeue_huge_page_vma
>> +	 * so that the page is not counted against the global limit.
>> +	 * For MAP_NORESERVE mappings always pass the output of
>> +	 * vma_needs_reservation.  For race detection and error cleanup
>> +	 * use output of vma_needs_reservation as well.
>>   	 */
>> -	chg = vma_needs_reservation(h, vma, addr);
>> +	chg = gbl_chg = vma_needs_reservation(h, vma, addr);
>>   	if (chg < 0)
>>   		return ERR_PTR(-ENOMEM);
>> -	if (chg || avoid_reserve)
>> -		if (hugepage_subpool_get_pages(spool, 1) < 0)
>> +	if (chg || avoid_reserve) {
>> +		gbl_chg = hugepage_subpool_get_pages(spool, 1);
>> +		if (gbl_chg < 0)
>>   			return ERR_PTR(-ENOSPC);
>> +	}
>>   
>>   	ret = hugetlb_cgroup_charge_cgroup(idx, pages_per_huge_page(h), &h_cg);
>>   	if (ret)
>>   		goto out_subpool_put;
>>   
>>   	spin_lock(&hugetlb_lock);
>> -	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, chg);
>> +	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve,
>> +					avoid_reserve ? chg : gbl_chg);
> 
> You use chg or gbl_chg depending on avoid_reserve here, and below this line
> there is code like the following:
> 
> 	commit = vma_commit_reservation(h, vma, addr);
> 	if (unlikely(chg > commit)) {
> 		...
> 	}
> 
> Does this also need to be changed to use chg or gbl_chg depending on avoid_reserve?

It should use chg only.  I attempted to address this at the end of the
Note above.
" For race detection and error cleanup use output of vma_needs_reservation
  as well."
I will add more comments to make it clear.
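
Concretely, the race check keeps comparing two reserve map values; a
rough sketch of the (unchanged) code in question:

	/*
	 * Both chg (from vma_needs_reservation) and commit (from
	 * vma_commit_reservation) describe the reserve map, while
	 * gbl_chg reflects the subpool and is only used for the global
	 * accounting above, so the comparison stays chg vs commit.
	 */
	commit = vma_commit_reservation(h, vma, addr);
	if (unlikely(chg > commit)) {
		...
	}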

> # I feel that this reserve-handling code in alloc_huge_page() is too complicated
> # and hard to understand, so some cleanup, like separating the reserve parts into
> # new routine(s), might be helpful...

I agree, let me think about ways to split this up and hopefully make
it easier to understand.
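
For example (purely illustrative; the helper name and exact split here
are hypothetical, not part of the posted series), the charge
computation could be pulled out along these lines:

	/*
	 * Hypothetical helper: return the reserve map charge (chg) and
	 * fill in the charge against the global pool (*gbl_chg), taking
	 * a subpool reference when one is required.
	 */
	static long get_vma_reservation_charges(struct hstate *h,
						struct vm_area_struct *vma,
						unsigned long addr,
						struct hugepage_subpool *spool,
						int avoid_reserve,
						long *gbl_chg)
	{
		long chg = vma_needs_reservation(h, vma, addr);

		if (chg < 0)
			return -ENOMEM;

		*gbl_chg = chg;
		if (chg || avoid_reserve) {
			*gbl_chg = hugepage_subpool_get_pages(spool, 1);
			if (*gbl_chg < 0)
				return -ENOSPC;
		}

		return chg;
	}

alloc_huge_page() would then only check the return value and pass
*gbl_chg (or chg when avoid_reserve is set) down to
dequeue_huge_page_vma() as it does now.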

-- 
Mike Kravetz

> 
> Thanks,
> Naoya Horiguchi
> 
