Date: Mon, 26 Aug 2013 19:14:04 +0530
From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
To: Joonsoo Kim <iamjoonsoo.kim@....com>, Andrew Morton <akpm@...ux-foundation.org>
Cc: Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>,
	Michal Hocko <mhocko@...e.cz>, KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Hugh Dickins <hughd@...gle.com>, Davidlohr Bueso <davidlohr.bueso@...com>,
	David Gibson <david@...son.dropbear.id.au>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, Joonsoo Kim <js1304@...il.com>,
	Wanpeng Li <liwanp@...ux.vnet.ibm.com>, Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	Hillf Danton <dhillf@...il.com>, Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH v2 16/20] mm, hugetlb: move down outside_reserve check

Joonsoo Kim <iamjoonsoo.kim@....com> writes:

> Just move the outside_reserve check down and don't call
> vma_needs_reservation() when outside_reserve is true. It is a slightly
> optimized implementation.
>
> This makes the code more readable.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>

I guess this addresses the comment I had on the previous patch.

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@...ux.vnet.ibm.com>

>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 24de2ca..2372f75 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2499,7 +2499,7 @@ static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
> 	struct page *old_page, *new_page;
> 	int outside_reserve = 0;
> 	long chg;
> -	bool use_reserve;
> +	bool use_reserve = false;
> 	unsigned long mmun_start;	/* For mmu_notifiers */
> 	unsigned long mmun_end;		/* For mmu_notifiers */
>
> @@ -2514,6 +2514,11 @@ retry_avoidcopy:
> 		return 0;
> 	}
>
> +	page_cache_get(old_page);
> +
> +	/* Drop page_table_lock as buddy allocator may be called */
> +	spin_unlock(&mm->page_table_lock);
> +
> 	/*
> 	 * If the process that created a MAP_PRIVATE mapping is about to
> 	 * perform a COW due to a shared page count, attempt to satisfy
> @@ -2527,19 +2532,17 @@ retry_avoidcopy:
> 	    old_page != pagecache_page)
> 		outside_reserve = 1;
>
> -	page_cache_get(old_page);
> -
> -	/* Drop page_table_lock as buddy allocator may be called */
> -	spin_unlock(&mm->page_table_lock);
> -	chg = vma_needs_reservation(h, vma, address);
> -	if (chg == -ENOMEM) {
> -		page_cache_release(old_page);
> +	if (!outside_reserve) {
> +		chg = vma_needs_reservation(h, vma, address);
> +		if (chg == -ENOMEM) {
> +			page_cache_release(old_page);
>
> -		/* Caller expects lock to be held */
> -		spin_lock(&mm->page_table_lock);
> -		return VM_FAULT_OOM;
> +			/* Caller expects lock to be held */
> +			spin_lock(&mm->page_table_lock);
> +			return VM_FAULT_OOM;
> +		}
> +		use_reserve = !chg;
> 	}
> -	use_reserve = !chg && !outside_reserve;
>
> 	new_page = alloc_huge_page(vma, address, use_reserve);
>
> --
> 1.7.9.5