Date: Mon, 26 Aug 2013 18:39:35 +0530
From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
To: Joonsoo Kim <iamjoonsoo.kim@....com>, Andrew Morton <akpm@...ux-foundation.org>
Cc: Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>,
	Michal Hocko <mhocko@...e.cz>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Hugh Dickins <hughd@...gle.com>, Davidlohr Bueso <davidlohr.bueso@...com>,
	David Gibson <david@...son.dropbear.id.au>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, Joonsoo Kim <js1304@...il.com>,
	Wanpeng Li <liwanp@...ux.vnet.ibm.com>,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>, Hillf Danton <dhillf@...il.com>
Subject: Re: [PATCH v2 13/20] mm, hugetlb: unify chg and avoid_reserve to use_reserve

Joonsoo Kim <iamjoonsoo.kim@....com> writes:

> Currently, we have two variables to represent whether we can use a reserved
> page or not: chg and avoid_reserve. Aggregating these, we can have cleaner
> code. This makes no functional difference.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 22ceb04..8dff972 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -531,8 +531,7 @@ static struct page *dequeue_huge_page_node(struct hstate *h, int nid)
>
>  static struct page *dequeue_huge_page_vma(struct hstate *h,
>  				struct vm_area_struct *vma,
> -				unsigned long address, int avoid_reserve,
> -				long chg)
> +				unsigned long address, bool use_reserve)
>  {
>  	struct page *page = NULL;
>  	struct mempolicy *mpol;
> @@ -546,12 +545,10 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
>  	 * A child process with MAP_PRIVATE mappings created by their parent
>  	 * have no page reserves. This check ensures that reservations are
>  	 * not "stolen". The child may still get SIGKILLed
> +	 * Or, when parent process do COW, we cannot use reserved page.
> +	 * In this case, ensure enough pages are in the pool.
>  	 */
> -	if (chg && h->free_huge_pages - h->resv_huge_pages == 0)
> -		return NULL;

This hunk would be much easier to review if you changed it to:

	if (!vma_has_reserves(vma) &&
			h->free_huge_pages - h->resv_huge_pages == 0)
		goto err;

i.e., !vma_has_reserves(vma) == !use_reserve. So maybe a patch
rearrangement would help? But nevertheless:
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@...ux.vnet.ibm.com>

> -
> -	/* If reserves cannot be used, ensure enough pages are in the pool */
> -	if (avoid_reserve && h->free_huge_pages - h->resv_huge_pages == 0)
> +	if (!use_reserve && h->free_huge_pages - h->resv_huge_pages == 0)
>  		return NULL;
>
>  retry_cpuset:
> @@ -564,9 +561,7 @@ retry_cpuset:
>  		if (cpuset_zone_allowed_softwall(zone, htlb_alloc_mask)) {
>  			page = dequeue_huge_page_node(h, zone_to_nid(zone));
>  			if (page) {
> -				if (avoid_reserve)
> -					break;
> -				if (chg)
> +				if (!use_reserve)
>  					break;
>
>  				SetPagePrivate(page);
> @@ -1121,6 +1116,7 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
>  	struct hstate *h = hstate_vma(vma);
>  	struct page *page;
>  	long chg;
> +	bool use_reserve;
>  	int ret, idx;
>  	struct hugetlb_cgroup *h_cg;
>
> @@ -1136,18 +1132,19 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
>  	chg = vma_needs_reservation(h, vma, addr);
>  	if (chg < 0)
>  		return ERR_PTR(-ENOMEM);
> -	if (chg || avoid_reserve)
> +	use_reserve = (!chg && !avoid_reserve);
> +	if (!use_reserve)
>  		if (hugepage_subpool_get_pages(spool, 1))
>  			return ERR_PTR(-ENOSPC);
>
>  	ret = hugetlb_cgroup_charge_cgroup(idx, pages_per_huge_page(h), &h_cg);
>  	if (ret) {
> -		if (chg || avoid_reserve)
> +		if (!use_reserve)
>  			hugepage_subpool_put_pages(spool, 1);
>  		return ERR_PTR(-ENOSPC);
>  	}
>  	spin_lock(&hugetlb_lock);
> -	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, chg);
> +	page = dequeue_huge_page_vma(h, vma, addr, use_reserve);
>  	if (!page) {
>  		spin_unlock(&hugetlb_lock);
>  		page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
> @@ -1155,7 +1152,7 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
>  			hugetlb_cgroup_uncharge_cgroup(idx,
>  						       pages_per_huge_page(h),
>  						       h_cg);
> -			if (chg || avoid_reserve)
> +			if (!use_reserve)
>  				hugepage_subpool_put_pages(spool, 1);
>  			return ERR_PTR(-ENOSPC);
>  		}
> --
> 1.7.9.5
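
To see why the patch is a pure refactor, here is a minimal userspace C
sketch (not kernel code; struct hstate_model and its counter values are
hypothetical stand-ins for the hstate fields in the diff). It enumerates
all four (chg, avoid_reserve) combinations and checks that the old pair
of tests and the new single use_reserve test always agree:

	#include <stdbool.h>
	#include <stdio.h>

	/* Hypothetical stand-in for the hstate counters used in the patch. */
	struct hstate_model {
		long free_huge_pages;
		long resv_huge_pages;
	};

	/* Before the patch: two separate checks with identical bodies. */
	static bool can_dequeue_old(struct hstate_model *h, long chg,
				    int avoid_reserve)
	{
		if (chg && h->free_huge_pages - h->resv_huge_pages == 0)
			return false;
		if (avoid_reserve && h->free_huge_pages - h->resv_huge_pages == 0)
			return false;
		return true;
	}

	/* After the patch: one flag, one check. */
	static bool can_dequeue_new(struct hstate_model *h, bool use_reserve)
	{
		if (!use_reserve && h->free_huge_pages - h->resv_huge_pages == 0)
			return false;
		return true;
	}

	int main(void)
	{
		/* All free pages are reserved, so free - resv == 0. */
		struct hstate_model h = { .free_huge_pages = 4,
					  .resv_huge_pages = 4 };

		for (long chg = 0; chg <= 1; chg++) {
			for (int avoid = 0; avoid <= 1; avoid++) {
				/* The unification introduced by the patch. */
				bool use_reserve = (!chg && !avoid);

				printf("chg=%ld avoid_reserve=%d: old=%d new=%d\n",
				       chg, avoid,
				       can_dequeue_old(&h, chg, avoid),
				       can_dequeue_new(&h, use_reserve));
			}
		}
		return 0;
	}

The old gate fails when (chg || avoid_reserve) and the pool has no
unreserved pages; by De Morgan, !use_reserve == (chg || avoid_reserve),
so the single new check is equivalent.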
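The second half of the patch keys all three subpool-accounting sites in
alloc_huge_page() off the same flag: a page taken from the subpool on
entry must be returned on every failure path. The sketch below (again
userspace-only; the *_stub functions are invented stand-ins for
hugepage_subpool_get_pages()/hugepage_subpool_put_pages(), and the two
bool parameters fold the cgroup-charge and dequeue-or-buddy steps into
single outcomes) models that control flow and checks the pairing:

	#include <stdbool.h>
	#include <stdio.h>

	static long subpool_pages = 8;	/* hypothetical subpool quota */

	static int hugepage_subpool_get_pages_stub(long n)
	{
		if (subpool_pages < n)
			return -1;	/* over quota */
		subpool_pages -= n;
		return 0;
	}

	static void hugepage_subpool_put_pages_stub(long n)
	{
		subpool_pages += n;
	}

	/* Returns 0 on success; mirrors the patched function's control flow. */
	static int alloc_huge_page_model(long chg, bool avoid_reserve,
					 bool cgroup_charge_ok, bool dequeue_ok)
	{
		bool use_reserve = (!chg && !avoid_reserve);

		if (!use_reserve)
			if (hugepage_subpool_get_pages_stub(1))
				return -1;	/* ENOSPC */

		if (!cgroup_charge_ok) {	/* cgroup charge failed */
			if (!use_reserve)
				hugepage_subpool_put_pages_stub(1);
			return -1;
		}

		if (!dequeue_ok) {	/* dequeue and buddy alloc both failed */
			if (!use_reserve)
				hugepage_subpool_put_pages_stub(1);
			return -1;
		}
		return 0;
	}

	int main(void)
	{
		/* A failed allocation must leave the subpool balance intact. */
		long before = subpool_pages;

		alloc_huge_page_model(1, false, true, false);
		printf("balanced: %s\n",
		       subpool_pages == before ? "yes" : "no");
		return 0;
	}

Because every get/put site tests the one use_reserve flag instead of
re-deriving (chg || avoid_reserve), the pairing is visibly consistent,
which is the readability gain the commit message claims.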