Date: Tue, 29 Nov 2016 07:07:34 +0000
From: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
To: Anshuman Khandual <khandual@...ux.vnet.ibm.com>
CC: "linux-mm@...ck.org" <linux-mm@...ck.org>,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
	Hugh Dickins <hughd@...gle.com>,
	"Andrew Morton" <akpm@...ux-foundation.org>,
	Dave Hansen <dave.hansen@...el.com>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Mel Gorman <mgorman@...hsingularity.net>,
	Michal Hocko <mhocko@...nel.org>,
	"Vlastimil Babka" <vbabka@...e.cz>,
	Pavel Emelyanov <xemul@...allels.com>,
	Zi Yan <zi.yan@...rutgers.edu>,
	Balbir Singh <bsingharora@...il.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"Naoya Horiguchi" <nao.horiguchi@...il.com>
Subject: Re: [PATCH v2 10/12] mm: mempolicy: mbind and migrate_pages support thp migration

On Fri, Nov 25, 2016 at 05:57:20PM +0530, Anshuman Khandual wrote:
> On 11/08/2016 05:01 AM, Naoya Horiguchi wrote:
...
> > @@ -497,30 +541,15 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
> >  	struct page *page;
> >  	struct queue_pages *qp = walk->private;
> >  	unsigned long flags = qp->flags;
> > -	int nid, ret;
> > +	int ret;
> >  	pte_t *pte;
> >  	spinlock_t *ptl;
> >
> > -	if (pmd_trans_huge(*pmd)) {
> > -		ptl = pmd_lock(walk->mm, pmd);
> > -		if (pmd_trans_huge(*pmd)) {
> > -			page = pmd_page(*pmd);
> > -			if (is_huge_zero_page(page)) {
> > -				spin_unlock(ptl);
> > -				__split_huge_pmd(vma, pmd, addr, false, NULL);
> > -			} else {
> > -				get_page(page);
> > -				spin_unlock(ptl);
> > -				lock_page(page);
> > -				ret = split_huge_page(page);
> > -				unlock_page(page);
> > -				put_page(page);
> > -				if (ret)
> > -					return 0;
> > -			}
> > -		} else {
> > -			spin_unlock(ptl);
> > -		}
> > +	ptl = pmd_trans_huge_lock(pmd, vma);
> > +	if (ptl) {
> > +		ret = queue_pages_pmd(pmd, ptl, addr, end, walk);
> > +		if (ret)
> > +			return 0;
> >  	}
>
> I wonder if we should introduce pte_entry function along with pmd_entry
> function as we are first looking for trans huge PMDs either for direct
> addition into the migration list or splitting it before looking for PTEs.

Most pagewalk users don't define pte_entry for performance reasons
(to avoid the overhead of PTRS_PER_PMD function calls).
But that could be a nice cleanup if we have a workaround.

Thanks,
Naoya Horiguchi
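For readers following along: the hunk above calls queue_pages_pmd() without
showing its body. Below is a sketch of a plausible shape for that helper, not
the patch's actual code. The two splitting paths are folded in from the lines
the hunk removes; the thp_migration_supported() check and the queue-the-whole-
THP branch are assumptions inferred from the series' stated purpose. A nonzero
return tells the caller to skip the per-PTE scan for this pmd.

/*
 * Sketch only -- a plausible queue_pages_pmd(), reconstructed from the
 * code removed in the hunk above. thp_migration_supported() and the
 * migration-queueing branch are assumptions based on the series' goal
 * of migrating THPs without splitting them. Called with ptl held.
 * Returns nonzero when the caller should skip the per-PTE scan.
 */
static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
			   unsigned long end, struct mm_walk *walk)
{
	struct queue_pages *qp = walk->private;
	struct page *page = pmd_page(*pmd);
	int ret;

	if (is_huge_zero_page(page)) {
		/* Split the pmd mapping; the caller then scans the PTEs. */
		spin_unlock(ptl);
		__split_huge_pmd(walk->vma, pmd, addr, false, NULL);
		return 0;
	}

	if (!thp_migration_supported()) {
		/* Old behavior: split the THP so the per-PTE scan can
		 * queue the resulting base pages individually. */
		get_page(page);
		spin_unlock(ptl);
		lock_page(page);
		ret = split_huge_page(page);
		unlock_page(page);
		put_page(page);
		return ret ? 1 : 0;	/* bail out if the split failed */
	}

	/* THP migration available: queue the huge page as one unit. */
	if (qp->flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
		migrate_page_add(page, qp->pagelist, qp->flags);
	spin_unlock(ptl);
	return 1;
}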
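On the pte_entry question: here is a minimal sketch of the two pagewalk
styles being compared, written against the struct mm_walk API as it stood
around v4.9. The sketch_* callbacks and the present-page counter are
hypothetical illustrations, not kernel code.

/*
 * Hypothetical example: the two ways to consume walk_page_range(),
 * shown as a trivial "count present pages" walk (~v4.9 API).
 */
#include <linux/mm.h>

/* Style 1: pte_entry -- the walker makes one indirect call per PTE. */
static int sketch_pte_entry(pte_t *pte, unsigned long addr,
			    unsigned long next, struct mm_walk *walk)
{
	unsigned long *present = walk->private;

	if (pte_present(*pte))
		(*present)++;
	return 0;
}

/* Style 2: pmd_entry -- one indirect call per pmd; the callback
 * iterates the PTEs itself, as queue_pages_pte_range() does. */
static int sketch_pmd_entry(pmd_t *pmd, unsigned long addr,
			    unsigned long end, struct mm_walk *walk)
{
	unsigned long *present = walk->private;
	spinlock_t *ptl;
	pte_t *pte;

	if (pmd_trans_huge(*pmd))	/* huge pmds need handling first, */
		return 0;		/* as in the hunk above; sketch only */

	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
	for (; addr != end; pte++, addr += PAGE_SIZE)
		if (pte_present(*pte))
			(*present)++;
	pte_unmap_unlock(pte - 1, ptl);
	return 0;
}

static unsigned long count_present(struct mm_struct *mm,
				   unsigned long start, unsigned long end)
{
	unsigned long present = 0;
	struct mm_walk walk = {
		.pmd_entry	= sketch_pmd_entry,	/* style 2 */
		.mm		= mm,
		.private	= &present,
	};

	walk_page_range(start, end, &walk);
	return present;
}

Style 1 costs one indirect call per PTE (the overhead Naoya mentions),
while style 2 makes one call per pmd and keeps the PTE loop direct. The
pmd_entry callback is also the natural place to check for a trans-huge
pmd before deciding whether to descend, which is why
queue_pages_pte_range() takes that form.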