Date:	Thu, 5 Nov 2015 14:58:38 +0200
From:	"Kirill A. Shutemov" <kirill@...temov.name>
To:	Vladimir Davydov <vdavydov@...tuozzo.com>
Cc:	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Hugh Dickins <hughd@...gle.com>,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	Sasha Levin <sasha.levin@...cle.com>,
	Minchan Kim <minchan@...nel.org>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org
Subject: Re: [PATCH 4/4] mm: prepare page_referenced() and page_idle to new
 THP refcounting

On Thu, Nov 05, 2015 at 03:53:54PM +0300, Vladimir Davydov wrote:
> On Thu, Nov 05, 2015 at 02:36:06PM +0200, Kirill A. Shutemov wrote:
> > On Thu, Nov 05, 2015 at 03:07:26PM +0300, Vladimir Davydov wrote:
> > > @@ -849,30 +836,23 @@ static int page_referenced_one(struct page *page, struct vm_area_struct *vma,
> > >  		if (pmd_page(*pmd) != page)
> > >  			goto unlock_pmd;
> > >  
> > > -		if (vma->vm_flags & VM_LOCKED) {
> > > -			pra->vm_flags |= VM_LOCKED;
> > > -			ret = SWAP_FAIL; /* To break the loop */
> > > -			goto unlock_pmd;
> > > -		}
> > > -
> > > -		if (pmdp_clear_flush_young_notify(vma, address, pmd))
> > > -			referenced++;
> > > -		spin_unlock(ptl);
> > > +		pte = (pte_t *)pmd;
> > 
> > pmd_t and pte_t are not always compatible. We shouldn't pretend they are.
> > And we shouldn't use pte_unmap_unlock() to unlock pmd table.
> 
> Out of curiosity, is it OK that __page_check_address can call
> pte_unmap_unlock on pte returned by huge_pte_offset, which isn't really
> pte, but pmd or pud?

hugetlb is usually implemented on architectures where you can expect some
level of compatibility between page table entries at different levels.

> > What about interface like this (I'm not sure about helper's name):
> > 
> > void page_check_address_transhuge(struct page *page, struct mm_struct *mm,
> >                                    unsigned long address,
> >                                    pmd_t **pmdp, pte_t **ptep,
> > 				   spinlock_t **ptlp);
> > 
> > page_check_address_transhuge(page, mm, address, &pmd, &pte, &ptl);
> > if (pmd) {
> > 	/* handle pmd... */
> > } else if (pte) {
> > 	/* handle pte... */
> > } else {
> > 	return SWAP_AGAIN;
> > }
> > 
> > /* common stuff */
> > 
> > if (pmd)
> > 	spin_unlock(ptl);
> > else 
> > 	pte_unmap_unlock(pte, ptl);
> 
> spin_unlock(ptl);
> if (pte)
> 	pte_unmap(pte);
> 
> would look neater IMO. Other than that, I think it's OK. At least, it
> looks better and less error-prone than duplicating such a huge chunk of
> code.

Okay. Could you prepare the patch?
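
A minimal sketch of how such a helper might look, following the
out-parameter interface proposed above. This is illustrative only, not the
code that eventually landed, and it glosses over the corner cases a real
implementation has to handle (THP split races, tail-page mappings of a
compound page, racy pmd reads):

void page_check_address_transhuge(struct page *page, struct mm_struct *mm,
                                  unsigned long address,
                                  pmd_t **pmdp, pte_t **ptep,
                                  spinlock_t **ptlp)
{
        pgd_t *pgd;
        pud_t *pud;
        pmd_t *pmd;
        pte_t *pte;
        spinlock_t *ptl;

        *pmdp = NULL;
        *ptep = NULL;

        pgd = pgd_offset(mm, address);
        if (!pgd_present(*pgd))
                return;
        pud = pud_offset(pgd, address);
        if (!pud_present(*pud))
                return;
        pmd = pmd_offset(pud, address);

        if (pmd_trans_huge(*pmd)) {
                /* Huge mapping: take the pmd lock and recheck under it. */
                ptl = pmd_lock(mm, pmd);
                if (pmd_trans_huge(*pmd) && pmd_page(*pmd) == page) {
                        *pmdp = pmd;
                        *ptlp = ptl;
                        return;
                }
                spin_unlock(ptl);
                return;
        }

        if (!pmd_present(*pmd))
                return;

        /* Normal mapping: map and lock the pte. */
        pte = pte_offset_map_lock(mm, pmd, address, &ptl);
        if (!pte_present(*pte) || pte_page(*pte) != page) {
                pte_unmap_unlock(pte, ptl);
                return;
        }
        *ptep = pte;
        *ptlp = ptl;
}

The caller would then dispatch on whichever pointer came back non-NULL
and, as suggested above, unlock with spin_unlock(ptl) followed by
pte_unmap(pte) only in the pte case.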

-- 
 Kirill A. Shutemov
