Message-ID: <20150723223651.GH24876@Sligo.logfs.org>
Date:	Thu, 23 Jul 2015 15:36:51 -0700
From:	Jörn Engel <joern@...estorage.com>
To:	David Rientjes <rientjes@...gle.com>
Cc:	Spencer Baugh <sbaugh@...ern.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	Davidlohr Bueso <dave@...olabs.net>,
	Mike Kravetz <mike.kravetz@...cle.com>,
	Luiz Capitulino <lcapitulino@...hat.com>,
	"open list:MEMORY MANAGEMENT" <linux-mm@...ck.org>,
	open list <linux-kernel@...r.kernel.org>,
	Spencer Baugh <Spencer.baugh@...estorage.com>,
	Joern Engel <joern@...fs.org>
Subject: Re: [PATCH] hugetlb: cond_resched for set_max_huge_pages and
 follow_hugetlb_page

On Thu, Jul 23, 2015 at 03:08:58PM -0700, David Rientjes wrote:
> On Thu, 23 Jul 2015, Spencer Baugh wrote:
> > From: Joern Engel <joern@...fs.org>
> > 
> > ~150ms scheduler latency for both observed in the wild.
> > 
> > Signed-off-by: Joern Engel <joern@...fs.org>
> > Signed-off-by: Spencer Baugh <sbaugh@...ern.com>
> > ---
> >  mm/hugetlb.c | 2 ++
> >  1 file changed, 2 insertions(+)
> > 
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index a8c3087..2eb6919 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1836,6 +1836,7 @@ static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count,
> >  			ret = alloc_fresh_gigantic_page(h, nodes_allowed);
> >  		else
> >  			ret = alloc_fresh_huge_page(h, nodes_allowed);
> > +		cond_resched();
> >  		spin_lock(&hugetlb_lock);
> >  		if (!ret)
> >  			goto out;
> 
> This is wrong, you'd want to do any cond_resched() before the page 
> allocation to avoid racing with an update to h->nr_huge_pages or 
> h->surplus_huge_pages while hugetlb_lock was dropped that would result in 
> the page having been uselessly allocated.

There are three options.  Either
	/* some allocation */
	cond_resched();
or
	cond_resched();
	/* some allocation */
or
	if (cond_resched()) {
		spin_lock(&hugetlb_lock);
		continue;
	}
	/* some allocation */

I think you want the second option rather than the first.  That way we
aren't sitting on a freshly allocated page for however long we stay
scheduled out.  Sure, we can do that.  It probably doesn't make a big
difference either way, but why not.

If you are asking for the third option, I would rather avoid it.  It
makes the code more complex and doesn't change the fact that we have a
race and had better be able to handle it.  The code size growth would
likely cost us more performance than we would ever gain.  nr_huge_pages
tends to get updated once per system boot.
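
Concretely, what I would respin for set_max_huge_pages() is something
like the sketch below.  It only shows the placement; the spin_unlock()
and hstate_is_gigantic() lines around the quoted hunk are from my
reading of mm/hugetlb.c and are not part of this patch:

	spin_unlock(&hugetlb_lock);

	/* option two: give up the CPU before allocating, so we are not
	 * sitting on a fresh page while scheduled out */
	cond_resched();

	if (hstate_is_gigantic(h))
		ret = alloc_fresh_gigantic_page(h, nodes_allowed);
	else
		ret = alloc_fresh_huge_page(h, nodes_allowed);
	spin_lock(&hugetlb_lock);
	if (!ret)
		goto out;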

> > @@ -3521,6 +3522,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >  				spin_unlock(ptl);
> >  			ret = hugetlb_fault(mm, vma, vaddr,
> >  				(flags & FOLL_WRITE) ? FAULT_FLAG_WRITE : 0);
> > +			cond_resched();
> >  			if (!(ret & VM_FAULT_ERROR))
> >  				continue;
> >  
> 
> This is almost certainly the wrong placement as well since it's inserted 
> inside a conditional inside a while loop and there's no reason to 
> hugetlb_fault(), schedule, and then check the return value.  You need to 
> insert your cond_resched()'s in legitimate places.

I assume you want the second option here as well.  Am I right?
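
If so, the hunk would become something like this (again just a sketch
of the placement, following the context shown in the quoted diff):

	spin_unlock(ptl);

	/* option two again: reschedule before the potentially slow
	 * fault, not after it, and only once ptl is dropped */
	cond_resched();

	ret = hugetlb_fault(mm, vma, vaddr,
		(flags & FOLL_WRITE) ? FAULT_FLAG_WRITE : 0);
	if (!(ret & VM_FAULT_ERROR))
		continue;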

Jörn

--
Sometimes it pays to stay in bed on Monday, rather than spending the rest
of the week debugging Monday's code.
-- Christopher Thompson
