Date:	Thu, 23 Jul 2015 15:54:43 -0700 (PDT)
From:	David Rientjes <rientjes@...gle.com>
To:	Jörn Engel <joern@...estorage.com>
cc:	Spencer Baugh <sbaugh@...ern.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	Davidlohr Bueso <dave@...olabs.net>,
	Mike Kravetz <mike.kravetz@...cle.com>,
	Luiz Capitulino <lcapitulino@...hat.com>,
	"open list:MEMORY MANAGEMENT" <linux-mm@...ck.org>,
	open list <linux-kernel@...r.kernel.org>,
	Spencer Baugh <Spencer.baugh@...estorage.com>,
	Joern Engel <joern@...fs.org>
Subject: Re: [PATCH] hugetlb: cond_resched for set_max_huge_pages and
 follow_hugetlb_page

On Thu, 23 Jul 2015, Jörn Engel wrote:

> > This is wrong, you'd want to do any cond_resched() before the page 
> > allocation to avoid racing with an update to h->nr_huge_pages or 
> > h->surplus_huge_pages while hugetlb_lock was dropped that would result in 
> > the page having been uselessly allocated.
> 
> There are three options.  Either
> 	/* some allocation */
> 	cond_resched();
> or
> 	cond_resched();
> 	/* some allocation */
> or
> 	if (cond_resched()) {
> 		spin_lock(&hugetlb_lock);
> 		continue;
> 	}
> 	/* some allocation */
> 
> I think you want the second option instead of the first.  That way we
> have a little less memory allocation for the time we are scheduled out.
> Sure, we can do that.  It probably doesn't make a big difference either
> way, but why not.
> 

The loop drops the lock only to do the allocation, and it needs to 
compare against the user-written number of hugepages to decide whether to 
allocate at all.

What we don't want is to allocate, reschedule, and only then check whether 
the allocation was actually needed.  That's what your patch does, because 
it races with persistent_huge_pages().  It's probably the worst place to 
put the cond_resched().

Rather, what you want to do is check whether you need to allocate, 
reschedule if needed (and, if you did reschedule, re-check), and only then 
allocate.

> If you are asking for the third option, I would rather avoid that.  It
> makes the code more complex and doesn't change the fact that we have a
> race and had better be able to handle the race.  The code size growth
> will likely cost us more performance than we would ever gain.  nr_huge_pages
> tends to get updated once per system boot.
> 

Your third option is nonsensical: you didn't save whether you took the 
lock, so you can't reliably unlock it, and you cannot hold a spinlock 
while allocating in this context.
