Date:	Fri, 24 Jul 2015 12:49:14 -0700 (PDT)
From:	David Rientjes <rientjes@...gle.com>
To:	Jörn Engel <joern@...estorage.com>
cc:	Spencer Baugh <sbaugh@...ern.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	Davidlohr Bueso <dave@...olabs.net>,
	Mike Kravetz <mike.kravetz@...cle.com>,
	Luiz Capitulino <lcapitulino@...hat.com>,
	"open list:MEMORY MANAGEMENT" <linux-mm@...ck.org>,
	open list <linux-kernel@...r.kernel.org>,
	Spencer Baugh <Spencer.baugh@...estorage.com>,
	Joern Engel <joern@...fs.org>
Subject: Re: [PATCH] hugetlb: cond_resched for set_max_huge_pages and
 follow_hugetlb_page

On Thu, 23 Jul 2015, Jörn Engel wrote:

> > The loop is dropping the lock simply to do the allocation and it needs to 
> > compare with the user-written number of hugepages to allocate.
> 
> And at this point the existing code is racy.  Page allocation might
> block for minutes trying to free some memory.  A cond_resched doesn't
> change that - it only increases the odds of hitting the race window.
> 

The existing code has always been racy, and it explicitly admits as much; 
the problem is that your patch makes the race window larger.

> Are we looking at the same code?  Mine looks like this:
> 	while (count > persistent_huge_pages(h)) {
> 		/*
> 		 * If this allocation races such that we no longer need the
> 		 * page, free_huge_page will handle it by freeing the page
> 		 * and reducing the surplus.
> 		 */
> 		spin_unlock(&hugetlb_lock);
> 		if (hstate_is_gigantic(h))
> 			ret = alloc_fresh_gigantic_page(h, nodes_allowed);
> 		else
> 			ret = alloc_fresh_huge_page(h, nodes_allowed);
> 		spin_lock(&hugetlb_lock);
> 		if (!ret)
> 			goto out;
> 
> 		/* Bail for signals. Probably ctrl-c from user */
> 		if (signal_pending(current))
> 			goto out;
> 	}
> 

I don't see the cond_resched() you propose to add, but the need for it is 
obvious with a large user-written nr_hugepages in the above loop.

The suggestion is to check the conditional, reschedule if needed (and if 
so, recheck the conditional), and then allocate.
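
Concretely, something along these lines is what I have in mind (a sketch 
only, untested; cond_resched_lock() is just one way to express "drop the 
lock if needed, reschedule, retake it", and it returns nonzero when it 
rescheduled so the conditional can be rechecked - whether your patch uses 
that exact helper is my assumption):

	while (count > persistent_huge_pages(h)) {
		/*
		 * Reschedule before committing to the allocation; if we
		 * slept, the user-written count may already be satisfied,
		 * so go back and recheck the loop conditional.
		 */
		if (cond_resched_lock(&hugetlb_lock))
			continue;

		spin_unlock(&hugetlb_lock);
		if (hstate_is_gigantic(h))
			ret = alloc_fresh_gigantic_page(h, nodes_allowed);
		else
			ret = alloc_fresh_huge_page(h, nodes_allowed);
		spin_lock(&hugetlb_lock);
		if (!ret)
			goto out;

		/* Bail for signals. Probably ctrl-c from user */
		if (signal_pending(current))
			goto out;
	}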

Your third option looks fine and is the best place to do the 
cond_resched().  I was looking at your second option when I responded and 
compared it to the first.  We don't want to do cond_resched() immediately 
before or after the allocation: either way the net result is the same, we 
may sleep and then pointlessly allocate a hugepage that is no longer 
needed, and each hugepage allocation can be very heavyweight.
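
For contrast, this is the placement to avoid (again only a sketch): the 
conditional has already been checked, so sleeping here cannot save us 
from an allocation that is no longer wanted:

	spin_unlock(&hugetlb_lock);
	/*
	 * We may sleep here for a long time, and by the time we run
	 * again the page may no longer be needed, but we go on to do
	 * the heavyweight allocation anyway.
	 */
	cond_resched();
	if (hstate_is_gigantic(h))
		ret = alloc_fresh_gigantic_page(h, nodes_allowed);
	else
		ret = alloc_fresh_huge_page(h, nodes_allowed);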

So I agree with your third option from the previous email.

You may also want to include the actual text of the warning from the 
kernel log in your commit message.  When people encounter this, they will 
probably grep the kernel logs for some keywords to see whether it was 
already fixed, and I fear your current commit message may allow it to be 
missed.
