Date:	Tue, 20 Oct 2015 15:19:41 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Dave Hansen <dave@...1.net>
Cc:	n-horiguchi@...jp.nec.com, mike.kravetz@...cle.com,
	hillf.zj@...baba-inc.com, rientjes@...gle.com, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, dave.hansen@...ux.intel.com
Subject: Re: [PATCH] mm, hugetlb: use memory policy when available

On Tue, 20 Oct 2015 12:53:17 -0700 Dave Hansen <dave@...1.net> wrote:

> 
> From: Dave Hansen <dave.hansen@...ux.intel.com>
> 
> I have a hugetlbfs user who never explicitly allocates huge pages
> via 'nr_hugepages'.  They only set 'nr_overcommit_hugepages' and then let
> the pages be allocated from the buddy allocator at fault time.
> 
> This works, but they noticed that mbind() was not doing them any good and
> the pages were being allocated without respect for the policy they
> specified.
> 
> The code in question is this:
> 
> > struct page *alloc_huge_page(struct vm_area_struct *vma,
> ...
> >         page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
> >         if (!page) {
> >                 page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
> 
> dequeue_huge_page_vma() is smart and will respect the VMA's memory policy.
> But, it only grabs _existing_ huge pages from the huge page pool.  If the
> pool is empty, we fall back to alloc_buddy_huge_page() which obviously
> can't do anything with the VMA's policy because it isn't even passed the
> VMA.
> 
> Almost everybody preallocates huge pages.  That's probably why nobody has
> ever noticed this.  Looking back at the git history, I don't think this
> has _ever_ worked since alloc_buddy_huge_page() was introduced in 7893d1d5,
> 8 years ago.
> 
> The fix is to pass vma/addr down into the places where we actually call
> into the buddy allocator.  It's fairly straightforward plumbing.  This has
> been lightly tested.
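
A minimal sketch of the shape of that plumbing, for readers following along.
The helper name is hypothetical and alloc_pages_vma()'s exact signature varies
between kernel versions, so treat this as an illustration of the idea rather
than the patch itself:

/*
 * Illustrative only: alloc_buddy_huge_page_vma() is a made-up name and
 * the alloc_pages_vma() signature differs across kernel versions.
 */
static struct page *alloc_buddy_huge_page_vma(struct hstate *h,
                struct vm_area_struct *vma, unsigned long addr)
{
        gfp_t gfp = htlb_alloc_mask(h) | __GFP_COMP | __GFP_NOWARN;
        unsigned int order = huge_page_order(h);

        if (vma)
                /* policy-aware path: honours mbind()/mempolicy via the VMA */
                return alloc_pages_vma(gfp, order, vma, addr, numa_node_id());

        /* no VMA to consult: fall back to the old, policy-blind allocation */
        return alloc_pages(gfp, order);
}

alloc_huge_page() would then hand its vma/addr down to this helper instead
of calling alloc_buddy_huge_page(h, NUMA_NO_NODE) as in the snippet above.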

huh.  Fair enough.

>  b/mm/hugetlb.c |  111 ++++++++++++++++++++++++++++++++++++++++++++++++++-------

Is it worth deporking this for the CONFIG_NUMA=n case?
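
(For readers unfamiliar with the shorthand: "deporking" here means compiling
the extra plumbing away when CONFIG_NUMA=n, where no mempolicy exists to
honour.  Sticking with the hypothetical helper sketched above, that could be
as little as:)

static struct page *alloc_buddy_huge_page_vma(struct hstate *h,
                struct vm_area_struct *vma, unsigned long addr)
{
        gfp_t gfp = htlb_alloc_mask(h) | __GFP_COMP | __GFP_NOWARN;

#ifdef CONFIG_NUMA
        /* only NUMA builds have a VMA mempolicy worth consulting */
        if (vma)
                return alloc_pages_vma(gfp, huge_page_order(h), vma, addr,
                                       numa_node_id());
#endif
        return alloc_pages(gfp, huge_page_order(h));
}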

