Date: Fri, 06 Mar 2015 14:30:37 -0800
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Andi Kleen <andi@...stfloor.org>
CC: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Davidlohr Bueso <dave@...olabs.net>,
	Aneesh Kumar <aneesh.kumar@...ux.vnet.ibm.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH 0/4] hugetlbfs: optionally reserve all fs pages at mount time

On 03/06/2015 02:13 PM, Andi Kleen wrote:
> Mike Kravetz <mike.kravetz@...cle.com> writes:
>
>> hugetlbfs allocates huge pages from the global pool as needed. Even if
>> the global pool contains a sufficient number of pages for the filesystem
>> size at mount time, those global pages could be grabbed for some other
>> use. As a result, filesystem huge page allocations may fail due to lack
>> of pages.
>
> What's the difference of this new option to simply doing
>
> mount -t hugetlbfs none /huge
> echo XXX > /proc/sys/vm/nr_hugepages

In the above sequence, it is still possible for another user/application
to allocate some (or all) of the XXX huge pages.  There is no guarantee
that users of the filesystem will get all XXX pages.

I see the use of the reserve option to be:

# Make sure there are XXX huge pages in the global pool
echo XXX > /proc/sys/vm/nr_hugepages

# Mount/create the filesystem and reserve XXX huge pages
mount -t hugetlbfs -o size=XXX,reserve=XXX none /huge

If the mount is successful, then users of the filesystem know there are
XXX huge pages available for their use.

-- 
Mike Kravetz
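The workflow above can be sketched end to end as a shell fragment. This is a hedged illustration only: the reserve= mount option is what this patch series proposes and is not necessarily present in a given kernel; the concrete page count (64) and a 2 MB default huge page size are assumptions for the example, and the commands require root.

```shell
#!/bin/sh
# Sketch of the reserve-at-mount sequence discussed above.
# Assumes: the proposed "reserve=" hugetlbfs mount option (from this
# patch series), a 2 MB default huge page size, and root privileges.

NR=64                          # example count; the mail's "XXX"

# 1. Size the global huge page pool.
echo "$NR" > /proc/sys/vm/nr_hugepages

# 2. Verify the pool actually grew; the kernel may allocate fewer
#    pages than requested if memory is fragmented.
grep -E 'HugePages_(Total|Free)' /proc/meminfo

# 3. Mount with size= and the proposed reserve= so the filesystem's
#    pages cannot be taken by other huge page users.
#    64 pages * 2 MB = 128 MB.
mkdir -p /huge
mount -t hugetlbfs -o size=128M,reserve=128M none /huge
```

Step 2 is the practical reason the ordering matters: without checking /proc/meminfo, a shortfall in the global pool is only discovered later, when an allocation in the filesystem fails.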