Date:	Tue, 18 Feb 2014 14:16:42 -0800 (PST)
From:	David Rientjes <rientjes@...gle.com>
To:	Marcelo Tosatti <mtosatti@...hat.com>
cc:	Luiz Capitulino <lcapitulino@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mgorman@...e.de>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Andi Kleen <andi@...stfloor.org>,
	Rik van Riel <riel@...hat.com>, davidlohr@...com,
	isimatu.yasuaki@...fujitsu.com, yinghai@...nel.org,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 4/4] hugetlb: add hugepages_node= command-line option

On Tue, 18 Feb 2014, Marcelo Tosatti wrote:

> > Lacking from your entire patchset is a specific example of what you want 
> > to do.  So I think we're all guessing what exactly your usecase is and we 
> > aren't getting any help.  Are you really suggesting that a customer wants 
> > to allocate 4 1GB hugepages on node 0, 12 2MB hugepages on node 0, 6 1GB 
> > hugepages on node 1, 24 2MB hugepages on node 1, 2 1GB hugepages on node 
> > 2, 100 2MB hugepages on node 3, etc?  Please.
> 
> The customer has a 32GB machine. He wants 8 1GB pages on node 0 for his
> performance-critical application (a KVM guest), with other guests,
> pagecache, etc. using the remaining 26GB of memory.
> 

Wow, is that it?  (This still doesn't present a clear picture since we 
don't know how much memory is on node 0, though.)

So Luiz's example of setting up different-sized hugepages on three 
different nodes, which required nine kernel command line parameters, 
doesn't even have a legitimate use case today.

Back to the original comment on this patchset, forgetting all this 
parameter parsing stuff: if you had the ability to free 1GB pages at 
runtime, then your problem would already be solved, correct?  If that 
32GB machine has two nodes, then "hugepagesz=1G hugepages=16" boots just 
fine (the 16 pages get interleaved, 8 per node) and an init script then 
frees the 8 1GB pages on node 1.
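
(To sketch what that init script would look like, assuming freeing 
gigantic pages at runtime actually worked: the per-node sysfs knob 
already exists for the other hugepage sizes, so it would be little more 
than

	#!/bin/sh
	# drop the 8 1GB pages the boot-time interleave left on node 1
	echo 0 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages

with nothing on the kernel command line beyond hugepagesz=1G 
hugepages=16.)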

It gets trickier if there are four nodes and each node is 8GB.  Then 
you'd oom the machine if you did "hugepagesz=1G hugepages=32".  You 
could actually do "hugepagesz=1G hugepages=29" and free the hugepages 
everywhere except node 0, but I feel like movablecore= would be a 
better option.
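
(Assuming the boot-time allocator round-robins starting at node 0, those 
29 pages would land roughly 8/7/7/7, so node 0 keeps its 8 and the init 
script just clears the other nodes, something like

	for n in 1 2 3; do
		echo 0 > /sys/devices/system/node/node$n/hugepages/hugepages-1048576kB/nr_hugepages
	done

again assuming gigantic pages were freeable at all.)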

So why not just work on making 1GB pages dynamically allocatable and 
freeable at runtime?  It feels like it would be a much more heavily used 
feature than a special command line parameter for a single customer.
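
(Interface-wise that wouldn't even be anything new: just make the 
existing per-node nr_hugepages knob work for the gigantic hstate, and 
the customer's setup becomes an ordinary runtime operation, e.g.

	# reserve 8 1GB pages on node 0 after boot, no boot parameters needed
	echo 8 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages

Purely hypothetical until someone implements it, of course.)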

> > If that's actually the usecase then I'll renew my objection to the 
> > entire patchset and say you want to add the ability to dynamically 
> > allocate 1GB pages and free them at runtime early in initscripts.  If 
> > something is going to be added to init code in the kernel then it 
> > better be trivial since all this can be duplicated in userspace if you 
> > really want to be fussy about it.
> 
> Not sure what the point is here.  The command line interface addition
> being proposed is simple, is it not?
> 

You can't specify an interleave behavior with Luiz's command line 
interface, so now we'd have two different interfaces for allocating 
hugepages depending on whether you're specifying a node or not.  
It's "hugepagesz=1G hugepages=16" vs "hugepages_node=1:16:1G" (and I'd 
have to look at previous messages in this thread to see whether that 
means 16 1GB pages on node 1 or one 1GB page on node 16).
