Message-ID: <9a7600e2-044a-50ca-acde-bf647932c751@redhat.com>
Date: Fri, 2 Oct 2020 09:50:02 +0200
From: David Hildenbrand <david@...hat.com>
To: Michal Hocko <mhocko@...e.com>, Zi Yan <ziy@...dia.com>
Cc: linux-mm@...ck.org,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Roman Gushchin <guro@...com>, Rik van Riel <riel@...riel.com>,
Matthew Wilcox <willy@...radead.org>,
Shakeel Butt <shakeelb@...gle.com>,
Yang Shi <shy828301@...il.com>,
Jason Gunthorpe <jgg@...dia.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
William Kucharski <william.kucharski@...cle.com>,
Andrea Arcangeli <aarcange@...hat.com>,
John Hubbard <jhubbard@...dia.com>,
David Nellans <dnellans@...dia.com>,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2 00/30] 1GB PUD THP support on x86_64
>>> - huge page sizes controllable by the userspace?
>>
>> It might be good to allow advanced users to choose the page sizes, so they
>> have better control of their applications.
>
> Could you elaborate more? Those advanced users can use hugetlb, right?
> They get a very good control over page size and pool preallocation etc.
> So they can get what they need - assuming there is enough memory.
>

I am still not convinced that 1G THP (TGP :) ) are really what we want
to support. I can understand that there are some use cases that might
benefit from it, especially:

"I want a lot of memory, give me memory in any granularity you have, I
absolutely don't care - but of course, more TGP might be good for
performance." Say, you want a 5GB region, but only have a single 1GB
hugepage lying around. hugetlbfs allocation will fail.
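
To make that scenario concrete, here is a minimal userspace sketch (my
illustration, not part of this series) of the two allocation paths: the
hugetlb mapping needs five free 1GB pages in the preallocated pool and
fails outright otherwise, while an anonymous mapping with MADV_HUGEPAGE
just takes whatever granularity the kernel can supply.

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB (30 << 26)  /* log2(1GB) << MAP_HUGE_SHIFT, as in linux/mman.h */
#endif

int main(void)
{
        size_t len = 5UL << 30;  /* 5GB region */

        /* hugetlbfs path: all-or-nothing, reserved from the pool at mmap() time */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
                       -1, 0);
        if (p == MAP_FAILED) {
                perror("mmap(MAP_HUGETLB | MAP_HUGE_1GB)");

                /* THP path: plain anonymous memory; the kernel backs it with
                 * whatever it has (4k/2M today, 1G with this series), with
                 * no guarantees about the granularity. */
                p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED) {
                        perror("mmap(anonymous)");
                        return 1;
                }
                madvise(p, len, MADV_HUGEPAGE);
        }
        return 0;
}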

But then, do we really want to optimize for such (very special?) use
cases via "58 files changed, 2396 insertions(+), 460 deletions(-)"?

I think gigantic pages are a scarce resource. Only selected applications
*really* depend on them and benefit from them. Let these special
applications handle it explicitly.

Can we have a summary of use cases that would really benefit from this
change?
--
Thanks,
David / dhildenb