Message-ID: <20190213133827.GN4525@dhcp22.suse.cz>
Date: Wed, 13 Feb 2019 14:38:27 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Anshuman Khandual <anshuman.khandual@....com>
Cc: "Kirill A. Shutemov" <kirill@...temov.name>,
lsf-pc@...ts.linux-foundation.org,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [LSF/MM TOPIC] Non standard size THP
On Wed 13-02-19 18:20:03, Anshuman Khandual wrote:
> On 02/12/2019 02:03 PM, Kirill A. Shutemov wrote:
> > Honestly, I'm very skeptical about the idea. It took a lot of time to
> > stabilize THP for a single page size, equal to the PMD level, but this
> > looks like a new can of worms. :P
>
> I understand your concern here, but HW providing additional TLB sizes beyond
> the standard page-table levels (PMD/PUD/PGD) can help improve performance
> when the buddy allocator is already too fragmented to provide higher-order
> pages. PUD THP file mappings are already supported for DAX, and PUD THP anon
> mappings might be supported in the near future (the main challenge is that
> allocating a HPAGE_PUD_SIZE huge page at runtime will be much more
> difficult). Sizes around PMD, like HPAGE_CONT_PMD_SIZE or
> HPAGE_CONT_PTE_SIZE, have a better chance as future non-PMD-level anon
> mappings than PUD-size anon mapping support in THP.
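
(For context, on arm64 with a 4K granule the contiguous-bit spans are 16 PTEs
and 16 PMDs; a rough sketch of the sizes involved, where the HPAGE_CONT_*
names are taken from your proposal rather than from existing macros:)

/*
 * Illustrative only: arm64, 4K granule. The CONT_* factors differ for
 * 16K/64K granules.
 */
#define CONT_PTES		16			/* 16 * 4KB = 64KB  */
#define CONT_PMDS		16			/* 16 * 2MB = 32MB  */
#define HPAGE_CONT_PTE_SIZE	(CONT_PTES * PAGE_SIZE)	/* proposed, not upstream */
#define HPAGE_CONT_PMD_SIZE	(CONT_PMDS * PMD_SIZE)	/* proposed, not upstream */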
I do not think our page allocator is really ready to provide >PMD huge
pages. So even if we deal with all the nasty things wrt locking and page
table handling, the crux becomes the allocation side. The current
CMA/contig allocator is anything but useful for THP. It can barely
handle the hugetlb cases, which are mostly preallocation based.
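
To put numbers on that (x86-64 with 4K pages; the helper below is only an
illustration, not existing code): get_order(PUD_SIZE) is 18 while the buddy
allocator tops out at order MAX_ORDER-1 = 10, so anything PUD sized has to
come from alloc_contig_range()/CMA, the same migration-based path hugetlb
uses for gigantic pages:

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical helper, for illustration only. */
static struct page *try_alloc_pud_page(gfp_t gfp)
{
	unsigned int order = get_order(PUD_SIZE);	/* 18 with 4K pages */

	/* The buddy allocator only serves order < MAX_ORDER (11). */
	if (order < MAX_ORDER)
		return alloc_pages(gfp, order);

	/*
	 * Everything larger needs alloc_contig_range()/CMA, i.e. the slow,
	 * preallocation-style path - not something a THP fault can rely on.
	 */
	return NULL;
}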
Besides that, is there any real-world use case driving this, or is it
merely "this is possible so let's just do it"?
--
Michal Hocko
SUSE Labs