Message-ID: <35bc8b8c-5c31-1006-4b0c-5ad997d3ae90@redhat.com>
Date: Mon, 5 Oct 2020 19:30:22 +0200
From: David Hildenbrand <david@...hat.com>
To: Zi Yan <ziy@...dia.com>, Roman Gushchin <guro@...com>
Cc: Michal Hocko <mhocko@...e.com>, linux-mm@...ck.org,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Rik van Riel <riel@...riel.com>,
Matthew Wilcox <willy@...radead.org>,
Shakeel Butt <shakeelb@...gle.com>,
Yang Shi <shy828301@...il.com>,
Jason Gunthorpe <jgg@...dia.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
William Kucharski <william.kucharski@...cle.com>,
Andrea Arcangeli <aarcange@...hat.com>,
John Hubbard <jhubbard@...dia.com>,
David Nellans <dnellans@...dia.com>,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2 00/30] 1GB PUD THP support on x86_64
>> I think gigantic pages are a sparse resource. Only selected applications
>> *really* depend on them and benefit from them. Let these special
>> applications handle it explicitly.
>>
>> Can we have a summary of use cases that would really benefit from this
>> change?
>
> For large machine learning applications, 1GB pages give good performance boost[2].
> NVIDIA DGX A100 box now has 1TB memory, which means 1GB pages are not
> that sparse in GPU-equipped infrastructure[3].
Well, they *are* sparse, and there are absolutely no guarantees until you
reserve them via CMA, which is just plain ugly IMHO.
In the same setup, you can most probably use hugetlbfs and achieve a
similar result. Not that it is very user-friendly.
--
Thanks,
David / dhildenb