Message-ID: <4078bc2d-4aaf-cd1b-0145-5915e382852f@oracle.com>
Date: Thu, 24 May 2018 10:45:08 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: TSUKADA Koutaro <tsukada@...ade.co.jp>,
Michal Hocko <mhocko@...nel.org>
Cc: Johannes Weiner <hannes@...xchg.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Jonathan Corbet <corbet@....net>,
"Luis R. Rodriguez" <mcgrof@...nel.org>,
Kees Cook <keescook@...omium.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <guro@...com>,
David Rientjes <rientjes@...gle.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Anshuman Khandual <khandual@...ux.vnet.ibm.com>,
Marc-Andre Lureau <marcandre.lureau@...hat.com>,
Punit Agrawal <punit.agrawal@....com>,
Dan Williams <dan.j.williams@...el.com>,
Vlastimil Babka <vbabka@...e.cz>, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, cgroups@...r.kernel.org
Subject: Re: [PATCH v2 0/7] mm: pages for hugetlb's overcommit may be able to
charge to memcg

On 05/23/2018 09:26 PM, TSUKADA Koutaro wrote:
>
> I do not know if it is really a strong use case, but I will explain my
> motive in detail. English is not my native language, so please pardon
> my poor English.
>
> I am one of the developers of software that manages the resources used
> by user jobs on Linux HPC clusters, mainly memory. An HPC cluster may
> be shared by many users, so the memory used by each user must be
> strictly controlled; otherwise a runaway user job will not only hamper
> the other users, it can bring the entire system down in OOM.
>
> Some HPC users are very sensitive to performance. Jobs run across
> multiple compute nodes and synchronize through MPI communication.
> Since CPU wait time accrues at every synchronization point, they want
> to minimize the variation in execution time across nodes so that
> waiting times stay as short as possible. We call this variation
> "noise".
>
> THP does not guarantee the use of huge pages; it may fall back to
> normal pages.
Note: you do not want to use THP because "THP does not guarantee".
> This mechanism is one source of variation (noise).
>
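
As a rough sketch (assuming glibc on Linux and a 2MB huge page size;
the buffer length is just an example), this is how an application
typically requests THP. madvise(MADV_HUGEPAGE) is only a hint, so the
kernel may still back the range with normal 4kB pages:

  #define _GNU_SOURCE
  #include <sys/mman.h>
  #include <stdio.h>

  #define LEN (2UL * 1024 * 1024)	/* one 2MB huge page worth of memory */

  int main(void)
  {
  	/* Anonymous mapping; THP may or may not back it with huge pages. */
  	void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
  		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  	if (p == MAP_FAILED) {
  		perror("mmap");
  		return 1;
  	}

  	/* MADV_HUGEPAGE is only a hint; if compaction cannot produce a
  	 * contiguous 2MB region, the mapping stays on normal pages. */
  	if (madvise(p, LEN, MADV_HUGEPAGE))
  		perror("madvise");

  	return 0;
  }
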
> Users who are aware of this mechanism will hesitate to use THP. At the
> same time, they know the TLB hit-rate benefit of huge pages, so huge
> pages remain attractive to them. It seems natural that such users take
> an interest in HugeTLBfs, although I do not know whether it is the
> right approach.
>
> At the very least, our HPC system aims for high versatility, and we
> have to consider whether we can offer HugeTLBfs if users want to use it.
>
> In order to use HugeTLBfs we would need to create a persistent pool,
> but in our use case of shared nodes it would be impossible to create,
> delete or resize the pool.
>
> One of the answers I have reached is to use HugeTLBfs by overcommitting
> without creating a pool (this is the surplus hugepage mechanism).
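
As a rough sketch of the two knobs involved (assuming root and the
default huge page size; the value 128 is an arbitrary example):
/proc/sys/vm/nr_hugepages grows the persistent pool up front, while
/proc/sys/vm/nr_overcommit_hugepages only caps how many surplus huge
pages may be allocated from the buddy allocator on demand:

  #include <stdio.h>

  /* Write a decimal value to a sysctl file; returns 0 on success. */
  static int write_sysctl(const char *path, long val)
  {
  	FILE *f = fopen(path, "w");

  	if (!f)
  		return -1;
  	fprintf(f, "%ld\n", val);
  	return fclose(f);
  }

  int main(void)
  {
  	/* Persistent pool: reserve 128 huge pages now (needs root and
  	 * enough free contiguous memory for the reservation). */
  	if (write_sysctl("/proc/sys/vm/nr_hugepages", 128))
  		perror("nr_hugepages");

  	/* Overcommit: allow up to 128 surplus huge pages to be taken
  	 * from the buddy allocator on demand, without a pool. */
  	if (write_sysctl("/proc/sys/vm/nr_overcommit_hugepages", 128))
  		perror("nr_overcommit_hugepages");

  	return 0;
  }
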
Using hugetlbfs overcommit also does not provide a guarantee. Without
doing much research, I would say the failure rate for obtaining a huge
page via THP and hugetlbfs overcommit is about the same. The most
difficult issue in both cases will be obtaining a "huge page" number of
pages from the buddy allocator.

I really do not think hugetlbfs overcommit will provide any benefit over
THP for your use case. Also, new user space code is required to "fall back"
to normal pages in the case of hugetlbfs page allocation failure. This
is not needed in the THP case.
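
To make that concrete, a minimal sketch (assuming glibc, MAP_HUGETLB
support, and a 2MB huge page size; alloc_buf is just an illustrative
name) of the extra fallback path a hugetlbfs user would need, which THP
users get for free:

  #define _GNU_SOURCE
  #include <sys/mman.h>
  #include <stdio.h>

  #define LEN (2UL * 1024 * 1024)	/* one 2MB huge page */

  /* Try a hugetlb-backed mapping first; fall back to normal pages when
   * no (surplus) huge page can be obtained. */
  static void *alloc_buf(size_t len)
  {
  	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
  		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

  	if (p != MAP_FAILED)
  		return p;

  	/* Huge page reservation failed, e.g. the buddy allocator could
  	 * not supply a contiguous 2MB region; retry with normal pages. */
  	return mmap(NULL, len, PROT_READ | PROT_WRITE,
  		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  }

  int main(void)
  {
  	void *buf = alloc_buf(LEN);

  	if (buf == MAP_FAILED) {
  		perror("mmap");
  		return 1;
  	}
  	return 0;
  }
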
--
Mike Kravetz