Message-ID: <20230928125237.GA407389@cmpxchg.org>
Date:   Thu, 28 Sep 2023 08:52:37 -0400
From:   Johannes Weiner <hannes@...xchg.org>
To:     Roman Gushchin <roman.gushchin@...ux.dev>
Cc:     Michal Hocko <mhocko@...e.com>, Nhat Pham <nphamcs@...il.com>,
        akpm@...ux-foundation.org, riel@...riel.com, shakeelb@...gle.com,
        muchun.song@...ux.dev, tj@...nel.org, lizefan.x@...edance.com,
        shuah@...nel.org, mike.kravetz@...cle.com, yosryahmed@...gle.com,
        linux-mm@...ck.org, kernel-team@...a.com,
        linux-kernel@...r.kernel.org, cgroups@...r.kernel.org
Subject: Re: [PATCH 0/2] hugetlb memcg accounting

On Wed, Sep 27, 2023 at 02:37:47PM -0700, Roman Gushchin wrote:
> On Wed, Sep 27, 2023 at 02:47:38PM -0400, Johannes Weiner wrote:
> > On Wed, Sep 27, 2023 at 01:21:20PM +0200, Michal Hocko wrote:
> > > On Tue 26-09-23 12:49:47, Nhat Pham wrote:
> > > > Currently, hugetlb memory usage is not accounted for in the memory
> > > > controller, which could lead to memory overprotection for cgroups with
> > > > hugetlb-backed memory. This has been observed in our production system.
> > > > 
> > > > This patch series rectifies this issue by charging the memcg when the
> > > > hugetlb folio is allocated, and uncharging when the folio is freed. In
> > > > addition, a new selftest is added to demonstrate and verify this new
> > > > behavior.
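
(For anyone skimming the thread, the shape of the change is roughly the
sketch below. The wrapper names and exact call sites are illustrative
only, not taken from the patch; the gist is simply that the huge folio
is charged to the allocating task's memcg and uncharged when it goes
back to the pool.)

	#include <linux/err.h>
	#include <linux/hugetlb.h>
	#include <linux/memcontrol.h>
	#include <linux/mm.h>

	/* Illustrative sketch only -- not the actual patch. */
	static struct folio *hugetlb_fault_alloc_charged(struct vm_area_struct *vma,
							 unsigned long addr)
	{
		struct folio *folio = alloc_hugetlb_folio(vma, addr, 0);

		if (IS_ERR(folio))
			return folio;

		/* New: make the huge folio visible to the memory controller. */
		if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL)) {
			free_huge_folio(folio);		/* back to the hugetlb pool */
			return ERR_PTR(-ENOMEM);
		}
		return folio;
	}

	/* The free side mirrors the charge. */
	static void hugetlb_release_uncharged(struct folio *folio)
	{
		mem_cgroup_uncharge(folio);
		free_huge_folio(folio);
	}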
> > > 
> > > The primary reason why hugetlb lives outside of memcg (and the core
> > > MM as well) is that it doesn't really fit the whole scheme, in several
> > > respects. First and foremost, it is an independently managed resource
> > > with its own pool management, use, and lifetime.
> > 
> > Honestly, the simpler explanation is that few people have used hugetlb
> > in regular, containerized non-HPC workloads.
> > 
> > Hugetlb has historically been much more special, and it retains a
> > specialness that warrants e.g. the hugetlb cgroup controller. But it
> > has also made strides with hugetlb_cma, migratability, madvise support,
> > etc., that allow much more on-demand use. It's no longer the case that
> > you just put a static pool of memory aside during boot and only a few
> > blessed applications use it.
> > 
> > For example, we're using hugetlb_cma very broadly with generic
> > containers. The CMA region is fully usable by movable non-huge stuff
> > until huge pages are allocated in it. With the hugetlb controller you
> > can define a maximum number of hugetlb pages that can be used per
> > container. But what if that container isn't using any? Why shouldn't
> > it be allowed to use its overall memory allowance for anon and cache
> > instead?
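
(To make that concrete -- the cgroup name and the limits below are made
up -- the per-container hugetlb cap and the memcg cap are configured
through entirely separate knobs today, which is exactly the decoupling
in question:)

	#include <stdio.h>

	/* Userspace illustration, not part of the series. With the current
	 * split, the hugetlb controller caps huge pages per cgroup while
	 * memory.max caps everything memcg accounts, which today does not
	 * include hugetlb. */
	static int write_str(const char *path, const char *val)
	{
		FILE *f = fopen(path, "w");

		if (!f)
			return -1;
		fputs(val, f);
		return fclose(f);
	}

	int main(void)
	{
		/* cgroup v2 hugetlb controller: at most 64 x 2MB huge pages (bytes) */
		write_str("/sys/fs/cgroup/mygroup/hugetlb.2MB.max", "134217728");

		/* memcg limit for anon/cache/etc. -- unused hugetlb allowance above
		 * cannot be spent here, and hugetlb usage is invisible below it */
		write_str("/sys/fs/cgroup/mygroup/memory.max", "1073741824");
		return 0;
	}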
> 
> Cool, I remember proposing hugetlb memcg stats several years ago, and if
> I remember correctly, at that time you were opposed to it on the grounds
> that huge pages are not part of the overall memcg flow: they are not
> subject to memory pressure, can't be evicted, etc., and THPs were seen
> as the long-term replacement. Even though all of the above is true,
> hugetlb has its niche, and I don't think THPs will realistically replace
> it any time soon.

Yeah, Michal's arguments very much reminded me of my stance then. I
stand corrected.

I'm still hopeful that we can make 2M work transparently. I'd expect
1G to remain in the hugetlb domain for some time to come, but even
those are mostly dynamic now with your hugetlb_cma feature!

> So I'm glad to see (and am very supportive of) this effort to make
> hugetlb more convenient and transparent for the end user.

Thanks!
