Message-ID: <20231002144250.GA4414@cmpxchg.org>
Date:   Mon, 2 Oct 2023 10:42:50 -0400
From:   Johannes Weiner <hannes@...xchg.org>
To:     Mike Kravetz <mike.kravetz@...cle.com>
Cc:     Michal Hocko <mhocko@...e.com>, Nhat Pham <nphamcs@...il.com>,
        akpm@...ux-foundation.org, riel@...riel.com,
        roman.gushchin@...ux.dev, shakeelb@...gle.com,
        muchun.song@...ux.dev, tj@...nel.org, lizefan.x@...edance.com,
        shuah@...nel.org, yosryahmed@...gle.com, linux-mm@...ck.org,
        kernel-team@...a.com, linux-kernel@...r.kernel.org,
        cgroups@...r.kernel.org
Subject: Re: [PATCH 0/2] hugetlb memcg accounting

On Sun, Oct 01, 2023 at 04:27:30PM -0700, Mike Kravetz wrote:
> On 09/27/23 14:47, Johannes Weiner wrote:
> > On Wed, Sep 27, 2023 at 01:21:20PM +0200, Michal Hocko wrote:
> > > On Tue 26-09-23 12:49:47, Nhat Pham wrote:
> > 
> > So that if you use 80% hugetlb, the other memory is forced to stay in
> > the remaining 20%, or it OOMs; and that if you don't use hugetlb, the
> > group is still allowed to use the full 100% of its host memory
> > allowance, without requiring some outside agent continuously
> > monitoring and adjusting the container limits.
> 
> Jumping in late here as I was traveling last week.  In addition, I want
> to state my limited cgroup knowledge up front.
> 
> I was thinking of your scenario above a little differently.  Suppose a
> group is up and running at almost 100% memory usage.  However, the majority
> of that memory is reclaimable.  Now, someone wants to allocate a 2MB hugetlb
> page.  There is not 2MB free, but we could easily reclaim 2MB to make room
> for the hugetlb page.  I may be missing something, but I do not see how that
> is going to happen.  It seems like we would really want that behavior.

But that is actually what it does, no?

alloc_hugetlb_folio
  mem_cgroup_hugetlb_charge_folio
    charge_memcg
      try_charge
        !page_counter_try_charge ?        /* charge fails against the limit */
          !try_to_free_mem_cgroup_pages ? /* reclaim couldn't make room */
            mem_cgroup_oom

So it does reclaim when a hugetlb charge hits the cgroup limit. And if
reclaim fails to make room, it OOMs the cgroup.
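
To make that control flow concrete, here is a toy userspace model of
the charge path, not kernel code; the struct and all model_* names are
invented for illustration, and it compresses try_charge() down to
charge, reclaim-on-failure, retry, then OOM:

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model of the charge path sketched above. Not kernel code;
 * all names are invented for illustration.
 */
struct model_memcg {
	long usage;		/* pages currently charged */
	long limit;		/* memory.max, in pages */
	long reclaimable;	/* pages reclaim could free */
};

/* Models page_counter_try_charge(): succeed only within the limit. */
static bool model_try_charge(struct model_memcg *memcg, long nr_pages)
{
	if (memcg->usage + nr_pages > memcg->limit)
		return false;
	memcg->usage += nr_pages;
	return true;
}

/* Models try_to_free_mem_cgroup_pages(): free what reclaim can. */
static long model_reclaim(struct model_memcg *memcg, long nr_pages)
{
	long freed = nr_pages < memcg->reclaimable ?
		     nr_pages : memcg->reclaimable;

	memcg->reclaimable -= freed;
	memcg->usage -= freed;
	return freed;
}

/* Models the try_charge() flow: charge, reclaim, retry, then OOM. */
static int model_charge(struct model_memcg *memcg, long nr_pages)
{
	if (model_try_charge(memcg, nr_pages))
		return 0;

	/* Over the limit: reclaim within this cgroup, then retry. */
	model_reclaim(memcg, nr_pages);
	if (model_try_charge(memcg, nr_pages))
		return 0;

	/* Reclaim could not make room: cgroup-local OOM. */
	printf("mem_cgroup_oom\n");
	return -1;
}

int main(void)
{
	/* Mike's scenario: usage at the limit, but mostly reclaimable. */
	struct model_memcg memcg = {
		.usage = 100, .limit = 100, .reclaimable = 90,
	};

	/* One hugetlb "page" of charge triggers reclaim, not OOM. */
	if (!model_charge(&memcg, 1))
		printf("charged after reclaim, usage=%ld/%ld\n",
		       memcg.usage, memcg.limit);
	return 0;
}

In the scenario above (usage at the limit but mostly reclaimable), this
path charges the hugetlb page after reclaiming, rather than OOMing,
which is the behavior Mike was asking for.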

Or maybe I'm missing something?
