Message-ID: <Y/TeKkhQtV7Bck8P@dhcp22.suse.cz>
Date:   Tue, 21 Feb 2023 16:07:22 +0100
From:   Michal Hocko <mhocko@...e.com>
To:     Matthew Chae <matthew.chae@...s.com>
Cc:     Johannes Weiner <hannes@...xchg.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Shakeel Butt <shakeelb@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>, kernel@...s.com,
        christopher.wong@...s.com, Muchun Song <muchun.song@...ux.dev>,
        cgroups@...r.kernel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/memcontrol: add memory.peak in cgroup root

On Tue 21-02-23 15:34:20, Matthew Chae wrote:
> The kernel currently doesn't provide any way to show the overall
> system's recorded peak memory usage. Only each slice's recorded peak
> memory usage is exposed through its own memory.peak file; the cgroup
> root has no such file.
> 
> Each slice may reach its peak memory consumption at a different time,
> and that peak is recorded in the slice's own memory.peak. Summing every
> memory.peak therefore doesn't give the system's recorded peak memory
> usage: the system-wide usage can be largest at a point in time where no
> individual slice is at its own peak.
> 
>        time |  slice1  |  slice2  |   sum
>       =======================================
>         t1  |    50    |   200    |   250
>       ---------------------------------------
>         t2  |   150    |   150    |   300
>       ---------------------------------------
>         t3  |   180    |    20    |   200
>       ---------------------------------------
>         t4  |    80    |    20    |   100
> 
> The memory.peak value of slice1 is 180 and that of slice2 is 200. Only
> this per-slice information is exposed through each memory.peak, with
> nothing reflecting the overall system's peak memory usage. The sum of
> the two values is 380, but that doesn't represent the real peak memory
> usage of the overall system. The value we actually want is the 300 seen
> at t2, a point where neither slice is at its individual peak. Therefore
> a proper way to show the system's overall recorded peak memory usage
> needs to be provided.
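
For reference, the arithmetic above can be checked with a minimal
standalone sketch; the sample values come from the table, and everything
else (names, layout) is made up for illustration:

/* Minimal sketch: sum of per-slice peaks vs. the true system peak.
 * Sample data taken from the table above; not kernel code. */
#include <stdio.h>

#define NSLICES  2
#define NSAMPLES 4

int main(void)
{
	/* usage[t][slice] at times t1..t4, from the table above */
	int usage[NSAMPLES][NSLICES] = {
		{  50, 200 },	/* t1 */
		{ 150, 150 },	/* t2 */
		{ 180,  20 },	/* t3 */
		{  80,  20 },	/* t4 */
	};
	int peak[NSLICES] = { 0, 0 };	/* per-slice memory.peak analogue */
	int system_peak = 0;		/* peak of the summed usage */

	for (int t = 0; t < NSAMPLES; t++) {
		int sum = 0;

		for (int s = 0; s < NSLICES; s++) {
			if (usage[t][s] > peak[s])
				peak[s] = usage[t][s];
			sum += usage[t][s];
		}
		if (sum > system_peak)
			system_peak = sum;
	}

	/* prints 380 vs. 300: summing per-slice peaks overstates the
	 * real system peak, which is reached at t2 */
	printf("sum of peaks: %d, system peak: %d\n",
	       peak[0] + peak[1], system_peak);
	return 0;
}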

The problem I can see is that the root's peak value doesn't really
represent the system peak memory usage, because it only reflects
memcg-accounted memory. So there is plenty of memory consumption which
is not covered. On top of that, a lot of memory contributed to the root
memcg is not accounted at all (see try_charge and its callers), so the
cumulative hierarchical value is incomplete and, I believe, misleading
as well.
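
For context, the per-memcg peak is essentially a high-watermark on the
memcg's page counter, so it can only move when a charge actually passes
through the counter. A simplified sketch of that idea (loosely modeled
on the kernel's page_counter, not the actual mm/ code):

#include <stdio.h>

struct counter {
	unsigned long usage;
	unsigned long watermark;	/* what memory.peak would report */
};

static void counter_charge(struct counter *c, unsigned long nr_pages)
{
	c->usage += nr_pages;
	/* the peak only moves on an actual charge; memory that
	 * bypasses charging never updates it */
	if (c->usage > c->watermark)
		c->watermark = c->usage;
}

int main(void)
{
	struct counter root = { 0, 0 };

	counter_charge(&root, 100);	/* accounted allocation */
	/* an allocation that skips the charge path leaves both usage
	 * and watermark untouched, so the reported peak undercounts */
	printf("watermark: %lu\n", root.watermark);	/* prints 100 */
	return 0;
}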
-- 
Michal Hocko
SUSE Labs
