Message-ID: <Yy53cgcwx+hTll4R@slm.duckdns.org>
Date:   Fri, 23 Sep 2022 17:20:18 -1000
From:   Tejun Heo <tj@...nel.org>
To:     Yafang Shao <laoar.shao@...il.com>
Cc:     ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
        kafai@...com, songliubraving@...com, yhs@...com,
        john.fastabend@...il.com, kpsingh@...nel.org, sdf@...gle.com,
        haoluo@...gle.com, jolsa@...nel.org, hannes@...xchg.org,
        mhocko@...nel.org, roman.gushchin@...ux.dev, shakeelb@...gle.com,
        songmuchun@...edance.com, akpm@...ux-foundation.org,
        lizefan.x@...edance.com, cgroups@...r.kernel.org,
        netdev@...r.kernel.org, bpf@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC PATCH bpf-next 10/10] bpf, memcg: Add new item bpf into
 memory.stat

Hello,

On Wed, Sep 21, 2022 at 05:00:02PM +0000, Yafang Shao wrote:
> A new item 'bpf' is introduced into memory.stat, so we can get the memory
> consumed by bpf. Currently only the memory of bpf-maps is accounted.
> The accounting of this new item is implemented with scope-based accounting,
> similar to set_active_memcg(). Within such a scope, any memory allocated
> will be accounted or unaccounted against a specific item, which is
> specified by set_active_memcg_item().

Imma let memcg folks comment on the implementation. Hmm... I wonder how this
would tie in with the BPF memory allocator Alexei is working on.
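
Just so I'm reading the interface right, the usage would mirror the
set_active_memcg() scope pattern, something like the below (MEMCG_BPF and
the exact calling convention are my guesses from the description, not taken
from the series):

	int old_item;

	/* allocations in this scope are attributed to the 'bpf' item */
	old_item = set_active_memcg_item(MEMCG_BPF);
	ptr = kmalloc(size, GFP_KERNEL_ACCOUNT);
	set_active_memcg_item(old_item);

Is that the intended shape?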

> The result in cgroup v1 is as follows:
> 	$ cat /sys/fs/cgroup/memory/foo/memory.stat | grep bpf
> 	bpf 109056000
> 	total_bpf 109056000
> After the map is removed, the counter will drop back to zero:
> 	$ cat /sys/fs/cgroup/memory/foo/memory.stat | grep bpf
> 	bpf 0
> 	total_bpf 0
> 
> The 'bpf' counter may not drop to 0 immediately after the bpf-map is
> destroyed, because there may still be cached objects.

What's the difference between bpf and total_bpf? Where's total_bpf
implemented? It doesn't seem to be anywhere. Please also update
Documentation/admin-guide/cgroup-v2.rst.

> Note that there's no kmemcg accounting in the root memory cgroup, so the
> item 'bpf' will always be 0 in the root memory cgroup. If a bpf-map is
> charged into the root memcg directly, its memory size will not be
> accounted, so 'total_bpf' can't be used to monitor system-wide bpf memory
> consumption yet.

So, system-level accounting is usually handled separately, as it's most
likely that we'd want the same stat at the system level even when cgroup is
not in use. Here, too, it'd make sense to first implement system-level bpf
memory usage accounting, expose that through /proc/meminfo, and then use
the same source for the root-level cgroup stat.
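
Roughly something like the below (all names here are made up to illustrate
the shape, nothing is from the series): a single system-wide counter that
the bpf side charges, which both /proc/meminfo and the root-level cgroup
stat can read from.

	/* global byte counter, percpu_counter_init()'d at boot and
	 * charged from the bpf allocation/free paths */
	static struct percpu_counter bpf_mem_bytes;

	void bpf_mem_charge(long bytes)
	{
		percpu_counter_add(&bpf_mem_bytes, bytes);
	}

	/* in fs/proc/meminfo.c:meminfo_proc_show(); show_val_kb() takes
	 * a value in pages */
	show_val_kb(m, "BpfMem:         ",
		    percpu_counter_sum_positive(&bpf_mem_bytes) >> PAGE_SHIFT);

That way the root cgroup wouldn't need kmemcg to report a meaningful number
either.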

Thanks.

-- 
tejun
