Message-ID: <Yif+QZbCALQcYrFZ@carbon.dhcp.thefacebook.com>
Date: Tue, 8 Mar 2022 17:09:21 -0800
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Yafang Shao <laoar.shao@...il.com>
Cc: ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
kafai@...com, songliubraving@...com, yhs@...com,
john.fastabend@...il.com, kpsingh@...nel.org,
akpm@...ux-foundation.org, cl@...ux.com, penberg@...nel.org,
rientjes@...gle.com, iamjoonsoo.kim@....com, vbabka@...e.cz,
hannes@...xchg.org, mhocko@...nel.org, vdavydov.dev@...il.com,
guro@...com, linux-mm@...ck.org, netdev@...r.kernel.org,
bpf@...r.kernel.org
Subject: Re: [PATCH RFC 0/9] bpf, mm: recharge bpf memory from offline memcg
On Tue, Mar 08, 2022 at 01:10:47PM +0000, Yafang Shao wrote:
> When we use memcg to limit the containers which load bpf progs and maps,
> we find an issue: the lifecycles of the container and the bpf objects are
> not always the same, because we may pin the maps and progs while updating
> only the container. So once a container which has already pinned progs and
> maps is restarted, the pinned progs and maps are no longer charged to it.
> In other words, this kind of container can steal memory from the host,
> which is not what we expect. This patchset aims to resolve this issue.
>
> After the container is restarted, the old memcg charged by the pinned
> progs and maps will be offline, but it won't be freed until all of the
> related maps and progs are freed. If we want to charge this bpf memory to
> the newly started memcg, we have to uncharge it from the offline memcg
> first and then charge it to the new one. As we already know how the bpf
> memory is allocated and freed, we also know how to charge and uncharge
> it. This patchset implements various charge and uncharge methods for this
> memory.
>
> Regarding how to do the recharge, we decided to implement a new bpf
> syscall command for it. With this new command, the agent running in the
> container can trigger the recharge. As of now we only implement it for
> the bpf hash maps. Below is a simple example of how to do the recharge;
> a sketch of how an agent could drive it for a pinned map follows it.
>
> ====
> #include <stdio.h>
> #include <stdlib.h>
> #include <unistd.h>
> #include <sys/syscall.h>
> #include <linux/bpf.h>	/* BPF_MAP_RECHARGE is added by this patchset */
>
> int main(int argc, char *argv[])
> {
> 	union bpf_attr attr = {};
> 	int map_id;
> 	int pfd;
>
> 	if (argc < 2) {
> 		printf("Pls. give a map id\n");
> 		exit(-1);
> 	}
>
> 	map_id = atoi(argv[1]);
> 	attr.map_id = map_id;
> 	pfd = syscall(SYS_bpf, BPF_MAP_RECHARGE, &attr, sizeof(attr));
> 	if (pfd < 0)
> 		perror("BPF_MAP_RECHARGE");
>
> 	return 0;
> }
>
> ====
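>
> Just to sketch how an agent inside the restarted container might drive
> this for a map pinned in bpffs (this helper is not part of the patchset;
> the map id is looked up with the existing BPF_OBJ_GET and
> BPF_OBJ_GET_INFO_BY_FD commands, and only BPF_MAP_RECHARGE is new):
>
> ====
> #include <string.h>
> #include <unistd.h>
> #include <sys/syscall.h>
> #include <linux/bpf.h>
>
> /* Resolve the id of the map pinned at @path, then recharge it to the
>  * caller's memcg via the proposed BPF_MAP_RECHARGE command.
>  */
> static int recharge_pinned_map(const char *path)
> {
> 	union bpf_attr attr;
> 	struct bpf_map_info info = {};
> 	int fd, err;
>
> 	/* Open the pinned map to get a local fd. */
> 	memset(&attr, 0, sizeof(attr));
> 	attr.pathname = (__u64)(unsigned long)path;
> 	fd = syscall(SYS_bpf, BPF_OBJ_GET, &attr, sizeof(attr));
> 	if (fd < 0)
> 		return -1;
>
> 	/* Look up the map id behind the fd. */
> 	memset(&attr, 0, sizeof(attr));
> 	attr.info.bpf_fd = fd;
> 	attr.info.info_len = sizeof(info);
> 	attr.info.info = (__u64)(unsigned long)&info;
> 	err = syscall(SYS_bpf, BPF_OBJ_GET_INFO_BY_FD, &attr, sizeof(attr));
> 	close(fd);
> 	if (err)
> 		return -1;
>
> 	/* Recharge the map to the current task's memcg. */
> 	memset(&attr, 0, sizeof(attr));
> 	attr.map_id = info.id;
> 	return syscall(SYS_bpf, BPF_MAP_RECHARGE, &attr, sizeof(attr));
> }
>
> int main(int argc, char *argv[])
> {
> 	if (argc < 2)
> 		return 1;
>
> 	return recharge_pinned_map(argv[1]) < 0 ? 1 : 0;
> }
> ====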
>
> Patches #1 and #2 are for observability, with which we can easily check
> whether a bpf map is charged to a memcg and whether that memcg is offline.
> Patches #3, #4 and #5 add the charge and uncharge methods for vmalloc-ed,
> kmalloc-ed and percpu memory.
> Patches #6~#9 implement the recharge of the bpf hash map, which is the map
> type mostly used by our bpf services. The other map types haven't been
> implemented yet, and neither have bpf progs.
>
> This patchset is still a PoC for now, with limited testing. Any feedback
> is welcome.
Hello Yafang!
It's an interesting topic, which goes well beyond bpf. In general, on cgroup
offlining we either do nothing or recharge pages to the parent cgroup (the
latter is preferred), which helps to release the pinned memcg structure.
Your approach raises some questions:
1) what if the new cgroup is not large enough to contain the bpf map?
2) does it mean that some userspace app will monitor the state of the cgroup
which was the original owner of the bpf map and recharge once it's deleted?
3) what if several cgroups are sharing the same map? Who will be
the next owner?
4) because recharging is fully voluntary, why would any application want to
do it if it can just use the memory for free? It doesn't really look like a
working resource control mechanism.
Will reparenting work for your case? If not, can you please describe the
problem you're trying to solve by recharging the memory?
Thanks!