Message-ID: <CAADnVQLMG5_Liz_UkG7m3mN0r9ZOwkCdAOmWgbgdgir6t+tBfg@mail.gmail.com>
Date: Wed, 26 Jun 2019 13:04:59 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Roman Gushchin <guro@...com>
Cc: Alexei Starovoitov <ast@...nel.org>, Tejun Heo <tj@...nel.org>,
bpf <bpf@...r.kernel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Kernel Team <kernel-team@...com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 bpf-next] bpf: fix cgroup bpf release synchronization
On Tue, Jun 25, 2019 at 2:39 PM Roman Gushchin <guro@...com> wrote:
>
> Since commit 4bfc0bb2c60e ("bpf: decouple the lifetime of cgroup_bpf
> from cgroup itself"), cgroup_bpf release occurs asynchronously
> (from a worker context), and before the release of the cgroup itself.
>
> This introduced a previously non-existent race between the release
> and update paths: e.g. a leaf's cgroup_bpf can be released while a
> new bpf program is being attached to one of its ancestor cgroups at
> the same time. The race may result in a double-free and other memory
> corruptions.
>
> To fix the problem, let's protect the body of cgroup_bpf_release()
> with cgroup_mutex, as it effectively was before, when all this code
> was called from the cgroup release path with the cgroup mutex held
> (see the release-path sketch below the quoted message).
>
> Also, on the update path, let's skip cgroups that have no chance to
> invoke a bpf program. If the cgroup bpf refcnt has reached 0, the
> cgroup is offline (no attached processes) and there are no associated
> sockets left, so there is no point in updating its effective progs
> array; doing so after the release can even lead to a leak. So let's
> skip such cgroups (see the update-path sketch below).
>
> Big thanks to Tejun Heo for discovering and debugging this problem!
>
> Fixes: 4bfc0bb2c60e ("bpf: decouple the lifetime of cgroup_bpf from
> cgroup itself")
> Reported-by: Tejun Heo <tj@...nel.org>
> Signed-off-by: Roman Gushchin <guro@...com>
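
For reference, a minimal sketch of the release-path change described in the
quoted message. This is only an illustration under the assumption that
struct cgroup_bpf carries the release_work, progs[]/effective[] and refcnt
members introduced by commit 4bfc0bb2c60e; it is not the applied patch
itself.

    /* Sketch only: take cgroup_mutex in the worker itself, since the
     * release no longer runs from the cgroup release path that used to
     * hold the mutex.
     */
    static void cgroup_bpf_release(struct work_struct *work)
    {
            struct cgroup *cgrp = container_of(work, struct cgroup,
                                               bpf.release_work);

            mutex_lock(&cgroup_mutex);
            /* ... walk cgrp->bpf.progs[] and cgrp->bpf.effective[] and
             * free them exactly as the existing release code does ...
             */
            mutex_unlock(&cgroup_mutex);

            percpu_ref_exit(&cgrp->bpf.refcnt);
            cgroup_put(cgrp);
    }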
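
And a sketch of the update-path skip. Again an illustration, not the
applied patch: it assumes the update_effective_progs() helper and the
percpu_ref-based bpf.refcnt from the same commit, and elides the actual
recomputation of the effective array.

    /* Sketch only: a descendant whose bpf refcnt already hit zero is
     * offline with no sockets left, so recomputing (and allocating) its
     * effective array is pointless and could leak after the release.
     */
    static int update_effective_progs(struct cgroup *cgrp,
                                      enum bpf_attach_type type)
    {
            struct cgroup_subsys_state *css;

            css_for_each_descendant_pre(css, &cgrp->self) {
                    struct cgroup *desc = container_of(css, struct cgroup,
                                                       self);

                    if (percpu_ref_is_zero(&desc->bpf.refcnt))
                            continue;

                    /* ... compute and swap in the new effective array ... */
            }

            return 0;
    }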
Applied. Thanks