Message-ID: <YT+cPPyDPimHibSC@slm.duckdns.org>
Date:   Mon, 13 Sep 2021 08:45:16 -1000
From:   Tejun Heo <tj@...nel.org>
To:     Waiman Long <llong@...hat.com>
Cc:     Zefan Li <lizefan.x@...edance.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Juri Lelli <juri.lelli@...hat.com>, cgroups@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] cgroup: Fix incorrect warning from
 cgroup_apply_control_disable()

On Mon, Sep 13, 2021 at 02:43:44PM -0400, Waiman Long wrote:
> > The problem with percpu_ref_is_dying() is the fact that it becomes true
> > after percpu_ref_exit() is called in css_free_rwork_fn() which has an
> > RCU delay. If you want to catch the fact that kill_css() has been
> > called, we can check the CSS_DYING flag which is set in kill_css() by
> > commit 33c35aa481786 ("cgroup: Prevent kill_css() from being called more
> > than once"). Will that be an acceptable alternative?
> 
> Something like
> 
> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> index 881ce1470beb..851e54800ad8 100644
> --- a/kernel/cgroup/cgroup.c
> +++ b/kernel/cgroup/cgroup.c
> @@ -3140,6 +3140,9 @@ static void cgroup_apply_control_disable(struct cgroup *cg
>                         if (!css)
>                                 continue;
> 
> +                       if (css->flags & CSS_DYING)
> +                               continue;
> +

So, I don't think this would be correct. It is assumed that there are no
dying csses when control reaches this point. The right fix is making sure
that the remount path clears up dying csses before calling into this path.

Thanks.

-- 
tejun
