Message-ID: <bdd4104d-390e-74c7-0de1-a275044831a5@gmail.com>
Date:   Tue, 5 Apr 2022 21:58:01 +0700
From:   Bui Quang Minh <minhquangbui99@...il.com>
To:     Michal Koutný <mkoutny@...e.com>,
        Tejun Heo <tj@...nel.org>
Cc:     cgroups@...r.kernel.org, kernel test robot <lkp@...el.com>,
        Zefan Li <lizefan.x@...edance.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        John Fastabend <john.fastabend@...il.com>,
        KP Singh <kpsingh@...nel.org>, linux-kernel@...r.kernel.org,
        netdev@...r.kernel.org, bpf@...r.kernel.org
Subject: Re: [PATCH v2] cgroup: Kill the parent controller when its last child
 is killed

On 4/5/22 16:11, Michal Koutný wrote:
> On Mon, Apr 04, 2022 at 07:37:24AM -1000, Tejun Heo <tj@...nel.org> wrote:
>> And the suggested behavior doesn't make much sense to me. It doesn't
>> actually solve the underlying problem but instead always make css
>> destructions recursive which can lead to surprises for normal use cases.
> 
> I also don't like the nested special-case use percpu_ref_kill().

After thinking more carefully, I agree with your points. Not only does 
the recursive css destruction fail to fix up the previous parents' 
metadata correctly, it is also not a desirable behavior.

> I looked at this and my supposed solution turned out to be a revert of
> commit 3c606d35fe97 ("cgroup: prevent mount hang due to memory
> controller lifetime"). So at the unmount time it's necessary to distinguish
> children that are in the process of removal from children that are online or
> pinned indefinitely.
> 
> What about:
> 
> --- a/kernel/cgroup/cgroup.c
> +++ b/kernel/cgroup/cgroup.c
> @@ -2205,11 +2205,14 @@ static void cgroup_kill_sb(struct super_block *sb)
>          struct cgroup_root *root = cgroup_root_from_kf(kf_root);
> 
>          /*
> -        * If @root doesn't have any children, start killing it.
> +        * If @root doesn't have any children held by residual state (e.g.
> +        * memory controller), start killing it, flush workqueue to filter out
> +        * transiently offlined children.
>           * This prevents new mounts by disabling percpu_ref_tryget_live().
>           *
>           * And don't kill the default root.
>           */
> +       flush_workqueue(cgroup_destroy_wq);
>          if (list_empty(&root->cgrp.self.children) && root != &cgrp_dfl_root &&
>              !percpu_ref_is_dying(&root->cgrp.self.refcnt)) {
>                  cgroup_bpf_offline(&root->cgrp);
> 
> (I suspect a race between a concurrent unmount and the last rmdir is still
> technically possible, but the flush on the kill_sb path should be affordable
> and it prevents unnecessarily conserved cgroup roots.)

Your proposed solution looks good to me. As in my example, the flush 
will guarantee that the rmdir and its deferred work have been executed 
before the cleanup in the umount path.
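
For concreteness, here is roughly how cgroup_kill_sb() would read with 
your flush folded in. I reconstructed it from the hunk above plus the 
surrounding context, so please take it as an illustration rather than 
an exact patch:

static void cgroup_kill_sb(struct super_block *sb)
{
        struct kernfs_root *kf_root = kernfs_root_from_sb(sb);
        struct cgroup_root *root = cgroup_root_from_kf(kf_root);

        /*
         * If @root doesn't have any children held by residual state
         * (e.g. memory controller), start killing it; flush the destroy
         * workqueue first so that children whose rmdir already queued
         * the deferred css_release work are unlinked from
         * root->cgrp.self.children before the check below.
         * This prevents new mounts by disabling percpu_ref_tryget_live().
         *
         * And don't kill the default root.
         */
        flush_workqueue(cgroup_destroy_wq);
        if (list_empty(&root->cgrp.self.children) && root != &cgrp_dfl_root &&
            !percpu_ref_is_dying(&root->cgrp.self.refcnt)) {
                cgroup_bpf_offline(&root->cgrp);
                percpu_ref_kill(&root->cgrp.self.refcnt);
        }
        cgroup_put(&root->cgrp);
        kernfs_kill_sb(sb);
}

With the flush, a child that was just rmdir'ed is already removed from 
root->cgrp.self.children (css_release_work_fn() does the 
list_del_rcu()), so the root is killed instead of being conserved.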

But what do you think about

diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index f01ff231a484..5578ee76e789 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -2215,6 +2215,7 @@ static void cgroup_kill_sb(struct super_block *sb)
                 cgroup_bpf_offline(&root->cgrp);
                 percpu_ref_kill(&root->cgrp.self.refcnt);
         }
+       root->cgrp.flags |= CGRP_UMOUNT;
         cgroup_put(&root->cgrp);
         kernfs_kill_sb(sb);
  }
@@ -5152,12 +5153,28 @@ static void css_release_work_fn(struct work_struct *work)
                 container_of(work, struct cgroup_subsys_state, destroy_work);
         struct cgroup_subsys *ss = css->ss;
         struct cgroup *cgrp = css->cgroup;
+       struct cgroup *parent = cgroup_parent(cgrp);

         mutex_lock(&cgroup_mutex);

         css->flags |= CSS_RELEASED;
         list_del_rcu(&css->sibling);

+       /*
+        * If parent doesn't have any children, start killing it.
+        * And don't kill the default root.
+        */
+       if (parent && list_empty(&parent->self.children) &&
+           parent->flags & CGRP_UMOUNT &&
+           parent != &cgrp_dfl_root.cgrp &&
+           !percpu_ref_is_dying(&parent->self.refcnt)) {
+#ifdef CONFIG_CGROUP_BPF
+               if (!percpu_ref_is_dying(&cgrp->bpf.refcnt))
+                       cgroup_bpf_offline(parent);
+#endif
+               percpu_ref_kill(&parent->self.refcnt);
+       }
+
         if (ss) {
                 /* css release path */
                 if (!list_empty(&css->rstat_css_node)) {

The idea is to set a flag in the umount path; then, in the rmdir path, 
the css's direct parent gets killed if it was marked at umount time, 
with no recursion here. This is just an incomplete example; we may need 
to reset that flag when remounting.
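
For the reset, I was thinking of something along these lines. This is 
purely illustrative: the helper name is a placeholder and the actual 
call site would have to be somewhere in the cgroup1 (re)mount path when 
an existing root is reused:

/*
 * Illustration only: clear the unmount marker once the hierarchy is
 * successfully mounted again, so that a later rmdir on a mounted
 * hierarchy does not kill its root. Assumes the mount path already
 * holds cgroup_mutex when this runs.
 */
static void cgroup_clear_umount_flag(struct cgroup_root *root)
{
        lockdep_assert_held(&cgroup_mutex);
        root->cgrp.flags &= ~CGRP_UMOUNT;
}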

Thanks,
Quang Minh.
