Message-ID: <7d180c47-07c8-4d33-ab64-f9bf31671b9f@huaweicloud.com>
Date: Mon, 18 Aug 2025 14:42:32 +0800
From: Chen Ridong <chenridong@...weicloud.com>
To: tj@...nel.org, hannes@...xchg.org, mkoutny@...e.com, lizefan@...wei.com
Cc: cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
lujialin4@...wei.com, chenridong@...wei.com, hdanton@...a.com,
gaoyingjie@...ontech.com
Subject: Re: [PATCH v5] cgroup: split cgroup_destroy_wq into 3 workqueues
On 2025/8/18 14:14, Chen Ridong wrote:
>
> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> index 312c6a8b55bb..679dc216e3ed 100644
> --- a/kernel/cgroup/cgroup.c
> +++ b/kernel/cgroup/cgroup.c
> @@ -126,8 +126,22 @@ DEFINE_PERCPU_RWSEM(cgroup_threadgroup_rwsem);
> * of concurrent destructions. Use a separate workqueue so that cgroup
> * destruction work items don't end up filling up max_active of system_wq
> * which may lead to deadlock.
> + *
> + * A cgroup destruction operation enqueues its work sequentially to:
> + * cgroup_offline_wq: used for css offline work
> + * cgroup_release_wq: used for css release work
> + * cgroup_free_wq: used for free work
> + *
> + * Rationale for using separate workqueues:
> + * The cgroup root free work may depend on completion of other css offline
> + * operations. If all work items were enqueued on a single workqueue, this
> + * could create a deadlock scenario where:
> + * - the free work waits for other css offline work to complete, but
> + * - that css offline work is queued behind the free work in the same queue.
> */
In v5, more comments are added to clarify why the destroy work is split into three workqueues.
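
For reference, a minimal sketch of what the split might look like (not the actual patch; the workqueue names follow the comment above, while the init function name, flags, and max_active value are assumptions):

#include <linux/init.h>
#include <linux/workqueue.h>
#include <linux/bug.h>

/*
 * Sketch only: allocate three separate workqueues so that free work is
 * never stuck behind css offline work competing for the same max_active
 * budget.  Names follow the comment in the diff above; everything else
 * here is illustrative.
 */
static struct workqueue_struct *cgroup_offline_wq;
static struct workqueue_struct *cgroup_release_wq;
static struct workqueue_struct *cgroup_free_wq;

static int __init cgroup_destroy_wq_init(void)
{
	/* max_active = 1 is assumed, mirroring the original cgroup_destroy_wq */
	cgroup_offline_wq = alloc_workqueue("cgroup_offline", 0, 1);
	cgroup_release_wq = alloc_workqueue("cgroup_release", 0, 1);
	cgroup_free_wq    = alloc_workqueue("cgroup_free", 0, 1);
	BUG_ON(!cgroup_offline_wq || !cgroup_release_wq || !cgroup_free_wq);

	return 0;
}
core_initcall(cgroup_destroy_wq_init);

With separate queues, a free work item that waits on another css's offline work cannot block it, because the offline work no longer sits behind the free work in the same queue.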
--
Best regards,
Ridong