Message-Id: <20230428045149.1310073-1-tao1.su@linux.intel.com>
Date:   Fri, 28 Apr 2023 12:51:49 +0800
From:   Tao Su <tao1.su@...ux.intel.com>
To:     linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Cc:     tj@...nel.org, josef@...icpanda.com, axboe@...nel.dk,
        yukuai1@...weicloud.com, tao1.su@...ux.intel.com
Subject: [PATCH v2] block: Skip destroyed blkg when restart in blkg_destroy_all()

The kernel hangs in blkg_destroy_all() when the total number of blkgs
is greater than BLKG_DESTROY_BATCH_SIZE, because destroyed blkgs are
not removed from blkg_list. The size of blkg_list is therefore
unchanged after destroying a batch of blkgs, and the 'restart' loop
repeats forever.

Since a blkg should stay on the queue list until blkg_free_workfn(),
skip already-destroyed blkgs when restarting a new round. This
resolves the kernel hang while preserving the original intent of the
restart.

Reported-by: Xiangfei Ma <xiangfeix.ma@...el.com>
Tested-by: Xiangfei Ma <xiangfeix.ma@...el.com>
Tested-by: Farrah Chen <farrah.chen@...el.com>
Signed-off-by: Yu Kuai <yukuai1@...weicloud.com>
Signed-off-by: Tao Su <tao1.su@...ux.intel.com>
---
v2:
- change 'directly remove destroyed blkg' to 'skip destroyed blkg'

v1:
- https://lore.kernel.org/all/20230425075911.839539-1-tao1.su@linux.intel.com/

 block/blk-cgroup.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index bd50b55bdb61..75bad5d60c9f 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -528,6 +528,9 @@ static void blkg_destroy_all(struct gendisk *disk)
 	list_for_each_entry_safe(blkg, n, &q->blkg_list, q_node) {
 		struct blkcg *blkcg = blkg->blkcg;
 
+		if (hlist_unhashed(&blkg->blkcg_node))
+			continue;
+
 		spin_lock(&blkcg->lock);
 		blkg_destroy(blkg);
 		spin_unlock(&blkcg->lock);
-- 
2.34.1
