Message-ID: <YOSZTpTtKz2wyFO3@mtj.duckdns.org>
Date:   Tue, 6 Jul 2021 07:56:30 -1000
From:   Tejun Heo <tj@...nel.org>
To:     Yu Kuai <yukuai3@...wei.com>
Cc:     axboe@...nel.dk, cgroups@...r.kernel.org,
        linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
        yi.zhang@...wei.com
Subject: Re: [PATCH] blk-cgroup: prevent rcu_sched detected stalls warnings
 while iterating blkgs

Hello, Yu.

On Fri, Jul 02, 2021 at 12:04:44PM +0800, Yu Kuai wrote:
> blkcg_activate_policy() and blkcg_deactivate_policy() might have the
> same problem, fix them the same way.

Given that these are basically only called from module init/exit paths,
let's leave them alone for now.

> +#define BLKG_BATCH_OP_NUM 64

Can we do BLKG_DESTROY_BATCH_SIZE instead?

>  static void blkg_destroy_all(struct request_queue *q)
>  {
>  	struct blkcg_gq *blkg, *n;
> +	int count = BLKG_BATCH_OP_NUM;
>  
> +restart:
>  	spin_lock_irq(&q->queue_lock);
>  	list_for_each_entry_safe(blkg, n, &q->blkg_list, q_node) {
>  		struct blkcg *blkcg = blkg->blkcg;
> @@ -430,6 +434,17 @@ static void blkg_destroy_all(struct request_queue *q)
>  		spin_lock(&blkcg->lock);
>  		blkg_destroy(blkg);
>  		spin_unlock(&blkcg->lock);
> +
> +		/*
> +		 * in order to avoid holding the spin lock for too long, release
> +		 * it when a batch of blkgs are destroyed.
> +		 */
> +		if (!(--count)) {
> +			count = BLKG_BATCH_OP_NUM;
> +			spin_unlock_irq(&q->queue_lock);
> +			cond_resched();
> +			goto restart;
> +		}
>  	}

This part looks good otherwise.

Thanks.

-- 
tejun
