Message-ID: <6b2fc148-3bf9-83d5-fd5e-242ff51c9c96@kernel.dk>
Date:   Fri, 9 Jun 2023 11:41:51 -0600
From:   Jens Axboe <axboe@...nel.dk>
To:     Yu Kuai <yukuai1@...weicloud.com>, jack@...e.cz,
        andriy.shevchenko@...ux.intel.com, qiulaibin@...wei.com
Cc:     linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
        yukuai3@...wei.com, yi.zhang@...wei.com, yangerkun@...wei.com
Subject: Re: [PATCH -next] blk-mq: fix potential io hang by wrong 'wake_batch'

On 6/9/23 2:51 AM, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@...wei.com>
> 
> In __blk_mq_tag_busy/idle(), updating 'active_queues' and calculating
> 'wake_batch' is not atomic:
> 
> t1:			t2:
> __blk_mq_tag_busy	__blk_mq_tag_busy
> inc active_queues
> // assume 1->2
> 			inc active_queues
> 			// 2 -> 3
> 			blk_mq_update_wake_batch
> 			// calculate based on 3
> blk_mq_update_wake_batch
> /* calculate based on 2, while active_queues is actually 3. */
> 
> Fix this problem by protecting both with 'tags->lock'; this is not a hot
> path, so performance is not a concern. The stale value matters because a
> too-large 'wake_batch' can exceed the tags available to a queue, in which
> case its waiters are never woken and io hangs.
> 
> Fixes: 180dccb0dba4 ("blk-mq: fix tag_get wait task can't be awakened")
> Signed-off-by: Yu Kuai <yukuai3@...wei.com>
> ---
>  block/blk-mq-tag.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
> index dfd81cab5788..43fe523f39c7 100644
> --- a/block/blk-mq-tag.c
> +++ b/block/blk-mq-tag.c
> @@ -55,9 +55,10 @@ void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
>  			return;
>  	}
>  
> +	spin_lock_irq(&hctx->tags->lock);
>  	users = atomic_inc_return(&hctx->tags->active_queues);
> -
>  	blk_mq_update_wake_batch(hctx->tags, users);
> +	spin_unlock_irq(&hctx->tags->lock);
>  }
>  
>  /*
> @@ -90,9 +91,10 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
>  			return;
>  	}
>  
> +	spin_lock_irq(&tags->lock);
>  	users = atomic_dec_return(&tags->active_queues);
> -
>  	blk_mq_update_wake_batch(tags, users);
> +	spin_unlock_irq(&tags->lock);
>  
>  	blk_mq_tag_wakeup_all(tags, false);
>  }
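
To see the interleaving above outside of the kernel, here is a minimal
user-space sketch (illustrative only, not kernel code; calc_batch() is a
made-up stand-in for blk_mq_update_wake_batch()). A barrier forces the
same ordering as the diagram, so the last write to wake_batch is derived
from the stale count:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int users = 1;			/* one queue already active */
static int wake_batch;
static pthread_barrier_t step;

/* Made-up stand-in for blk_mq_update_wake_batch(): fewer tags per user. */
static int calc_batch(int u)
{
	return 64 / u;
}

static void *t2_fn(void *arg)
{
	(void)arg;
	pthread_barrier_wait(&step);		/* (1) wait for t1's increment */
	int u = atomic_fetch_add(&users, 1) + 1;	/* 2 -> 3 */
	wake_batch = calc_batch(u);		/* calculated based on 3 */
	pthread_barrier_wait(&step);		/* (2) let t1 finish last */
	return NULL;
}

int main(void)
{
	pthread_t tid;

	pthread_barrier_init(&step, NULL, 2);
	pthread_create(&tid, NULL, t2_fn, NULL);

	int u = atomic_fetch_add(&users, 1) + 1;	/* t1: 1 -> 2 */
	pthread_barrier_wait(&step);		/* (1) */
	pthread_barrier_wait(&step);		/* (2) t2 has already updated */
	wake_batch = calc_batch(u);		/* overwritten with stale 2 */

	pthread_join(tid, NULL);
	printf("users=%d, wake_batch=%d, expected wake_batch=%d\n",
	       atomic_load(&users), wake_batch,
	       calc_batch(atomic_load(&users)));
	return 0;
}

Taking a single lock around the increment and the recalculation, as the
patch does with 'tags->lock', makes the pair atomic, so whichever thread
updates last also recalculates from the final count.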

From a quick look, these are the only manipulators of active_queues.
If we're under the tags lock, why do they still need to be atomics?
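
A minimal sketch of that direction, for illustration only (it is not the
posted patch, and it assumes 'active_queues' is changed from atomic_t to
a plain unsigned int, with every reader, e.g. hctx_may_queue(), converted
to read it under 'tags->lock' or via READ_ONCE()):

void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
{
	struct blk_mq_tags *tags = hctx->tags;
	unsigned int users;

	/* ... existing shared-tagset checks and early returns ... */

	spin_lock_irq(&tags->lock);
	users = ++tags->active_queues;	/* plain counter, serialized by the lock */
	blk_mq_update_wake_batch(tags, users);
	spin_unlock_irq(&tags->lock);
}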

-- 
Jens Axboe
