Date:   Thu, 12 May 2022 19:10:25 +0200
From:   Jan Kara <jack@...e.cz>
To:     "yukuai (C)" <yukuai3@...wei.com>
Cc:     Jan Kara <jack@...e.cz>, paolo.valente@...aro.org, axboe@...nel.dk,
        linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
        yi.zhang@...wei.com
Subject: Re: [PATCH -next 2/2] block, bfq: make bfq_has_work() more accurate

On Thu 12-05-22 09:30:16, yukuai (C) wrote:
> On 2022/05/11 22:08, Jan Kara wrote:
> > On Tue 10-05-22 21:16:29, Yu Kuai wrote:
> > > bfq_has_work() currently uses busy_queues, which is not accurate
> > > because a bfq_queue being busy doesn't mean that it has requests. Since
> > > bfqd already has a counter 'queued' recording how many requests are in
> > > bfq, use it instead of busy_queues.
> > > 
> > > Note that bfq_has_work() can be called with 'bfqd->lock' already held, so
> > > the lock can't be taken in bfq_has_work() to protect 'bfqd->queued'.
> > > 
> > > Signed-off-by: Yu Kuai <yukuai3@...wei.com>
> > 
> > So did you find this causing any real problem? A bfq queue is
> > accounted among the busy queues once bfq_add_bfqq_busy() is called, and
> > that happens when a new request is inserted into the queue, so it should
> > be very similar to bfqd->queued.
> > 
> > 								Honza
> 
> Hi,
> 
> The related problem is described here:
> 
> https://lore.kernel.org/all/20220510112302.1215092-1-yukuai3@huawei.com/
> 
> The root cause of the panic is a linux-block problem; however, it can
> be bypassed if bfq_has_work() is accurate. Apart from that, unnecessary
> run_work (queue re-runs) will be triggered if a bfqq stays busy while
> having no requests to dispatch:
> 
> __blk_mq_run_hw_queue
>  __blk_mq_sched_dispatch_requests
>   __blk_mq_do_dispatch_sched
>    if (!bfq_has_work())
>     break;
>    blk_mq_delay_run_hw_queues -> run again after 3ms
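
For reference, a condensed and paraphrased sketch of the dispatch path quoted
above (loosely based on block/blk-mq-sched.c; the exact code differs between
kernel versions) shows how an inaccurate has_work() keeps re-arming the queue:

	/*
	 * Condensed sketch of __blk_mq_do_dispatch_sched(), paraphrased;
	 * not the verbatim kernel source.
	 */
	static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
	{
		struct request_queue *q = hctx->queue;
		struct elevator_queue *e = q->elevator;
		bool run_queue = false;
		int count = 0;

		do {
			struct request *rq;

			/* bfq_has_work(): with busy_queues this can say "yes"
			 * even though nothing can actually be dispatched */
			if (e->type->ops.has_work && !e->type->ops.has_work(hctx))
				break;

			rq = e->type->ops.dispatch_request(hctx);
			if (!rq) {
				/* has_work() claimed work but nothing came out;
				 * make sure somebody re-runs the queue */
				run_queue = true;
				break;
			}
			count++;
			/* ... dispatch rq ... */
		} while (count < q->nr_requests);

		if (!count && run_queue)
			/* re-run after BLK_MQ_BUDGET_DELAY (3 ms), repeatedly,
			 * for as long as has_work() keeps returning true */
			blk_mq_delay_run_hw_queues(q, BLK_MQ_BUDGET_DELAY);

		return count;
	}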

Ah, I see. So it is the other way around from what I thought. Due to idling,
bfq_tot_busy_queues() can be greater than 0 even if there are no requests
to dispatch. Indeed. OK, the patch makes sense. But please use WRITE_ONCE
for the updates of bfqd->queued. Otherwise the READ_ONCE does not really
make sense (it can still result in some bogus value due to compiler
optimizations on the write side).
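
A minimal sketch of the pairing being asked for (illustrative, not the actual
follow-up patch): annotate every update of bfqd->queued, which already happens
under bfqd->lock, with WRITE_ONCE(), so that the lockless READ_ONCE() in
bfq_has_work() observes a value stored by a single access:

	/* update side, called with bfqd->lock held (sketch) */
	WRITE_ONCE(bfqd->queued, bfqd->queued + 1);	/* on request insertion */
	WRITE_ONCE(bfqd->queued, bfqd->queued - 1);	/* on request removal */

	/* lockless reader in bfq_has_work() */
	return !list_empty_careful(&bfqd->dispatch) ||
		READ_ONCE(bfqd->queued);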

								Honza

> > > ---
> > >   block/bfq-iosched.c | 4 ++--
> > >   1 file changed, 2 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> > > index 61750696e87f..1d2f8110c26b 100644
> > > --- a/block/bfq-iosched.c
> > > +++ b/block/bfq-iosched.c
> > > @@ -5063,11 +5063,11 @@ static bool bfq_has_work(struct blk_mq_hw_ctx *hctx)
> > >   	struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
> > >   	/*
> > > -	 * Avoiding lock: a race on bfqd->busy_queues should cause at
> > > +	 * Avoiding lock: a race on bfqd->queued should cause at
> > >   	 * most a call to dispatch for nothing
> > >   	 */
> > >   	return !list_empty_careful(&bfqd->dispatch) ||
> > > -		bfq_tot_busy_queues(bfqd) > 0;
> > > +		READ_ONCE(bfqd->queued);
> > >   }
> > >   static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
> > > -- 
> > > 2.31.1
> > > 
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
