Date:   Tue, 20 Sep 2022 12:49:00 +0800
From:   Ming Lei <ming.lei@...hat.com>
To:     Ziyang Zhang <ZiyangZhang@...ux.alibaba.com>
Cc:     axboe@...nel.dk, xiaoguang.wang@...ux.alibaba.com,
        linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
        joseph.qi@...ux.alibaba.com
Subject: Re: [PATCH V3 5/7] ublk_drv: consider recovery feature in aborting
 mechanism

On Tue, Sep 20, 2022 at 12:39:31PM +0800, Ziyang Zhang wrote:
> On 2022/9/20 12:01, Ming Lei wrote:
> > On Tue, Sep 20, 2022 at 11:24:12AM +0800, Ziyang Zhang wrote:
> >> On 2022/9/20 11:04, Ming Lei wrote:
> >>> On Tue, Sep 20, 2022 at 09:49:33AM +0800, Ziyang Zhang wrote:
> >>>
> >>> The following delta patch against patch 5 shows the idea:
> >>>
> >>>
> >>> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> >>> index 4409a130d0b6..60c5786c4711 100644
> >>> --- a/drivers/block/ublk_drv.c
> >>> +++ b/drivers/block/ublk_drv.c
> >>> @@ -656,7 +656,8 @@ static void ublk_complete_rq(struct request *req)
> >>>   * Also aborting may not be started yet, keep in mind that one failed
> >>>   * request may be issued by block layer again.
> >>>   */
> >>> -static void __ublk_fail_req(struct ublk_io *io, struct request *req)
> >>> +static void __ublk_fail_req(struct ublk_queue *ubq, struct ublk_io *io,
> >>> +		struct request *req)
> >>>  {
> >>>  	WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_ACTIVE);
> >>>  
> >>> @@ -667,7 +668,10 @@ static void __ublk_fail_req(struct ublk_io *io, struct request *req)
> >>>  				req->tag,
> >>>  				io->flags);
> >>>  		io->flags |= UBLK_IO_FLAG_ABORTED;
> >>> -		blk_mq_end_request(req, BLK_STS_IOERR);
> >>> +		if (ublk_queue_can_use_recovery_reissue(ubq))
> >>> +			blk_mq_requeue_request(req, false);
> >>
> >> Here is one problem:
> >> We reset io->flags to 0 in ublk_queue_reinit(), and it is called before a new
> > 
> > As we agreed, ublk_queue_reinit() will be moved to ublk_ch_release(), at which point
> > there is no inflight request: each has been completed by either the ublk server or
> > __ublk_fail_req().
> > 
> > So clearing io->flags isn't related to quiescing the device.
> > 
> >> ubq_daemon with FETCH_REQ is accepted. ublk_abort_queue() is not protected by
> >> ub_mutex, and it is called many times in monitor_work, so the same rq may be
> >> requeued multiple times.
> > 
> > UBLK_IO_FLAG_ABORTED is set for the slot, so one req is ended or
> > requeued just once.
> 
> Yes, we can move ublk_queue_reinit() into ublk_ch_release(), but monitor_work is scheduled
> periodically, so ublk_abort_queue() is called multiple times. Since ublk_queue_reinit() clears
> io->flags, ublk_abort_queue() can requeue the same rq twice. Note that monitor_work can be
> scheduled after ublk_ch_release().

No, monitor work is supposed to be shut down after in-flight requests are
drained.
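
To make that ordering concrete, roughly something like the sketch below
(sketch only, not the actual patch; ublk_drain_dev() is a hypothetical
drain helper named here just for illustration, while
cancel_delayed_work_sync() and ub->monitor_work are from the existing
driver):

	/*
	 * Once every request owned by the dying ubq_daemon has been
	 * ended or requeued, cancel the monitor work synchronously, so
	 * ublk_abort_queue() cannot run again on a queue that
	 * ublk_ch_release() has already reinitialized.
	 */
	static void ublk_drain_then_stop_monitor(struct ublk_device *ub)
	{
		ublk_drain_dev(ub);	/* hypothetical: end or requeue all inflight reqs */
		cancel_delayed_work_sync(&ub->monitor_work);
	}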

>  
> > 
> >>
> >> With recovery disabled, there is no such problem since io->flags does not change
> >> until ublk_dev is released.
> > 
> > But we have agreed that ublk_queue_reinit() can be moved to the release
> > handler of /dev/ublkcN.
> > 
> >>
> >> In my patch 5 I requeue the same rq only once, so reusing ublk_abort_queue()
> >> is hard for the recovery feature.
> > 
> > No, the same rq is requeued just once. Here the point is:
> > 
> > 1) reuse the previous pattern in ublk_stop_dev(), which has proven to
> > work reliably
> > 
> > 2) avoid staying in a half-working state forever
> > 
> > 3) the underlying idea is simpler.
> 
> Ming, your patch requeues rqs with ACTIVE unset; these rqs have been issued to the
> dying ubq_daemon. What I am concerned about is inflight rqs with ACTIVE set.

My patch drains all inflight requests whether ACTIVE is set or not,
and that is why it is simpler.
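
For clarity, the drain loop looks roughly like the sketch below
(illustrative only, not the actual patch: blk_mq_tag_to_rq(),
blk_mq_request_started(), io_uring_cmd_done() and the flag names exist
in the driver, but the loop shape here is an assumption):

	static void ublk_drain_queue(struct ublk_device *ub,
			struct ublk_queue *ubq)
	{
		int i;

		for (i = 0; i < ubq->q_depth; i++) {
			struct ublk_io *io = &ubq->ios[i];
			struct request *rq = blk_mq_tag_to_rq(
					ub->tag_set.tags[ubq->q_id], i);

			if (io->flags & UBLK_IO_FLAG_ACTIVE) {
				/* slot still owned by io_uring: return the cmd first */
				io->flags &= ~UBLK_IO_FLAG_ACTIVE;
				io_uring_cmd_done(io->cmd, UBLK_IO_RES_ABORT, 0);
			}

			/*
			 * End or requeue whatever request is bound to this
			 * tag; the UBLK_IO_FLAG_ABORTED check inside
			 * __ublk_fail_req() makes this happen exactly once
			 * per request even if the loop runs again.
			 */
			if (rq && blk_mq_request_started(rq))
				__ublk_fail_req(ubq, io, rq);
		}
	}

Even if monitor_work fires once more before it is shut down, the second
pass sees UBLK_IO_FLAG_ABORTED and skips the request, so no rq is
requeued twice.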

Thanks,
Ming
