Message-ID: <CADUfDZqqeeBTbgvCfHa8sr7Y7BetGbPzHYA1hMoN83kz+Bi54A@mail.gmail.com>
Date: Mon, 28 Apr 2025 08:12:52 -0700
From: Caleb Sander Mateos <csander@...estorage.com>
To: Ming Lei <ming.lei@...hat.com>
Cc: Jens Axboe <axboe@...nel.dk>, Uday Shankar <ushankar@...estorage.com>,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/8] ublk: factor out ublk_start_io() helper
On Mon, Apr 28, 2025 at 7:28 AM Caleb Sander Mateos
<csander@...estorage.com> wrote:
>
> On Sun, Apr 27, 2025 at 6:05 AM Ming Lei <ming.lei@...hat.com> wrote:
> >
> > On Sat, Apr 26, 2025 at 10:58:00PM -0600, Caleb Sander Mateos wrote:
> > > In preparation for calling it from outside ublk_dispatch_req(), factor
> > > out the code responsible for setting up an incoming ublk I/O request.
> > >
> > > Signed-off-by: Caleb Sander Mateos <csander@...estorage.com>
> > > ---
> > > drivers/block/ublk_drv.c | 53 ++++++++++++++++++++++------------------
> > > 1 file changed, 29 insertions(+), 24 deletions(-)
> > >
> > > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > > index 01fc92051754..90a38a82f8cc 100644
> > > --- a/drivers/block/ublk_drv.c
> > > +++ b/drivers/block/ublk_drv.c
> > > @@ -1151,17 +1151,44 @@ static inline void __ublk_abort_rq(struct ublk_queue *ubq,
> > > blk_mq_requeue_request(rq, false);
> > > else
> > > blk_mq_end_request(rq, BLK_STS_IOERR);
> > > }
> > >
> > > +static void ublk_start_io(struct ublk_queue *ubq, struct request *req,
> > > + struct ublk_io *io)
> > > +{
> > > + unsigned mapped_bytes = ublk_map_io(ubq, req, io);
> > > +
> > > + /* partially mapped, update io descriptor */
> > > + if (unlikely(mapped_bytes != blk_rq_bytes(req))) {
> > > + /*
> > > + * Nothing mapped, retry until we succeed.
> > > + *
> > > + * We may never succeed in mapping any bytes here because
> > > + * of OOM. TODO: reserve one buffer with single page pinned
> > > + * for providing forward progress guarantee.
> > > + */
> > > + if (unlikely(!mapped_bytes)) {
> > > + blk_mq_requeue_request(req, false);
> > > + blk_mq_delay_kick_requeue_list(req->q,
> > > + UBLK_REQUEUE_DELAY_MS);
> > > + return;
> > > + }
> > > +
> > > + ublk_get_iod(ubq, req->tag)->nr_sectors =
> > > + mapped_bytes >> 9;
> > > + }
> > > +
> > > + ublk_init_req_ref(ubq, req);
> > > +}
> > > +
> > > static void ublk_dispatch_req(struct ublk_queue *ubq,
> > > struct request *req,
> > > unsigned int issue_flags)
> > > {
> > > int tag = req->tag;
> > > struct ublk_io *io = &ubq->ios[tag];
> > > - unsigned int mapped_bytes;
> > >
> > > pr_devel("%s: complete: qid %d tag %d io_flags %x addr %llx\n",
> > > __func__, ubq->q_id, req->tag, io->flags,
> > > ublk_get_iod(ubq, req->tag)->addr);
> > >
> > > @@ -1204,33 +1231,11 @@ static void ublk_dispatch_req(struct ublk_queue *ubq,
> > > pr_devel("%s: update iod->addr: qid %d tag %d io_flags %x addr %llx\n",
> > > __func__, ubq->q_id, req->tag, io->flags,
> > > ublk_get_iod(ubq, req->tag)->addr);
> > > }
> > >
> > > - mapped_bytes = ublk_map_io(ubq, req, io);
> > > -
> > > - /* partially mapped, update io descriptor */
> > > - if (unlikely(mapped_bytes != blk_rq_bytes(req))) {
> > > - /*
> > > - * Nothing mapped, retry until we succeed.
> > > - *
> > > - * We may never succeed in mapping any bytes here because
> > > - * of OOM. TODO: reserve one buffer with single page pinned
> > > - * for providing forward progress guarantee.
> > > - */
> > > - if (unlikely(!mapped_bytes)) {
> > > - blk_mq_requeue_request(req, false);
> > > - blk_mq_delay_kick_requeue_list(req->q,
> > > - UBLK_REQUEUE_DELAY_MS);
> > > - return;
> > > - }
> >
> > Here ublk_dispatch_req() needs to return early without completing the
> > uring_cmd, but ublk_start_io() can't support that.
>
> Good catch. How about I change ublk_start_io() to return a bool
> indicating whether the I/O was successfully started?
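Concretely, something like this (untested sketch; it's just the helper
from this patch with a bool return bolted on):

static bool ublk_start_io(struct ublk_queue *ubq, struct request *req,
			  struct ublk_io *io)
{
	unsigned mapped_bytes = ublk_map_io(ubq, req, io);

	/* partially mapped, update io descriptor */
	if (unlikely(mapped_bytes != blk_rq_bytes(req))) {
		/*
		 * Nothing mapped, retry until we succeed.
		 *
		 * We may never succeed in mapping any bytes here because
		 * of OOM. TODO: reserve one buffer with single page pinned
		 * for providing forward progress guarantee.
		 */
		if (unlikely(!mapped_bytes)) {
			blk_mq_requeue_request(req, false);
			blk_mq_delay_kick_requeue_list(req->q,
					UBLK_REQUEUE_DELAY_MS);
			/* tell the caller not to complete the uring_cmd */
			return false;
		}

		ublk_get_iod(ubq, req->tag)->nr_sectors =
			mapped_bytes >> 9;
	}

	ublk_init_req_ref(ubq, req);
	return true;
}

and ublk_dispatch_req() would bail out when it returns false.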
Thinking a bit more about this, is the existing behavior of returning
early from ublk_dispatch_req() correct for UBLK_IO_NEED_GET_DATA? It
makes sense for the initial ublk_dispatch_req() because the req will
be requeued without consuming the ublk fetch request, allowing it to
be reused for a subsequent I/O. But for UBLK_IO_NEED_GET_DATA, doesn't
it mean the GET_DATA io_uring_cmd will never complete? I think it
would be better to return an error code in that case.
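i.e. something like this in the !mapped_bytes branch for the
UBLK_IO_NEED_GET_DATA case (hand-wavy and untested; it ignores the
io->flags bookkeeping, and the exact error code is TBD):

	if (unlikely(!mapped_bytes)) {
		/*
		 * We got here via UBLK_IO_NEED_GET_DATA, so silently
		 * requeueing would leave the GET_DATA io_uring_cmd
		 * pending forever. Fail both instead.
		 */
		blk_mq_end_request(req, BLK_STS_IOERR);
		io_uring_cmd_done(io->cmd, -ENOMEM, 0, issue_flags);
		return;
	}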
Best,
Caleb