Message-ID: <YUFsja+cIxhFY7c0@T590>
Date: Wed, 15 Sep 2021 11:46:21 +0800
From: Ming Lei <ming.lei@...hat.com>
To: "yukuai (C)" <yukuai3@...wei.com>
Cc: axboe@...nel.dk, josef@...icpanda.com, hch@...radead.org,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
nbd@...er.debian.org, yi.zhang@...wei.com
Subject: Re: [PATCH v5 5/6] nbd: convert to use blk_mq_find_and_get_req()
On Wed, Sep 15, 2021 at 11:36:47AM +0800, yukuai (C) wrote:
> On 2021/09/15 11:16, Ming Lei wrote:
> > On Wed, Sep 15, 2021 at 09:54:09AM +0800, yukuai (C) wrote:
> > > On 2021/09/14 22:37, Ming Lei wrote:
> > > > On Tue, Sep 14, 2021 at 05:19:31PM +0800, yukuai (C) wrote:
> > > > > On 2021/09/14 15:46, Ming Lei wrote:
> > > > >
> > > > > > If the above can happen, blk_mq_find_and_get_req() may not fix it too, just
> > > > > > wondering why not take the following simpler way for avoiding the UAF?
> > > > > >
> > > > > > diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
> > > > > > index 5170a630778d..dfa5cce71f66 100644
> > > > > > --- a/drivers/block/nbd.c
> > > > > > +++ b/drivers/block/nbd.c
> > > > > > @@ -795,9 +795,13 @@ static void recv_work(struct work_struct *work)
> > > > > >  						     work);
> > > > > >  	struct nbd_device *nbd = args->nbd;
> > > > > >  	struct nbd_config *config = nbd->config;
> > > > > > +	struct request_queue *q = nbd->disk->queue;
> > > > > >  	struct nbd_cmd *cmd;
> > > > > >  	struct request *rq;
> > > > > >  
> > > > > > +	if (!percpu_ref_tryget(&q->q_usage_counter))
> > > > > > +		return;
> > > > > > +
> > > > > >  	while (1) {
> > > > > >  		cmd = nbd_read_stat(nbd, args->index);
> > > > > >  		if (IS_ERR(cmd)) {
> > > > > > @@ -813,6 +817,7 @@ static void recv_work(struct work_struct *work)
> > > > > >  		if (likely(!blk_should_fake_timeout(rq->q)))
> > > > > >  			blk_mq_complete_request(rq);
> > > > > >  	}
> > > > > > +	blk_queue_exit(q);
> > > > > >  	nbd_config_put(nbd);
> > > > > >  	atomic_dec(&config->recv_threads);
> > > > > >  	wake_up(&config->recv_wq);
> > > > > >
> > > > >
> > > > > Hi, Ming
> > > > >
> > > > > This approach is wrong.
> > > > >
> > > > > If blk_mq_freeze_queue() is called and nbd is waiting for all requests
> > > > > to complete, percpu_ref_tryget() will fail here, and a deadlock will
> > > > > occur because the requests can't complete in recv_work().
> > > >
> > > > No, percpu_ref_tryget() won't fail until ->q_usage_counter drops to zero, at
> > > > which point it is perfectly fine to do nothing in recv_work().
> > > >
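To be explicit about the semantics relied on above: percpu_ref_tryget() keeps
succeeding as long as ->q_usage_counter is non-zero, even after
blk_mq_freeze_queue() has killed the ref, and blk_queue_exit() is simply a
percpu_ref_put() on the same counter. A minimal sketch of the pattern (not
the actual nbd code, reusing the locals from the diff above):

	if (!percpu_ref_tryget(&q->q_usage_counter)) {
		/*
		 * Counter already dropped to zero: the queue is fully
		 * frozen and there is no inflight request left, so
		 * there is nothing for recv_work() to complete anyway.
		 */
		return;
	}

	/* ... look up and complete the inflight request ... */

	percpu_ref_put(&q->q_usage_counter);	/* what blk_queue_exit(q) does */
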
> > >
> > > Hi Ming
> > >
> > > This approach is a good idea; however, we should not grab q_usage_counter
> > > in recv_work(), because it will block queue freezing.
> > >
> > > How about getting q_usage_counter in nbd_read_stat(), and putting it in the
> > > error path or after request completion?
> >
> > OK, it looks like I missed that nbd_read_stat() needs to wait for the incoming
> > reply first, so how about the following change, which partitions nbd_read_stat()
> > into nbd_read_reply() and nbd_handle_reply()?
>
> Hi, Ming
>
> The change looks good to me.
>
> Do you want to send a patch to fix this?
I guess you may add an inflight check or some similar change in nbd_read_stat(),
so feel free to fold it into your series.
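
In case it is useful, the recv_work() loop I have in mind after the split would
look roughly like below. This is just a sketch: the exact signatures of
nbd_read_reply()/nbd_handle_reply() and the error handling are up to you, and
q/cmd/rq are the same locals as in the diff above:

	while (1) {
		struct nbd_reply reply;

		/* block on the socket only, no request lookup yet */
		if (nbd_read_reply(nbd, args->index, &reply))
			break;

		/*
		 * Grab ->q_usage_counter only while one reply is being
		 * handled, so the request pool can't go away under
		 * nbd_handle_reply(), and queue freezing isn't blocked
		 * for the whole lifetime of recv_work().
		 */
		if (!percpu_ref_tryget(&q->q_usage_counter))
			break;

		cmd = nbd_handle_reply(nbd, args->index, &reply);
		if (IS_ERR(cmd)) {
			percpu_ref_put(&q->q_usage_counter);
			break;
		}

		rq = blk_mq_rq_from_pdu(cmd);
		if (likely(!blk_should_fake_timeout(rq->q)))
			blk_mq_complete_request(rq);
		percpu_ref_put(&q->q_usage_counter);
	}

If the tryget fails, the queue is already (being) frozen with no request
inflight, so the incoming reply can only be a stale message and can be dropped.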
Thanks,
Ming