Message-ID: <CANwsLLG0WuD4ZGZv_DX3AZtQMrHX1Az-aNvFY0DK6R+UxVwu8w@mail.gmail.com>
Date:   Wed, 27 Oct 2021 07:55:44 +0530
From:   Prasanna Kalever <pkalever@...hat.com>
To:     Josef Bacik <josef@...icpanda.com>, Jens Axboe <axboe@...nel.dk>,
        linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
        nbd@...er.debian.org
Cc:     Ilya Dryomov <idryomov@...hat.com>, Xiubo Li <xiubli@...hat.com>,
        Prasanna Kumar Kalever <prasanna.kalever@...hat.com>,
        Ming Lei <ming.lei@...hat.com>
Subject: Re: [PATCH v1 2/2] nbd: reset the queue/io_timeout to default on disconnect

On Thu, Sep 16, 2021 at 1:53 PM Ming Lei <ming.lei@...hat.com> wrote:
>
> On Fri, Aug 06, 2021 at 07:59:14PM +0530, pkalever@...hat.com wrote:
> > From: Prasanna Kumar Kalever <prasanna.kalever@...hat.com>
> >
> > Without any changes to NBD_ATTR_TIMEOUT (the default is 30 secs):
> > $ rbd-nbd map rbd-pool/image0 --try-netlink
> > /dev/nbd0
> > $ cat /sys/block/nbd0/queue/io_timeout
> > 30000
> > $ rbd-nbd unmap /dev/nbd0
> > $ cat /sys/block/nbd0/queue/io_timeout
> > 30000
> >
> > Now the user sets NBD_ATTR_TIMEOUT to 60:
> > $ rbd-nbd map rbd-pool/image0 --try-netlink --io-timeout 60
> > /dev/nbd0
> > $ cat /sys/block/nbd0/queue/io_timeout
> > 60000
> > $ rbd-nbd unmap /dev/nbd0
> > $ cat /sys/block/nbd0/queue/io_timeout
> > 60000
> >
> > Now the user doesn't alter NBD_ATTR_TIMEOUT, yet sysfs still shows 60:
> > $ rbd-nbd map rbd-pool/image0 --try-netlink
> > /dev/nbd0
> > $ cat /sys/block/nbd0/queue/io_timeout
> > 60000
> > $ rbd-nbd unmap /dev/nbd0
> > $ cat /sys/block/nbd0/queue/io_timeout
> > 60000
> >
> > The problem exists with the ioctl interface too.
> >
> > Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@...hat.com>
> > ---
> >  drivers/block/nbd.c | 7 ++++++-
> >  1 file changed, 6 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
> > index 16a1a14b1fd1..a45aabc4914b 100644
> > --- a/drivers/block/nbd.c
> > +++ b/drivers/block/nbd.c
> > @@ -158,6 +158,7 @@ static void nbd_connect_reply(struct genl_info *info, int index);
> >  static int nbd_genl_status(struct sk_buff *skb, struct genl_info *info);
> >  static void nbd_dead_link_work(struct work_struct *work);
> >  static void nbd_disconnect_and_put(struct nbd_device *nbd);
> > +static void nbd_set_cmd_timeout(struct nbd_device *nbd, u64 timeout);
> >
> >  static inline struct device *nbd_to_dev(struct nbd_device *nbd)
> >  {
> > @@ -1250,7 +1251,7 @@ static void nbd_config_put(struct nbd_device *nbd)
> >                       destroy_workqueue(nbd->recv_workq);
> >               nbd->recv_workq = NULL;
> >
> > -             nbd->tag_set.timeout = 0;
> > +             nbd_set_cmd_timeout(nbd, 0);
> >               nbd->disk->queue->limits.discard_granularity = 0;
> >               nbd->disk->queue->limits.discard_alignment = 0;
> >               blk_queue_max_discard_sectors(nbd->disk->queue, UINT_MAX);
> > @@ -2124,6 +2125,10 @@ static int nbd_genl_reconfigure(struct sk_buff *skb, struct genl_info *info)
> >       if (ret)
> >               goto out;
> >
> > +     /*
> > +      * On reconfigure, if NBD_ATTR_TIMEOUT is not provided, we will
> > +      * continue to use the cmd timeout provided with connect initially.
> > +      */
> >       if (info->attrs[NBD_ATTR_TIMEOUT])
> >               nbd_set_cmd_timeout(nbd,
> >                                   nla_get_u64(info->attrs[NBD_ATTR_TIMEOUT]));
> > --
> > 2.31.1
> >
>
> Looks fine:
>
> Reviewed-by: Ming Lei <ming.lei@...hat.com>

Thanks for the review, Ming.
Attempting to bring this to the top again for more reviews/acks.
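
For anyone reviewing from the diff alone: the reason routing the disconnect
path through nbd_set_cmd_timeout(nbd, 0) resets the sysfs value is that the
helper falls back to the 30-second default whenever it is passed 0. Roughly,
the existing helper in drivers/block/nbd.c looks like this (paraphrased from
memory, so treat the exact body as a sketch rather than the mainline code):

    static void nbd_set_cmd_timeout(struct nbd_device *nbd, u64 timeout)
    {
            /* remember the caller-requested timeout (in seconds) */
            nbd->tag_set.timeout = timeout * HZ;
            if (timeout)
                    blk_queue_rq_timeout(nbd->disk->queue, timeout * HZ);
            else
                    /* no explicit timeout: restore the 30 s default,
                     * i.e. queue/io_timeout reads 30000 again */
                    blk_queue_rq_timeout(nbd->disk->queue, 30 * HZ);
    }

So with this patch, after unmap the queue's io_timeout goes back to 30000
instead of carrying over the value from the previous map.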


Thanks!
--
Prasanna


>
> --
> Ming
>
