Date:   Sun, 11 Oct 2020 09:54:37 +0300
From:   Or Gerlitz <gerlitz.or@...il.com>
To:     Sagi Grimberg <sagi@...mberg.me>
Cc:     Boris Pismenny <borisp@...lanox.com>,
        Jakub Kicinski <kuba@...nel.org>,
        David Miller <davem@...emloft.net>,
        Saeed Mahameed <saeedm@...dia.com>,
        Christoph Hellwig <hch@....de>, axboe@...com,
        kbusch@...nel.org, Alexander Viro <viro@...iv.linux.org.uk>,
        Eric Dumazet <edumazet@...gle.com>,
        Yoray Zack <yorayz@...lanox.com>,
        Ben Ben-Ishay <benishay@...lanox.com>,
        boris.pismenny@...il.com, linux-nvme@...ts.infradead.org,
        Linux Netdev List <netdev@...r.kernel.org>,
        Or Gerlitz <ogerlitz@...lanox.com>
Subject: Re: [PATCH net-next RFC v1 08/10] nvme-tcp: Deal with netdevice DOWN events

On Fri, Oct 9, 2020 at 1:50 AM Sagi Grimberg <sagi@...mberg.me> wrote:

> > @@ -412,8 +413,6 @@ int nvme_tcp_offload_limits(struct nvme_tcp_queue *queue,
> >               queue->ctrl->ctrl.max_segments = limits->max_ddp_sgl_len;
> >               queue->ctrl->ctrl.max_hw_sectors =
> >                       limits->max_ddp_sgl_len << (ilog2(SZ_4K) - 9);
> > -     } else {
> > -             queue->ctrl->offloading_netdev = NULL;
>
> Squash this change to the patch that introduced it.

OK, will look into that; I guess it should be fine to do it as you suggested.
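
As a side note on the surviving branch: the shift there just converts the
DDP SGL length from 4K-sized entries into 512-byte sectors
(ilog2(SZ_4K) - 9 = 3, one 4K page being eight sectors). A commented
sketch of that branch, with the comment wording being mine:

        /*
         * max_ddp_sgl_len counts 4K-sized DDP SGL entries;
         * max_hw_sectors is in 512-byte sectors, so shift left by
         * ilog2(SZ_4K) - 9 = 3 (eight sectors per 4K page).
         */
        queue->ctrl->ctrl.max_segments = limits->max_ddp_sgl_len;
        queue->ctrl->ctrl.max_hw_sectors =
                limits->max_ddp_sgl_len << (ilog2(SZ_4K) - 9);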


> > +     case NETDEV_GOING_DOWN:
> > +             mutex_lock(&nvme_tcp_ctrl_mutex);
> > +             list_for_each_entry(ctrl, &nvme_tcp_ctrl_list, list) {
> > +                     if (ndev != ctrl->offloading_netdev)
> > +                             continue;
> > +                     nvme_tcp_error_recovery(&ctrl->ctrl);
> > +             }
> > +             mutex_unlock(&nvme_tcp_ctrl_mutex);
> > +             flush_workqueue(nvme_reset_wq);
>
> Worth a small comment that we want the err_work to complete here,
> so if someone changes workqueues they may see this.


ack
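
Something along these lines, perhaps (an untested sketch on top of the
quoted hunk; the comment wording, and the assumption that err_work stays
queued on nvme_reset_wq, are mine):

        case NETDEV_GOING_DOWN:
                mutex_lock(&nvme_tcp_ctrl_mutex);
                list_for_each_entry(ctrl, &nvme_tcp_ctrl_list, list) {
                        if (ndev != ctrl->offloading_netdev)
                                continue;
                        nvme_tcp_error_recovery(&ctrl->ctrl);
                }
                mutex_unlock(&nvme_tcp_ctrl_mutex);
                /*
                 * err_work for the affected controllers runs on
                 * nvme_reset_wq; flush it so the offload teardown
                 * completes before the netdevice actually goes down.
                 * If err_work is ever moved to another workqueue, this
                 * flush must move with it.
                 */
                flush_workqueue(nvme_reset_wq);
                break;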
