Message-ID: <253pm3nuojy.fsf@nvidia.com>
Date: Wed, 16 Aug 2023 16:03:45 +0300
From: Aurelien Aptel <aaptel@...dia.com>
To: Sagi Grimberg <sagi@...mberg.me>, linux-nvme@...ts.infradead.org,
netdev@...r.kernel.org, hch@....de, kbusch@...nel.org, axboe@...com,
chaitanyak@...dia.com, davem@...emloft.net, kuba@...nel.org
Cc: Or Gerlitz <ogerlitz@...dia.com>, aurelien.aptel@...il.com,
smalin@...dia.com, malin1024@...il.com, yorayz@...dia.com,
borisp@...dia.com, galshalom@...dia.com, mgurtovoy@...dia.com
Subject: Re: [PATCH v12 10/26] nvme-tcp: Deal with netdevice DOWN events
Sagi Grimberg <sagi@...mberg.me> writes:
>> + switch (event) {
>> + case NETDEV_GOING_DOWN:
>> + mutex_lock(&nvme_tcp_ctrl_mutex);
>> + list_for_each_entry(ctrl, &nvme_tcp_ctrl_list, list) {
>> + if (ndev == ctrl->offloading_netdev)
>> + nvme_tcp_error_recovery(&ctrl->ctrl);
>> + }
>> + mutex_unlock(&nvme_tcp_ctrl_mutex);
>> + flush_workqueue(nvme_reset_wq);
>
> In what context is this called? because every time we flush a workqueue,
> lockdep finds another reason to complain about something...
Thanks for highlighting this. We re-checked and found that this path is
already covered by nvme_tcp_error_recovery(), so we can remove the
flush_workqueue() call above.
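
For reference, the notifier case after dropping the flush would look
roughly like this (a sketch based only on the quoted hunk; the
surrounding notifier callback, variable declarations, and the
NVME_TCP_OFF-related plumbing from the rest of the series are assumed,
not reproduced):

```c
/* Sketch: NETDEV_GOING_DOWN handling with flush_workqueue() removed,
 * per the discussion above. Teardown of the offloaded queues is left
 * to nvme_tcp_error_recovery(); identifiers are taken from the quoted
 * hunk (nvme_tcp_ctrl_mutex, nvme_tcp_ctrl_list, offloading_netdev).
 */
switch (event) {
case NETDEV_GOING_DOWN:
	mutex_lock(&nvme_tcp_ctrl_mutex);
	list_for_each_entry(ctrl, &nvme_tcp_ctrl_list, list) {
		/* Only controllers offloaded to the departing netdev
		 * need recovery scheduled.
		 */
		if (ndev == ctrl->offloading_netdev)
			nvme_tcp_error_recovery(&ctrl->ctrl);
	}
	mutex_unlock(&nvme_tcp_ctrl_mutex);
	break;
}
```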
> Otherwise looks good,
> Reviewed-by: Sagi Grimberg <sagi@...mberg.me>
Thanks