Open Source and information security mailing list archives
Message-Id: <20130716.112820.260966128083756999.davem@davemloft.net>
Date:	Tue, 16 Jul 2013 11:28:20 -0700 (PDT)
From:	David Miller <davem@...emloft.net>
To:	pshelar@...ira.com
Cc:	stephen@...workplumber.org, netdev@...r.kernel.org
Subject: Re: [PATCH net] vxlan: add necessary locking on device removal

From: Pravin Shelar <pshelar@...ira.com>
Date: Sat, 13 Jul 2013 12:21:52 -0700

> On Sat, Jul 13, 2013 at 10:18 AM, Stephen Hemminger
> <stephen@...workplumber.org> wrote:
>> The socket management is now done in a workqueue (outside of the RTNL)
>> and protected by vn->sock_lock. There were two possible bugs: first,
>> the vxlan device was removed from the per-socket VNI hash table without
>> holding the lock; second, there was a race where the workqueue could
>> still run after a newly created device had been deleted.
>>
>> Signed-off-by: Stephen Hemminger <stephen@...workplumber.org>
>>
>> --- a/drivers/net/vxlan.c       2013-07-08 16:31:50.080744429 -0700
>> +++ b/drivers/net/vxlan.c       2013-07-10 20:15:47.337653899 -0700
>> @@ -1767,9 +1767,15 @@ static int vxlan_newlink(struct net *net
>>
>>  static void vxlan_dellink(struct net_device *dev, struct list_head *head)
>>  {
>> +       struct vxlan_net *vn = net_generic(dev_net(dev), vxlan_net_id);
>>         struct vxlan_dev *vxlan = netdev_priv(dev);
>>
>> +       flush_workqueue(vxlan_wq);
>> +
> Doesn't this create a dependency on the sock_work thread while holding
> the RTNL mutex?  If so, it could result in a deadlock.

What exact deadlock do you perceive?  I don't see any code path in the
sock_work handler (vxlan_sock_work) which takes the RTNL mutex.

So we should be able to safely flush any pending sock_work jobs from
vxlan_dellink().  The fact that vxlan_dellink() runs with the RTNL
mutex shouldn't cause any issues.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html