Date:	Mon, 13 Dec 2010 19:23:26 +0200
From:	Octavian Purdila <opurdila@...acom.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	netdev@...r.kernel.org,
	Lucian Adrian Grijincu <lucian.grijincu@...il.com>,
	Vlad Dogaru <ddvlad@...edu.org>
Subject: Re: [PATCH net-next-2.6] net: add dev_close_many

From: Eric Dumazet <eric.dumazet@...il.com>
Date: Monday 13 December 2010, 18:52:25

> Hmm, I think this solves the "rmmod dummy" case, but not the "dismantle
> devices one by one" case, which is the general one (on heavy-duty
> tunnel/ppp servers).
> 
> I think we could use a kernel thread (a workqueue presumably) handling
> 3 lists of devices to be dismantled, respecting one rcu grace period (or
> rcu_barrier()) before the transfer of an item from one list to the
> following one.
> 
> This way, each device removal could post a device to this kernel thread
> and return to the user immediately. The time RTNL is held would be
> reduced (calls to synchronize_rcu() would be done with RTNL not held).
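
Just to check that I'm reading the idea right: something along these lines?
(completely untested sketch, all the names below are invented)

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/rcupdate.h>

static void netdev_dismantle_worker(struct work_struct *work);
static DECLARE_WORK(netdev_dismantle_work, netdev_dismantle_worker);

/* stage[0] is fed by the unregister path; every batch advances one stage
 * per RCU grace period and is finally torn down after leaving stage[2] */
static struct list_head dismantle_stage[3] = {
	LIST_HEAD_INIT(dismantle_stage[0]),
	LIST_HEAD_INIT(dismantle_stage[1]),
	LIST_HEAD_INIT(dismantle_stage[2]),
};
static DEFINE_SPINLOCK(dismantle_lock);

static void netdev_dismantle_worker(struct work_struct *work)
{
	LIST_HEAD(done);
	bool more;

	/* one grace period before every transfer between stages */
	synchronize_rcu();

	spin_lock(&dismantle_lock);
	list_splice_init(&dismantle_stage[2], &done);
	list_splice_init(&dismantle_stage[1], &dismantle_stage[2]);
	list_splice_init(&dismantle_stage[0], &dismantle_stage[1]);
	more = !list_empty(&dismantle_stage[1]) ||
	       !list_empty(&dismantle_stage[2]);
	spin_unlock(&dismantle_lock);

	/* final teardown/free of the devices on 'done' would go here */

	if (more)
		schedule_work(&netdev_dismantle_work);
}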

We also run into the case where we have to dismantle interfaces one by one,
but we work around it by gathering the requests in userspace and then doing
a single unregister_netdevice_many operation.
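
(For reference, our batched path boils down to the existing
unregister_netdevice_queue() / unregister_netdevice_many() pair, roughly
like this -- in_this_batch() is just a made-up placeholder for however the
gathered requests are matched, and 'net' is the namespace being cleaned up:)

	LIST_HEAD(unreg_list);
	struct net_device *dev, *tmp;

	rtnl_lock();
	for_each_netdev_safe(net, dev, tmp) {
		if (in_this_batch(dev))		/* made-up predicate */
			unregister_netdevice_queue(dev, &unreg_list);
	}
	/* whole batch: one RTNL hold, one round of grace periods */
	unregister_netdevice_many(&unreg_list);
	rtnl_unlock();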

I like the kernel thread / workqueue idea. But we would still need
unregister_netdevice_many and dev_close_many, right? unregister_netdevice()
would put the device on the unregister list, and the kernel thread would
then call unregister_netdevice_many().
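
Something like this is what I have in mind (again untested; it reuses the
stage list and work item from the sketch above, and assumes dev_close_many()
from this patch takes the same kind of list as unregister_netdevice_many()):

	/* unregister_netdevice() would only queue the device and kick the
	 * worker instead of paying for the grace periods itself */
	spin_lock(&dismantle_lock);
	list_add_tail(&dev->unreg_list, &dismantle_stage[0]);
	spin_unlock(&dismantle_lock);
	schedule_work(&netdev_dismantle_work);

while the worker, once it has spliced a batch off the stage lists, would
handle the whole batch under a single RTNL hold:

	rtnl_lock();
	dev_close_many(&batch);
	unregister_netdevice_many(&batch);
	rtnl_unlock();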


 
