Date:	Fri, 18 Jul 2008 01:58:02 +0200
From:	Patrick McHardy <kaber@...sh.net>
To:	David Miller <davem@...emloft.net>
CC:	netdev@...r.kernel.org, johannes@...solutions.net,
	linux-wireless@...r.kernel.org
Subject: Re: [PATCH 20/31]: pkt_sched: Perform bulk of qdisc destruction in RCU.

David Miller wrote:
> From: Patrick McHardy <kaber@...sh.net>
> Date: Thu, 17 Jul 2008 15:48:22 +0200
> 
>> One thought that occurred to me - we could avoid all the visibility
>> issues wrt. dev->qdisc_list by simply getting rid of it :)
>>
>> If we move the qdisc list from the device to the root Qdisc itself,
>> it would become invisible automatically as soon as we assign a new
>> root qdisc to the netdev_queue. Iteration would become slightly
>> more complicated since we'd have to iterate over all netdev_queues,
>> but I think it should avoid most of the problems I mentioned
>> (besides the u32_list thing).
> 
> What might make sense is to have a special Qdisc_root structure which
> is simply:
> 
> struct Qdisc_root {
> 	struct Qdisc		qd;
> 	struct list_head	qdisc_list;
> };
> 
> Everything about tree level synchronization would be type explicit.

Device-level grafting is also explicit, so that looks like a clean
way to do it.
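
To make sure we're thinking of the same thing, iteration would then
have to walk every tx queue's root instead of dev->qdisc_list, roughly
like below (untested sketch, assuming the netdev_queue bits from this
series; qdisc_root_of() is a made-up container_of() helper):

struct Qdisc_root {
	struct Qdisc		qd;
	struct list_head	qdisc_list;	/* all qdiscs under this root */
};

static void qdisc_walk_dev(struct net_device *dev,
			   void (*fn)(struct Qdisc *, void *), void *arg)
{
	unsigned int i;

	for (i = 0; i < dev->num_tx_queues; i++) {
		struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
		struct Qdisc_root *root;
		struct Qdisc *q;

		/* made-up helper: container_of() from the root qdisc
		 * back to its Qdisc_root */
		root = qdisc_root_of(txq->qdisc_sleeping);
		list_for_each_entry(q, &root->qdisc_list, list)
			fn(q, arg);
	}
}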

> Yes, as you say, the qdisc iteration would get slightly ugly.  But
> that doesn't seem to be a huge deal.
> 
> But it seems a clean solution to the child qdisc visibility problem.
> 
> About u32_list, that thing definitely needs some spinlock.  The
> consultation of that list, and refcount mods, only occur during config
> operations.  So it's not like we have to grab this lock in the data
> paths.
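
Something like this is probably what that would amount to (rough,
untested sketch; u32_get_common() is an invented name, while u32_list,
tc_u_common and refcnt are the existing cls_u32 ones - only the lock
is new):

static DEFINE_SPINLOCK(u32_list_lock);	/* protects u32_list and refcnt */

static struct tc_u_common *u32_get_common(struct Qdisc *q)
{
	struct tc_u_common *tp_c;

	spin_lock(&u32_list_lock);
	for (tp_c = u32_list; tp_c; tp_c = tp_c->next)
		if (tp_c->q == q)
			break;
	if (tp_c)
		tp_c->refcnt++;
	spin_unlock(&u32_list_lock);
	return tp_c;
}

u32_init()/u32_destroy() would take the same lock around adding and
removing the tp_c from u32_list.
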
> 
> If we really want to sweep this problem under the rug, there is another
> way.  Have the qdisc_destroy() RCU handler kick off a workqueue, and
> grab the RTNL semaphore there during the final destruction calls. :-)

That would be the safe way. RCU destruction kept causing us bugs
for at least two years, but I actually believe the Qdisc_root
approach will work :)
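
If we do end up needing that fallback, I'd picture it roughly like
this (untested sketch; it reuses the existing q_rcu callback, assumes
struct Qdisc grows a work_struct, and __qdisc_destroy_rtnl() is just
a stand-in for whatever the final destruction path ends up being):

static void qdisc_destroy_work(struct work_struct *work)
{
	struct Qdisc *qdisc = container_of(work, struct Qdisc, destroy_work);

	rtnl_lock();
	__qdisc_destroy_rtnl(qdisc);	/* the real teardown, under RTNL */
	rtnl_unlock();
}

/* RCU callback: just defer to process context so the work item can
 * take the RTNL. */
static void __qdisc_destroy(struct rcu_head *head)
{
	struct Qdisc *qdisc = container_of(head, struct Qdisc, q_rcu);

	INIT_WORK(&qdisc->destroy_work, qdisc_destroy_work);
	schedule_work(&qdisc->destroy_work);
}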
