Date:	Mon, 21 Jul 2008 09:08:44 -0400
From:	jamal <hadi@...erus.ca>
To:	Herbert Xu <herbert@...dor.apana.org.au>
Cc:	davem@...emloft.net, kaber@...sh.net, netdev@...r.kernel.org,
	johannes@...solutions.net, linux-wireless@...r.kernel.org
Subject: Re: [PATCH 20/31]: pkt_sched: Perform bulk of qdisc destruction in RCU.

On Mon, 2008-21-07 at 19:58 +0800, Herbert Xu wrote:

> I think I get you now.  You're suggesting that we essentially
> do what Dave has right now in the non-contending case, i.e.,
> bypassing the qdisc so we get fully parallel processing until
> one of the hardware queues seizes up.

Yes. That way there is no need for intermediate queueing. As it is
now, packets first get queued to the qdisc, then we dequeue and send to
the driver even when the driver would be happy to take them. That
approach is fine if you want to support non-work-conserving schedulers
on single-hw-queue hardware.
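
To make that concrete, here is a rough user-space model of the bypass I
have in mind (all the names here - pkt counter, txq_state, driver_xmit,
qdisc_enqueue - are made-up stand-ins, not the actual kernel code paths):

/* Minimal user-space model of "bypass the qdisc when uncontended".
 * Illustrative only; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

struct txq_state {
	bool hw_stopped;	/* hardware ring full, driver said stop */
	int  qdisc_backlog;	/* packets already sitting in the qdisc */
};

static void driver_xmit(int pkt)
{
	printf("pkt %d handed straight to the hardware ring\n", pkt);
}

static void qdisc_enqueue(int pkt, struct txq_state *q)
{
	q->qdisc_backlog++;
	printf("pkt %d buffered in the qdisc (backlog now %d)\n",
	       pkt, q->qdisc_backlog);
}

/* The decision itself: go straight to the driver only if the hw queue
 * can take the packet AND nothing is queued ahead of it, so ordering
 * within that queue is preserved. */
static void xmit(int pkt, struct txq_state *q)
{
	if (!q->hw_stopped && q->qdisc_backlog == 0)
		driver_xmit(pkt);
	else
		qdisc_enqueue(pkt, q);
}

int main(void)
{
	struct txq_state q = { .hw_stopped = false, .qdisc_backlog = 0 };

	xmit(1, &q);		/* uncontended: bypass */
	q.hw_stopped = true;	/* ring filled up */
	xmit(2, &q);		/* contended: buffered in qdisc */
	xmit(3, &q);		/* stays behind pkt 2, ordering kept */
	return 0;
}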

> At that point you'd stop all queues and make every packet go
> through the software qdisc to ensure ordering.  This continues
> until all queues have vacancies again.

I always visualize these as a single netdevice per hardware tx queue.
If I understood correctly, ordering is already taken care of in the
current patches because the stateless filter selects a hardware queue.
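
Something like this toy selector is what I mean - hashing the flow key
means every packet of a flow lands on the same tx queue, so per-flow
ordering needs no shared state. The hash and the flow fields below are
illustrative only, not what the patches actually use:

/* Toy model of stateless tx-queue selection. */
#include <stdint.h>
#include <stdio.h>

#define NUM_TX_QUEUES 4

struct flow_key {
	uint32_t saddr, daddr;
	uint16_t sport, dport;
};

static unsigned int select_txq(const struct flow_key *fk)
{
	/* any deterministic mix of the flow fields will do for the model */
	uint32_t h = fk->saddr ^ fk->daddr ^
		     ((uint32_t)fk->sport << 16 | fk->dport);

	h ^= h >> 16;
	h *= 0x45d9f3b;
	h ^= h >> 16;

	return h % NUM_TX_QUEUES;
}

int main(void)
{
	struct flow_key a = { 0x0a000001, 0x0a000002, 12345, 80 };
	struct flow_key b = { 0x0a000003, 0x0a000002, 23456, 80 };

	/* same flow -> same queue every time; different flows may spread */
	printf("flow a -> txq %u\n", select_txq(&a));
	printf("flow a -> txq %u\n", select_txq(&a));
	printf("flow b -> txq %u\n", select_txq(&b));
	return 0;
}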

Dave has those queues sitting at the qdisc level (as pfifo) - which
seems better in retrospect than what I was thinking (that they should
sit in the driver), because in the future one could decide to shape
packets per virtual customer sharing a virtual wire and attach an HTB
instead.

The one thing I am still unsure of:
I think it would be cleaner to stop just a single queue (instead of all
of them) when one hardware queue fills up, i.e., if there is no
congestion on the other hardware queues, packets should continue to be
fed to their hardware queues rather than being buffered at the qdisc
level.
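
Roughly, the contrast between the two policies I mean - again only a
user-space sketch with made-up names, not the real per-ring driver
state or netif-level stop/wake calls:

/* Sketch of the two flow-control policies under discussion. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_TX_QUEUES 4

static bool queue_stopped[NUM_TX_QUEUES];

/* Policy A: one full ring freezes the whole device; everything now
 * backs up at the qdisc even for uncongested rings. */
static void ring_full_stop_all(int full_q)
{
	for (int q = 0; q < NUM_TX_QUEUES; q++)
		queue_stopped[q] = true;
	printf("ring %d full: all %d queues stopped\n", full_q, NUM_TX_QUEUES);
}

/* Policy B (the cleaner option argued for above): only the congested
 * ring stops; traffic hashed to the other rings keeps flowing to
 * hardware instead of piling up at the qdisc. */
static void ring_full_stop_one(int full_q)
{
	queue_stopped[full_q] = true;
	printf("ring %d full: only queue %d stopped\n", full_q, full_q);
}

int main(void)
{
	ring_full_stop_all(2);

	for (int q = 0; q < NUM_TX_QUEUES; q++)
		queue_stopped[q] = false;

	ring_full_stop_one(2);
	for (int q = 0; q < NUM_TX_QUEUES; q++)
		printf("queue %d: %s\n", q,
		       queue_stopped[q] ? "stopped" : "running");
	return 0;
}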

> If this is what you're suggesting, then I think that will offer
> pretty much the same behaviour as what we've got, while still
> offering at least some (perhaps even most, but that is debatable)
> of the benefits of multi-queue.
> 
> At this point I don't think this is something that we need right
> now, but it would be good to make sure that the architecture
> allows such a thing to be implemented in future.

I think it is a pretty good first start (I am a lot more optimistic
than that, to be honest).
Parallelization would work if you can get X CPUs to send to X hardware
queues concurrently. That is feasible in a static host setup, such as a
virtualization environment where you can tie a VM to a CPU. It is not
very feasible in routing, where arriving packets drive you to a random
hardware tx queue.
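
A toy illustration of why the static case scales: if sender i is pinned
to queue i, the senders never touch each other's queue lock; in the
forwarding case the queue index comes from the arriving packet, so any
CPU can end up on any queue and they start colliding. Names and setup
below are illustrative only:

/* build with: gcc -pthread */
#include <pthread.h>
#include <stdio.h>

#define NUM_CPUS	4
#define PKTS_PER_CPU	1000000

/* one lock and counter per hardware tx queue */
static pthread_mutex_t txq_lock[NUM_CPUS] = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};
static long txq_pkts[NUM_CPUS];

/* Static setup (e.g. one VM pinned per CPU): sender i only ever uses
 * queue i, so there is never cross-CPU contention on a queue lock.
 * In the routing case the index would instead come from a hash of the
 * arriving packet, so any CPU could grab any lock. */
static void *pinned_sender(void *arg)
{
	int cpu = (int)(long)arg;

	for (long i = 0; i < PKTS_PER_CPU; i++) {
		pthread_mutex_lock(&txq_lock[cpu]);
		txq_pkts[cpu]++;
		pthread_mutex_unlock(&txq_lock[cpu]);
	}
	return NULL;
}

int main(void)
{
	pthread_t t[NUM_CPUS];

	for (long c = 0; c < NUM_CPUS; c++)
		pthread_create(&t[c], NULL, pinned_sender, (void *)c);
	for (int c = 0; c < NUM_CPUS; c++)
		pthread_join(t[c], NULL);

	for (int c = 0; c < NUM_CPUS; c++)
		printf("txq %d: %ld packets, all from its own sender\n",
		       c, txq_pkts[c]);
	return 0;
}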

cheers,
jamal

