Message-Id: <1216593170.4847.137.camel@localhost>
Date:	Sun, 20 Jul 2008 18:32:50 -0400
From:	jamal <hadi@...erus.ca>
To:	David Miller <davem@...emloft.net>
Cc:	kaber@...sh.net, netdev@...r.kernel.org, johannes@...solutions.net,
	linux-wireless@...r.kernel.org
Subject: Re: [PATCH 20/31]: pkt_sched: Perform bulk of qdisc destruction in
	RCU.

On Sun, 2008-07-20 at 10:25 -0700, David Miller wrote:

> They tend to implement round-robin or some similar fairness algorithm
> amongst the queues, with zero concern about packet priorities.

pfifo_fast would be a bad choice in that case, but even a pfifo cannot
guarantee proper RR because it would present packets in FIFO order
(for example, the first 10 could go to hardware queue1 and the next 10
to hardware queue2).
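
To make that ordering point concrete, a throwaway sketch (plain C,
invented arrival pattern; this is not the actual pfifo code):

#include <stdio.h>

#define NPKTS 20

int main(void)
{
	/* Invented pattern: the first 10 packets classify to hardware
	 * queue 1, the next 10 to hardware queue 2. */
	int dest[NPKTS];
	for (int i = 0; i < NPKTS; i++)
		dest[i] = (i < 10) ? 1 : 2;

	/* A single pfifo hands packets over strictly in arrival order,
	 * so hw queue 1 gets a 10-packet burst before hw queue 2 sees
	 * anything -- the hardware cannot RR what it was never given. */
	printf("pfifo order:  ");
	for (int i = 0; i < NPKTS; i++)
		printf("%d", dest[i]);

	/* Per-hardware-queue software queues drained round-robin would
	 * interleave the two instead. */
	printf("\nper-queue RR: ");
	for (int i = 0; i < NPKTS / 2; i++)
		printf("12");
	printf("\n");
	return 0;
}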
 
My view: I think you need a software queue per hardware queue.
Maybe even have these queues reside in the driver; that way you take
care of congestion and it doesn't matter whether the hardware is RR or
strict prio (and you don't need the pfifo or pfifo_fast anymore).
The use case would be something along these lines:
A packet comes in, you classify it and find it is for queue1, grab the
per-hardware-queue1 lock, find that hardware queue1 is overloaded and
stash the packet in software queue1 instead. If hardware queue1 wasn't
congested, the packet would go straight onto it.
When hardware queue1 becomes available and is netif-woken, you pick
first from software queue1 (and batching could apply cleanly here) and
send those packets to the hardware queue.
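
A minimal userspace sketch of that path (pthread mutexes standing in
for the per-queue locks; struct txq, hw_ring_full(), hw_ring_xmit()
and txq_wake() are all hypothetical names, not kernel API):

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct pkt { struct pkt *next; };

struct txq {
	pthread_mutex_t lock;             /* per-hardware-queue lock */
	struct pkt *swq_head, *swq_tail;  /* software queue in front */
	size_t hw_inflight, hw_ring_size; /* crude hardware-ring stub */
};

static bool hw_ring_full(struct txq *q)
{
	return q->hw_inflight >= q->hw_ring_size;
}

static void hw_ring_xmit(struct txq *q, struct pkt *p)
{
	(void)p;
	q->hw_inflight++;                 /* "hand to hardware" stub */
}

static void swq_enqueue(struct txq *q, struct pkt *p)
{
	p->next = NULL;
	if (q->swq_tail)
		q->swq_tail->next = p;
	else
		q->swq_head = p;
	q->swq_tail = p;
}

/* classify() already picked this txq; only its lock is taken. */
void xmit(struct txq *q, struct pkt *p)
{
	pthread_mutex_lock(&q->lock);
	if (hw_ring_full(q))
		swq_enqueue(q, p);        /* congested: stash in s/w queue */
	else
		hw_ring_xmit(q, p);       /* otherwise straight to hardware */
	pthread_mutex_unlock(&q->lock);
}

/* Called when the hardware queue drains (the netif-wake moment):
 * the software queue is serviced first, preserving packet order. */
void txq_wake(struct txq *q)
{
	pthread_mutex_lock(&q->lock);
	while (q->swq_head && !hw_ring_full(q)) {
		struct pkt *p = q->swq_head;
		q->swq_head = p->next;
		if (!q->swq_head)
			q->swq_tail = NULL;
		hw_ring_xmit(q, p);
	}
	pthread_mutex_unlock(&q->lock);
}

int main(void)
{
	struct txq q = { .lock = PTHREAD_MUTEX_INITIALIZER,
			 .hw_ring_size = 2 };
	struct pkt pkts[4];

	for (int i = 0; i < 4; i++)
		xmit(&q, &pkts[i]);       /* 2 hit hardware, 2 get stashed */

	q.hw_inflight = 0;                /* pretend the ring drained */
	txq_wake(&q);                     /* stashed packets go out now */
	return 0;
}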

> It really is just like a bunch of queues to the physical layer,
> fairly shared.

I am surprised prioritization is not an issue. [My understanding of
the intel/cisco datacentre cabal is that they serve virtual machines
using virtual wires; I would think in such scenarios you'd have some
customers who pay more than others.]

> These things are built for parallelization, not prioritization.

Total parallelization happens in the ideal case: if X CPUs classify
packets going to X different hardware queues, each CPU grabs only the
lock for its own hardware queue. In virtualization, where only one
customer's traffic goes to a specific hardware queue, things would
work well. A non-virtualization scenario may result in collisions,
where two or more CPUs contend for the same hardware queue (either
transmitting or netif-waking, etc.).
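
To show where contention does and does not appear, a quick hedged
sketch (hypothetical classify() and per-queue lock layout, not real
driver code):

#include <pthread.h>

#define NUM_TXQ 8

struct txq {
	pthread_mutex_t lock;     /* one lock per hardware queue */
	unsigned long enqueued;
};

static struct txq txqs[NUM_TXQ];

/* Illustrative classifier: map a flow id to a hardware queue. */
static unsigned classify(unsigned flow_id)
{
	return flow_id % NUM_TXQ;
}

/* Run on each CPU: only the chosen queue's lock is taken, so CPUs
 * whose flows classify to different queues never touch the same
 * lock; two CPUs serialize only when they pick the same queue
 * (transmitting or netif-waking it). */
void cpu_xmit(unsigned flow_id)
{
	struct txq *q = &txqs[classify(flow_id)];

	pthread_mutex_lock(&q->lock);
	q->enqueued++;
	pthread_mutex_unlock(&q->lock);
}

int main(void)
{
	for (unsigned i = 0; i < NUM_TXQ; i++)
		pthread_mutex_init(&txqs[i].lock, NULL);
	cpu_xmit(1);	/* hardware queue 1 */
	cpu_xmit(9);	/* also queue 1: the collision case */
	cpu_xmit(2);	/* queue 2: independent lock, no contention */
	return 0;
}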
 
cheers,
jamal
