Date:	Fri, 22 Aug 2008 10:30:17 -0400
From:	jamal <hadi@...erus.ca>
To:	David Miller <davem@...emloft.net>
Cc:	jeffrey.t.kirsher@...el.com, jeff@...zik.org,
	netdev@...r.kernel.org, alexander.h.duyck@...el.com
Subject: Re: [PATCH 3/3] pkt_sched: restore multiqueue prio scheduler

On Fri, 2008-22-08 at 03:16 -0700, David Miller wrote:

> If it's just to tag traffic into different TX queues by priority,
> that's neither wise nor desirable.  What's the point?
> 
> The TX queues are useful for multiplexing traffic and separating
> the locking and cpu overhead across execution entities in the
> system.  They can also be useful for virtualization, but that's
> not relevant in this discussion.
> 
> The TX queues, on the other hand, are not useful for exposing the
> round-robin or whatever algorithm some cards happen to use to
> enforce fairness amongst the TX queues.  That's an implementation
> detail.
>
> The truth is, the only reason the RR prio scheduler got added was
> because Jamal and I didn't understand very well how to use these
> multiqueue cards, or at least I didn't understand it.
> 

For the record, I was against the approach taken, not the end goal. IIRC,
I was slapped around with a big fish at the time and so I got out of the
way. I still don't like it;->

There are two issues at stake:
1) egress multiqueue support and the desire to have concurrency based on
however many CPUs and hardware queues exist on the system.
2) scheduling of such hardware queues being executed by the hardware
(and not by software). The sketch below illustrates the difference.
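
To make the distinction concrete, here is a small user-space sketch
(illustrative only, not kernel or driver code; the queue count, the
flow hash, and the strict-priority service rule are all invented for
the example):

	/* Illustrative sketch of issue #1 vs issue #2.
	 * Not kernel code; names and numbers are made up. */
	#include <stdio.h>
	#include <stdint.h>

	#define NUM_TXQ 4

	/* Issue #1: spread flows across TX queues so CPUs don't
	 * contend on a single queue lock. A trivial flow hash stands
	 * in for whatever queue selection a real driver would do. */
	static unsigned int select_txq(uint32_t flow_hash)
	{
		return flow_hash % NUM_TXQ;
	}

	/* Issue #2: how the hardware services those queues is a
	 * separate question. Here, strict priority: lowest index
	 * with a backlog wins. */
	static int hw_dequeue(const int backlog[NUM_TXQ])
	{
		for (int q = 0; q < NUM_TXQ; q++)
			if (backlog[q] > 0)
				return q;
		return -1; /* all queues empty */
	}

	int main(void)
	{
		int backlog[NUM_TXQ] = {0};

		/* #1: several flows land in different queues concurrently. */
		uint32_t flows[] = {0xdead, 0xbeef, 0xcafe, 0xf00d};
		for (int i = 0; i < 4; i++)
			backlog[select_txq(flows[i])]++;

		/* #2: the hardware, not software, decides service order. */
		int q;
		while ((q = hw_dequeue(backlog)) >= 0) {
			printf("hardware services txq %d\n", q);
			backlog[q]--;
		}
		return 0;
	}

The point is that select_txq() and hw_dequeue() are independent
knobs: you can have #1 without caring about #2, and vice versa.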

Dave's goal: #1; run faster than Usain Bolt.
What we were solving at the time: #2. My view was to solve it with
minimal changes.

#1 and #2 are orthogonal. Yes, there is religion: Dave, yours is #1.
Intel's is #2. And there are a lot of people in Intel's camp because
they bill their customers based on QoS of resources, the wire being one
such resource.

Example: if you were to use this stuff for virtualization and gave one
customer a CPU and a hardware queue, scheduling is still important. Some
customers pay less (not everyone is Steve Wozniak with his little posse
who can jump queues).
Therefore your statement that these schemes exist to "enforce fairness
amongst the TX queues" needs to be qualified, mon ami;-> The end of
Animal Farm comes to mind: some animals are more equal than others;->
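
As a sketch of that kind of unequal service (the weights and the
"premium"/"budget" labels are invented for illustration), a weighted
round-robin among per-customer queues:

	/* Illustrative weighted round-robin among per-customer TX
	 * queues. Weights are invented; the premium customer simply
	 * gets more wire time per round. */
	#include <stdio.h>

	#define NQUEUES 2

	int main(void)
	{
		const char *owner[NQUEUES]  = { "premium", "budget" };
		const int  weight[NQUEUES]  = { 3, 1 }; /* packets per round */
		int        backlog[NQUEUES] = { 8, 8 }; /* packets waiting */

		/* Each round, queue q may send up to weight[q] packets. */
		for (int round = 1; backlog[0] + backlog[1] > 0; round++) {
			for (int q = 0; q < NQUEUES; q++) {
				for (int n = 0; n < weight[q] && backlog[q] > 0; n++) {
					printf("round %d: %s queue sends a packet\n",
					       round, owner[q]);
					backlog[q]--;
				}
			}
		}
		return 0;
	}

That is "fairness amongst the TX queues" only in the weighted sense:
the billing policy, not the software, decides who drains faster.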

[Let's forget for a minute about multi-egress-queue NICs; we still have
other hardware devices (like hardware L2/L3 switch chips) that do both
multiqueue and funky prioritization and need to work in the same scheme.]

Back to the subject:
I think if one were to use a "qdisc-pass-through" with what you have
implemented, there's opportunity to let the hardware do its scheduling
and meet the goals of the Intel folks. The filters above just select
the qdisc which is set in hardware.
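
A rough sketch of what I mean by pass-through (plain user-space C, not
the real Qdisc_ops API; the classifier here is a trivial
priority-to-queue map, standing in for whatever filter runs above):

	/* Pass-through sketch: software classifies into a hardware
	 * queue and otherwise stays out of the way; the hardware does
	 * the scheduling. Illustration only, not the kernel qdisc API. */
	#include <stdio.h>

	#define NUM_HWQ 4

	struct pkt {
		int prio; /* set by whatever filter/classifier runs above */
	};

	/* The "filter": map packet priority straight onto a hardware queue. */
	static int classify(const struct pkt *p)
	{
		return p->prio % NUM_HWQ;
	}

	/* Pass-through enqueue: no software scheduling decision at
	 * all, just hand the packet to the queue the filter selected. */
	static void enqueue(int hw_backlog[NUM_HWQ], const struct pkt *p)
	{
		hw_backlog[classify(p)]++;
	}

	int main(void)
	{
		int hw_backlog[NUM_HWQ] = {0};
		struct pkt pkts[] = { {0}, {2}, {1}, {2}, {3} };

		for (int i = 0; i < 5; i++)
			enqueue(hw_backlog, &pkts[i]);

		/* From here on the hardware's own scheduler (strict
		 * prio, RR, whatever the silicon does) drains the queues. */
		for (int q = 0; q < NUM_HWQ; q++)
			printf("hwq %d backlog: %d\n", q, hw_backlog[q]);
		return 0;
	}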

cheers,
jamal
