Date:	Thu, 03 May 2007 19:54:44 -0400
From:	jamal <hadi@...erus.ca>
To:	"Waskiewicz Jr, Peter P" <peter.p.waskiewicz.jr@...el.com>
Cc:	Patrick McHardy <kaber@...sh.net>,
	Stephen Hemminger <shemminger@...ux-foundation.org>,
	netdev@...r.kernel.org, jgarzik@...ox.com,
	cramerj <cramerj@...el.com>,
	"Kok, Auke-jan H" <auke-jan.h.kok@...el.com>,
	"Leech, Christopher" <christopher.leech@...el.com>,
	davem@...emloft.net
Subject: RE: [PATCH] IPROUTE: Modify tc for new PRIO multiqueue behavior

On Thu, 2007-03-05 at 14:03 -0700, Waskiewicz Jr, Peter P wrote:

> Here is a paper that describes what exactly we're trying to do:
> http://www.ieee802.org/3/ar/public/0503/wadekar_1_0503.pdf.  Basically
> we need the ability to pause a queue independently of another queue.

Ok, this is useful info, Peter.

Let me see if I got this right:
This new standard sends _flow control_ packets per 802.1p value?
Sounds a bit fscked. I am assuming that link flow control is still
on (not that I am a big fan). And I wonder how it fits with end-to-end
TCP flow control etc.; I can't find many details on Google. It almost
smells like credit/rate-based ATM flow control in disguise. Almost like
these folks think in terms of ATM VCs..
Is Datacenter Ethernet the name of the standard or just a marketing
term?
I suspect that vendors have not yet started deploying this technology?
Is there a switch out there that supports the feature? In your view,
is this technology going to become more prevalent, or is it just a
half-assed marketing adventure?
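
(For context on what "per 802.1p value" maps to on our side: today a
PRIO-style qdisc picks a band from skb->priority roughly as in the
sketch below - simplified from memory - and a per-priority pause would,
in effect, want to freeze just the ring behind one such band while the
other bands keep moving.)

/* simplified sketch of how a PRIO-style qdisc maps the 802.1p/TOS
 * derived skb->priority to a band; with one hardware ring per band,
 * a per-priority pause would only need to stop that band's ring. */
static int band_for_skb(const struct sk_buff *skb, const u8 *prio2band)
{
	return prio2band[skb->priority & TC_PRIO_MAX];
}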

> Because of this requirement, the kernel needs visibility into the driver
> and to have knowledge of and provide control of each queue.  Please note
> that the API I'm proposing is a generic representation of the Datacenter
> Ethernet mentioned in the paper; I figured if we're putting in an
> interface to support it, it should be generic so other technologies out
> there could easily use it.

This certainly adds a new twist to the whole thing. I agree that we need
to support the feature and I see more room for a consensus now (which
was missing before). I need to think some more.
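
To make sure we are picturing the same thing, here is what I read into
"visibility into the driver ... control of each queue" - purely a
sketch, all names invented, basically a per-ring analogue of
netif_stop_queue()/netif_wake_queue():

/* purely illustrative; names and layout invented */
struct example_tx_ring {
	unsigned long		state;		/* bit 0 == stopped */
};

struct example_netdev {
	struct example_tx_ring	ring[8];	/* e.g. one per 802.1p value */
};

static inline void example_stop_ring(struct example_netdev *dev, int i)
{
	set_bit(0, &dev->ring[i].state);	/* freeze just this ring */
}

static inline void example_wake_ring(struct example_netdev *dev, int i)
{
	clear_bit(0, &dev->ring[i].state);	/* reopen it, then kick the qdisc */
}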

> Hopefully that paper can help people understand the motivation for why I've
> done things the way they are in the patches.  Given this information,
> I'd really like to solicit feedback on the patches as they stand (both
> approach and implementation).

Like I said, this is very useful detail to know.
Give me some time to get back to you; I need to mull it over.
My strong view is still:
a)  the changes should be totally transparent to the user.
b)  any new qdiscs (WRR for example) written for multi-ring hardware
should also benefit single-ring hardware.
c)  no changes to the core; I can perhaps see a new call into the
qdisc to provide +/- credit, but I need to think some more about it
(rough sketch below) ..
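
To make (c) concrete - and this is nothing more than a sketch, no such
hook exists today - I am thinking along these lines:

/* hypothetical: the driver reports +/- credit for one ring to the
 * qdisc (e.g. descriptors reclaimed on tx-complete, or a per-priority
 * pause eating room), so the qdisc can throttle a single band without
 * the core learning anything new about rings. */
typedef void (*qdisc_credit_fn)(struct Qdisc *sch, int ring, int delta);

/* driver tx-clean path, hypothetical hook point: */
static void example_tx_clean(struct Qdisc *sch, qdisc_credit_fn credit,
			     int ring, int freed)
{
	if (credit)
		credit(sch, ring, freed);	/* positive credit: room opened up */
}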

If you can achieve those goals, we can go a long way ...

cheers,
jamal
