Message-ID: <D5C1322C3E673F459512FB59E0DDC32902D10DA3@orsmsx414.amr.corp.intel.com>
Date:	Thu, 10 May 2007 11:22:17 -0700
From:	"Waskiewicz Jr, Peter P" <peter.p.waskiewicz.jr@...el.com>
To:	<hadi@...erus.ca>
Cc:	"Johannes Berg" <johannes@...solutions.net>,
	"Zhu, Yi" <yi.zhu@...el.com>,
	"Stephen Hemminger" <shemminger@...ux-foundation.org>,
	"Patrick McHardy" <kaber@...sh.net>, <netdev@...r.kernel.org>,
	<jgarzik@...ox.com>, "cramerj" <cramerj@...el.com>,
	"Kok, Auke-jan H" <auke-jan.h.kok@...el.com>,
	"Leech, Christopher" <christopher.leech@...el.com>,
	<davem@...emloft.net>
Subject: RE: [PATCH] IPROUTE: Modify tc for new PRIO multiqueue behavior

> Wireless offers a strict priority scheduler with statistical 
> transmit (as opposed to deterministic offered by the linux 
> strict prio qdisc); so wireless is not in the same boat as DCE.

Again, you're comparing these patches with DCE, which is not the intent.
DCE is simply one consumer I presented that can use these patches, not
the justification for them.

> Once you run the ATA over ethernet with your approach, please 
> repeat the test with a single ring in hardware and an 
> equivalent qdisc in linux.
> I dont believe you will see any difference - Linux is that good.
> This is not to say i am against your patches, I am just for 
> optimizing for the common.

I ran some tests on an isolated 1-gigabit network using 2 hardware
queues, streaming video on one queue and putting everything else on the
other.  After the buffered video is sent and the request for more video
is made, I see a slowdown with a single queue, and I see a clear
difference when these patches mitigate the impact on the different
flows.  Linux may be good at scheduling, but that doesn't help when the
hardware is being pushed to its limit: this test ran at full line rate
constantly (uncompressed MPEG for video and standard iperf settings for
the LAN traffic).
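For reference, a test along these lines could be set up with stock
iproute2 roughly as follows.  This is a hypothetical sketch: the device
name, port number, and iperf target are my assumptions, not details from
the test above, and the multiqueue flag the patch adds to PRIO is not
shown here.

```shell
# PRIO qdisc with 2 bands; with the multiqueue patches each band
# maps onto its own hardware Tx ring (flag omitted, syntax per patch).
# The 16-entry priomap sends all TOS values to band 1 by default.
tc qdisc add dev eth0 root handle 1: prio bands 2 \
    priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

# Steer the video stream (assumed UDP port 5000) into band 0 (class 1:1);
# everything else falls through to band 1 via the priomap.
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip dport 5000 0xffff flowid 1:1

# Background LAN load with default iperf settings, as in the test.
iperf -c 192.168.1.2 -t 60
```

The filter-based split is what keeps the two flows on separate bands
(and, with the patches, separate rings) regardless of TOS marking.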

I considered running tests where I resized the Tx rings to give more
buffer to the streaming video (or ATA over Ethernet, from my previous
example) and less to the LAN traffic.  I can see people who want to
guarantee more resources for latency-sensitive traffic doing this, and
it would show an even more significant impact without queue visibility
in the kernel.  I did not run these tests, though, since with unmodified
ring sizes my patches already showed less impact on the more demanding
flow than a single ring with the same qdisc.  I suggest you actually try
it and see.
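The ring-resize experiment described above (which, to be clear, was not
run) would look something like this; the device name and ring size are
assumptions for illustration.

```shell
# Inspect current and maximum supported ring sizes for the NIC.
ethtool -g eth0

# Grow the Tx ring to give the latency-sensitive flow more buffering.
# Note that stock ethtool -G applies to the device as a whole; tuning
# buffering per flow is exactly where per-queue visibility in the
# kernel would help.
ethtool -G eth0 tx 1024
```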

So I have run these tests at 1 gigabit with a 2-core and a 4-core
system.  I'd argue this is optimizing for the common case, since I used
streaming video in my test, while someone else could use ATA over
Ethernet, nbd, or VoIP and benefit in the same way.  Please provide a
counter-argument or data showing this is not the case.

> You dont believe Linux has actually been doing QoS all these 
> years before DCE? It has. And we have been separating flows 
> all those years too. 

Indeed it has been.  But the hardware is now fast enough and
feature-rich enough that the stack needs to mature and use the extra
queues.  Having multiple queues in software, multiple queues in
hardware, and a one-lane tunnel between them is not right in my opinion.
It's like taking a 2-lane highway and putting a 1-lane tunnel in the
middle of it; when traffic gets heavy, everyone is affected.  That's why
they put those neat diamond lanes on highways.  :)

> Wireless with CSMA/CA is a slightly different beast due to 
> the shared channels; its worse but not very different in 
> nature than the case where you have a shared ethernet hub 
> (CSMA/CD) and you keep adding hosts to it
> - we dont ask the qdiscs to backoff because we have a collision.
> Where i find wireless intriguing is in the case where its 
> available bandwidth adjusts given the signal strength - but 
> you are talking about HOLs not that specific phenomena.

You keep referring to doing things for the "common," but you're giving
specific wireless-based examples with specific packet scheduling
configurations.  I've given 3 scenarios of fairly different traffic
configurations where these patches will help.  Yi Zhu has also replied
that he sees wireless benefiting from these patches, but if you don't
believe that's the case, it's something you guys can hash out.

Thanks,

-PJ