Date:	Wed, 7 Dec 2011 14:27:57 +0100
From:	Dave Taht <dave.taht@...il.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	"John A. Sullivan III" <jsullivan@...nsourcedevel.com>,
	netdev@...r.kernel.org, Rick Jones <rick.jones2@...com>
Subject: Re: Latency difference between fifo and pfifo_fast

On Wed, Dec 7, 2011 at 2:04 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Tuesday, 6 December 2011 at 14:44 -0500, John A. Sullivan III wrote:
>> Interesting.  Would that still be true if all the traffic is the same,
>> i.e., nothing but iSCSI packets on the network? Or would just dumping
>> packets with minimal processing be fastest? Thanks - John
>
> Dave focuses on fairness and latencies under ~20 ms (a typical (under)
> provisioned ADSL (up)link shared by many (hostile) flows, with various
> types of services)

True, that is my focus, but queuing theory applies at all time scales.

If it didn't, the universe, and not just the internet, would have melted
down long ago.

And I did ask specifically what sort of latencies he was trying to address.

If he's hovering close to line rate (wow) and yet experiencing
serious delays on short traffic, what I describe below may apply.

> I doubt this is your concern? You want high throughput more than low
> latencies...

My assumption is that your 'iSCSI' packets are TCP streams. If they aren't,
then some of what I say below does not apply, although I tend to be
a believer in FQ technologies for their effects on downstream buffering.
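
For what it's worth, switching a link over to fair queuing is a one-liner;
a minimal sketch with tc, where eth0 is just a placeholder device:

    # replace the root qdisc with stochastic fairness queuing
    tc qdisc add dev eth0 root sfq perturb 10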

I freely confess to not grokking how iSCSI is deployed. My understanding
is that TCP is used to negotiate a virtual connection between two endpoints,
and that there are usually very few endpoints - often just one.

1) TCP grabs all the bandwidth it can. With no packet loss, it will keep
ramping up and eating more bandwidth, as rapidly as it can, until it
eventually does see packet loss.

Q) John indicated he didn't want any packet loss, so for starters I
questioned my assumption that he was using TCP; secondly, it was late and
I was feeling snarky. I honestly should stay in the 0.2 ms to 10000 ms
range I'm comfortable in.

2) Once one stream so completely dominates a connection, it can starve
other streams' attempts to ramp up. You can watch this happen from
userspace; see the sketch below.
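
Per-connection TCP internals (cwnd, rtt) are visible via iproute2's ss; a
rough sketch, with 192.168.1.2 standing in for your iSCSI target:

    # poll cwnd/rtt for flows to the target every half-second
    watch -n 0.5 'ss -tin dst 192.168.1.2'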

> Your workload is probably under _one_ ms of latency, with a dedicated
> link addressing a few targets.

That was my second question, basically, how many links are in use?

More than one introduces a head-of-line blocking problem between flows.
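
Either way, it's worth checking what each link's qdisc is actually doing;
tc's stats show backlog and drops (eth0 again a placeholder):

    # per-qdisc statistics: sent bytes/packets, drops, requeues, backlog
    tc -s qdisc show dev eth0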

> If you have to use a Qdisc (and expensive packet classification), then
> something is wrong in your iSCSI network connectivity :)
>
> Please note that with BQL, the NIC TX ring size doesn’t matter, and you
> could get "Virtual device ethX asks to queue packet!" warnings in your
> message log.

So his txqueuelen of 4000 is 'about right', even without BQL?
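
For reference, txqueuelen can be inspected and changed from userspace; a
minimal sketch with iproute2, eth0 being a placeholder:

    # show the device; the qlen field is the tx queue length
    ip link show dev eth0
    # set the tx queue length while experimenting
    ip link set dev eth0 txqueuelen 4000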

>
> So before removing Qdisc, you also want to make sure BQL is disabled for
> your NIC device/queues.
> (BQL is scheduled for linux-3.3)
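
Once BQL lands, its per-queue knobs are expected to live in sysfs; a
speculative sketch, assuming eth0, tx queue 0, and an illustrative value:

    # inspect the current dynamic byte limit for tx queue 0
    cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit
    # effectively disable BQL by pinning the minimum limit very high
    echo 1000000 > /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit_min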


-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
FR Tel: 0638645374
http://www.bufferbloat.net
