Date:	Sun, 6 Apr 2008 18:43:02 -0400
From:	"Sangtae Ha" <sangtae.ha@...il.com>
To:	"Wenji Wu" <wenji@...l.gov>
Cc:	"Ilpo Järvinen" <ilpo.jarvinen@...sinki.fi>,
	"John Heffner" <johnwheffner@...il.com>,
	Netdev <netdev@...r.kernel.org>
Subject: Re: A Linux TCP SACK Question

When our 40 students ran the same lab experiment comparing TCP-SACK and
TCP-NewReno, they came up with similar results. Their setup was identical
to yours (one Linux sender, one Linux receiver, and one netem machine in
between). When we introduced some loss with netem, TCP-SACK showed
slightly better performance, while the two had similar throughput in most
cases.
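For anyone reproducing this, a setup along these lines should be close to
what is described above; the interface name, loss rate, and delay are
placeholders rather than the exact values we used:

    # On the netem box: add random loss on the forwarding interface
    # ("eth1", 0.1% loss, and 10ms delay are placeholders).
    tc qdisc add dev eth1 root netem loss 0.1% delay 10ms

    # On the sender: toggle SACK to switch between the two runs.
    sysctl -w net.ipv4.tcp_sack=1   # SACK enabled
    sysctl -w net.ipv4.tcp_sack=0   # SACK disabled (NewReno-style recovery)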

I don't think reordering happened frequently in your directly connected
network scenario. Please post your tcpdump file so we can clear up any
doubts.
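A sender-side capture along these lines would be enough (the interface
name, snap length, and port are placeholders; port 5001 assumes the iperf
default):

    tcpdump -i eth0 -s 128 -w sack-test.pcap tcp port 5001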

Sangtae

On 4/6/08, Wenji Wu <wenji@...l.gov> wrote:
>
>
> > Can you run the attached script and then run your test again?
> > I think the problem might be that your dual cores are balancing the
> > interrupts on your testing NIC.
> > Since we do a lot of work for SACK, cache misses and the like might
> > affect your performance.
> >
> > In the default setting, I disabled TCP segmentation offload and set
> > the SMP affinity to CPU 0.
> > Please change "INF" to your interface name and let us know the results.
>
> I bound both the network interrupts and iperf to CPU0, and CPU0 is idle most of the time. The results are still the same.
>
> At this throughput level, the SACK processing won't take much CPU.
>
> It is not the interrupt/CPU affinity that causes the difference.
>
> I believe it is the ACK reordering that confuses the sender, which leads the sender to unnecessarily reduce CWND or REORDERING_THRESHOLD.
>
> wenji
>
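For readers without the attachment, a rough sketch of the kind of script
described in the quoted message (disable TSO, pin the NIC interrupt and
the sender to CPU 0) might look like this; it is not the original script,
and INF, the IRQ lookup, and the iperf invocation are placeholders:

    #!/bin/sh
    # Rough sketch only, not the attached script. Adjust INF to your test NIC.
    INF=eth0
    IRQ=$(grep "$INF" /proc/interrupts | awk -F: '{print $1}' | tr -d ' ')

    ethtool -K "$INF" tso off              # disable TCP segmentation offload
    echo 1 > /proc/irq/$IRQ/smp_affinity   # pin the NIC interrupt to CPU 0 (mask 0x1)
    taskset -c 0 iperf -c <receiver>       # run the iperf sender on CPU 0 as well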


-- 
----------------------------------------------------------------
 Sangtae Ha, http://www4.ncsu.edu/~sha2
 PhD. Student,
 Department of Computer Science,
 North Carolina State University, USA
----------------------------------------------------------------