Date:	Fri, 31 Oct 2008 11:16:21 +0200 (EET)
From:	"Ilpo Järvinen" <>
To:	David Miller <>
Cc:	Netdev <>,
	Herbert Xu <>
Subject: Re: tbench wrt. loopback TSO

On Fri, 31 Oct 2008, David Miller wrote:

> From: "Ilpo Järvinen" <>
> Date: Tue, 28 Oct 2008 00:17:00 +0200 (EET)
> > > Another modulo sits in tcp_mss_split_point().
> > 
> > I know it's there but it should occur not that often.
> It happens every sendmsg() call, and all tbench does is small send,
> small recv, repeat.

This is not true for the mss_split_point case I was speaking of; I even 
explained this in the part following that first sentence, which you chose 
to snip away:

If you have pcount == 1 (len <= mss implies that), that won't even execute, 
and the rest of the cases involve handling rwin-limited and Nagle. I suppose 
we could make Nagle work without splitting the skb to sub-mss (needs some 
auditing to find out whether something other than the snd_sml setting 
assumes skb->len < mss; the Nagle check probably does as well, but I don't 
remember w/o 

To reiterate: with a small send you have pcount == 1, and none of the 
mss_split_point code executes!

The tcp_current_mss code is a different beast (if you meant that, I 
didn't :-)); yes, it executes every so often, but I was under the 
impression that we had already agreed on it. :-)

> So with TSO on, a small increase in performance is no surprise, as
> we will save some cycles often enough.

Also, tcp_send_fin seems to call tcp_current_mss(sk, 1), so it's at least 
two modulos per transaction...

