Message-ID: <Pine.LNX.4.64.0810311106140.7072@wrl-59.cs.helsinki.fi>
Date: Fri, 31 Oct 2008 11:16:21 +0200 (EET)
From: "Ilpo Järvinen" <ilpo.jarvinen@...sinki.fi>
To: David Miller <davem@...emloft.net>
cc: zbr@...emap.net, Netdev <netdev@...r.kernel.org>, efault@....de,
mingo@...e.hu, a.p.zijlstra@...llo.nl,
Herbert Xu <herbert@...dor.apana.org.au>
Subject: Re: tbench wrt. loopback TSO
On Fri, 31 Oct 2008, David Miller wrote:
> From: "Ilpo Järvinen" <ilpo.jarvinen@...sinki.fi>
> Date: Tue, 28 Oct 2008 00:17:00 +0200 (EET)
>
> > > Another modulo sits in tcp_mss_split_point().
> >
> > I know it's there but it should occur not that often.
>
> It happens every sendmsg() call, and all tbench does is small send,
> small recv, repeat.
This is not true for the mss_split_point case, which is the one I was
speaking of; I even explained this in the part following that first
sentence, which you chose to snip away:
If you have pcount == 1 (len <= mss implies that), it won't even execute,
and the rest of the cases involve handling rwin-limited and Nagle. I
suppose we could make Nagle work without splitting the skb to sub-mss
(needs some auditing to find out whether anything other than the snd_sml
setting assumes skb->len < mss; the Nagle check itself probably does as
well, see the sketch below).
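For context, here is a minimal standalone model of the check in question
(patterned after the tcp_nagle_check() of this era, but simplified and
with invented parameter names, so a sketch rather than the kernel code):

#include <stdbool.h>
#include <stdio.h>

/* Toy model of the Nagle test: note the explicit skb_len < mss
 * assumption that a no-split Nagle rework would have to audit. */
static bool nagle_defers(unsigned int skb_len, unsigned int mss,
			 bool corked, bool nonagle,
			 unsigned int packets_out, bool minshall_ok)
{
	return skb_len < mss &&			/* the sub-mss assumption */
	       (corked ||
		(!nonagle && packets_out && minshall_ok));
}

int main(void)
{
	/* A 100-byte write with data already in flight gets deferred. */
	printf("%d\n", nagle_defers(100, 1448, false, false, 1, true));
	return 0;
}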
To reiterate, with a small send you have pcount == 1, and that won't
execute any of the mss_split_point code!
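To illustrate, here's a userspace toy (not the kernel function; the
harness and numbers are made up) showing that the modulo only runs once
an skb spans more than one MSS:

#include <stdio.h>

/* Models the tcp_write_xmit() guard: the split point (and its modulo)
 * is only computed when the skb covers more than one MSS. */
static unsigned int split_point_model(unsigned int len,
				      unsigned int window,
				      unsigned int mss)
{
	unsigned int needed = len < window ? len : window;

	return needed - needed % mss;	/* the modulo in question */
}

int main(void)
{
	unsigned int mss = 1448, window = 65535;
	unsigned int lens[] = { 100, 1448, 10000 };	/* tbench-ish vs. bulk */

	for (int i = 0; i < 3; i++) {
		unsigned int pcount = (lens[i] + mss - 1) / mss;

		if (pcount > 1)
			printf("len %u: split at %u\n", lens[i],
			       split_point_model(lens[i], window, mss));
		else
			printf("len %u: pcount == 1, split code skipped\n",
			       lens[i]);
	}
	return 0;
}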
The tcp_current_mss code is a different beast (if you meant that, I
didn't :-)); yes, it executes every so often, but I was under the
impression that we had already agreed on that. :-)
> So with TSO on, a small increase in performance is no surprise, as
> we will save some cycles often enough.
Also, tcp_send_fin seems to do tcp_current_mss(sk, 1), so that's at
least two modulos per transaction...
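For the curious, the modulo both call sites pay is the TSO size-goal
rounding; a standalone approximation (constants and helper name invented
for illustration, not a verbatim excerpt):

#include <stdio.h>

/* Rounds the TSO transmit goal down to a whole number of segments,
 * roughly what tcp_current_mss() does on every call with TSO on. */
static unsigned int xmit_size_goal(unsigned int gso_max,
				   unsigned int hdr_len,
				   unsigned int mss_now)
{
	unsigned int goal = gso_max - hdr_len;

	return goal - goal % mss_now;	/* one modulo per call */
}

int main(void)
{
	/* Once in the sendmsg path, and again in tcp_send_fin at close. */
	printf("goal = %u\n", xmit_size_goal(65536, 52, 1448));
	return 0;
}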
--
i.