Message-ID: <4755C3E9.4090609@psc.edu>
Date: Tue, 04 Dec 2007 16:17:29 -0500
From: John Heffner <jheffner@....edu>
To: Ilpo Järvinen <ilpo.jarvinen@...sinki.fi>
CC: David Miller <davem@...emloft.net>,
Netdev <netdev@...r.kernel.org>, Matt Mathis <mathis@....edu>
Subject: Re: [PATCH net-2.6 0/3]: Three TCP fixes
Ilpo Järvinen wrote:
> On Tue, 4 Dec 2007, John Heffner wrote:
>
>> Ilpo Järvinen wrote:
>>> ...I have yet to figure out why tcp_cwnd_down uses snd_ssthresh/2
>>> as the lower bound even though ssthresh was already halved, so
>>> snd_ssthresh should suffice.
>> I remember this coming up at least once before, so it's probably worth a
>> comment in the code. Rate-halving attempts to actually reduce cwnd to half
>> the delivered window. Here, cwnd/4 (ssthresh/2) is a lower bound on how far
>> rate-halving can reduce cwnd. See the "Bounding Parameters" section of
>> <http://www.psc.edu/networking/papers/FACKnotes/current/>.
>
> Thanks for the info! Sadly enough, it makes NewReno recovery quite
> inefficient when there are enough losses on a high-BDP link (in my
> case 384k/200ms, with a BDP-sized buffer). There might be yet another
> bug in it as well (it is still a bit unclear how the TCP variables
> behaved during my scenario, and I'll investigate further), but the
> reduction in the transfer rate lasts much longer than the "short
> moment" used as motivation in those FACK notes. In fact, if I just
> use an RFC 2581-like setting without rate-halving (and accept the
> initial "pause" in sending), the ACK clock for sending out new data
> works very nicely, beating rate-halving fair and square. For
> SACK/FACK it works much better because recovery is finished much
> earlier and slow start recovers cwnd quickly.
I believe this is exactly the reason why Matt (CC'd) and Jamshid
abandoned this line of work in the late 90's. In my opinion, it's
probably not such a bad idea to use cwnd/2 as the bound. In some
situations, the current rate-halving code will work better, but as you
point out, in others the cwnd is lowered too much.
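For concreteness, here is a rough sketch of where that floor bites.
This is illustrative only, not the actual tcp_cwnd_down() from
net/ipv4/tcp_input.c; the helper name and control flow are made up
for the example:

/*
 * Illustrative sketch, not the real tcp_cwnd_down().  During
 * recovery, rate-halving decrements cwnd by one segment for every
 * second ACK, but never below a floor.  Since ssthresh was already
 * set to cwnd/2 on entering recovery, a floor of ssthresh/2 lets
 * cwnd drop to a quarter of its old value; a floor of ssthresh
 * (as suggested above) would stop the reduction at half.
 */
static unsigned int cwnd_down_on_ack(unsigned int cwnd,
				     unsigned int ssthresh,
				     unsigned int *ack_cnt)
{
	unsigned int floor = ssthresh / 2;	/* current bound: cwnd/4 */
	/* unsigned int floor = ssthresh; */	/* cwnd/2 bound instead */

	if (++(*ack_cnt) >= 2) {		/* one decrement per two ACKs */
		*ack_cnt = 0;
		if (cwnd > floor)
			cwnd--;
	}
	return cwnd;
}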
> ...Mind if I ask another similar one: any idea why prior_ssthresh is
> set smaller than what cwnd used to be (3/4 of it; see
> tcp_current_ssthresh)?
Not sure on that one. I'm not aware of any publications this is based
on. Maybe Alexey knows?
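For reference, tcp_current_ssthresh() in include/net/tcp.h does
roughly the following. This is paraphrased from memory, so treat it
as a sketch rather than the exact code:

/*
 * Paraphrased sketch of tcp_current_ssthresh(); check
 * include/net/tcp.h for the real thing.  Outside of CWR/Recovery,
 * prior_ssthresh is taken as max(snd_ssthresh, 3/4 * snd_cwnd)
 * rather than snd_cwnd itself -- that 3/4 factor is the one in
 * question.
 */
static unsigned int current_ssthresh(unsigned int snd_cwnd,
				     unsigned int snd_ssthresh,
				     int in_cwr_or_recovery)
{
	/* (cwnd >> 1) + (cwnd >> 2) == 3/4 of cwnd */
	unsigned int three_quarters = (snd_cwnd >> 1) + (snd_cwnd >> 2);

	if (in_cwr_or_recovery)
		return snd_ssthresh;
	return snd_ssthresh > three_quarters ? snd_ssthresh : three_quarters;
}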
-John