Message-ID: <4E0B3DA1.9060200@hp.com>
Date: Wed, 29 Jun 2011 10:58:41 -0400
From: Vladislav Yasevich <vladislav.yasevich@...com>
To: netdev@...r.kernel.org, davem@...emloft.net,
Wei Yongjun <yjwei@...fujitsu.com>,
Sridhar Samudrala <sri@...ibm.com>, linux-sctp@...r.kernel.org
Subject: Re: [PATCH] sctp: Enforce maximum retransmissions during shutdown
On 06/29/2011 10:36 AM, Thomas Graf wrote:
> On Wed, Jun 29, 2011 at 10:20:01AM -0400, Vladislav Yasevich wrote:
>> I think in this particular case, the receiver has to terminate, not the sender.
>> Look at how tcp_close() handles this.
>>
>> As long as receiver is available, the sender should continue to try
>> sending data.
>
> The receiver does not know that the sender wishes to shut down the
> association. No shutdown request has been sent yet.
>
> I don't think we should be relying on the behaviour of the sender for
> the receiver to be able to ever free its resources. We will be
> retransmitting data and keeping an association alive _forever_ for no
> purpose.
>
> If there is no reliable way of _ever_ doing a graceful shutdown then
> the only alternative is to just ABORT in the first place.
>
> The difference in TCP is that we can close the connection half-way,
> something we can't do in sctp.
>
But what you are proposing violates the protocol. Zero-window probes do
not count against max retransmissions, even when you are in shutdown pending
state.
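To make that concrete, here is a minimal, self-contained sketch (plain C, not the real
net/sctp code; the struct and field names are invented for illustration) of how a T3-rtx
expiry has to treat a zero-window probe to stay within the spec:

    #include <stdbool.h>

    /* Stand-in for the association state; the real struct
     * sctp_association is far richer. */
    struct assoc {
            unsigned int overall_error_count;
            unsigned int max_retrans;   /* Association.Max.Retrans */
            unsigned int peer_rwnd;     /* peer's advertised rwnd  */
    };

    /* Called on a T3-rtx expiry.  Returns true only if the association
     * should be torn down for exceeding the error threshold. */
    static bool strike_on_t3_rtx(struct assoc *a)
    {
            /* The peer advertised a zero window, so this retransmission
             * is only a probe for a window update.  It must not count
             * against max retransmissions, even in SHUTDOWN-PENDING. */
            if (a->peer_rwnd == 0)
                    return false;

            return ++a->overall_error_count > a->max_retrans;
    }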
You'll come out of this in one of two ways:
1) The receiver wakes up and processes the data. This will allow for a graceful termination.
2) The receiver dies. Since the receive window is full, there is data queued, and this will
trigger an ABORT to be sent to the sender (see the sketch below).
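For case 2, an equally rough sketch of the receiver-side decision, assuming the behaviour
described above; this is illustrative only and not the real sctp_close():

    #include <stdio.h>
    #include <stdbool.h>

    /* When the receiving socket goes away with data still unread in its
     * queue, a graceful SHUTDOWN is not possible, so an ABORT goes out
     * instead; that is what finally releases the sender in case 2. */
    static void close_receiver(bool has_unread_data)
    {
            if (has_unread_data)
                    printf("send ABORT\n");
            else
                    printf("send SHUTDOWN\n");
    }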
What your patch is doing is taking a perfectly valid scenario and putting a time limit
on it, in violation of the spec.
-vlad