Date:	Fri, 24 Jun 2011 11:53:16 -0400
From:	Thomas Graf <tgraf@...radead.org>
To:	Vladislav Yasevich <vladislav.yasevich@...com>
Cc:	Sridhar Samudrala <sri@...ibm.com>, linux-sctp@...r.kernel.org,
	netdev@...r.kernel.org
Subject: Re: [PATCH] sctp: Reducing rwnd by sizeof(struct sk_buff) for each
 CHUNK is too aggressive

On Fri, Jun 24, 2011 at 11:21:11AM -0400, Vladislav Yasevich wrote:
> First, let me state that I misunderstood what the patch is attempting to do.
> Looking again, I understand this a little better, but still have reservations.

This explains a lot :)

> If we treat the window as strictly available data, then we may end up sending a lot more traffic
> than the window can take, causing us to enter zero-window probing and potential retransmission
> issues that will trigger congestion control.
> We'd like to avoid that, so we put some overhead into our computations.  It may not be ideal,
> since we do this on a per-chunk basis.  It could probably be done on a per-packet basis instead.
> This way, we essentially over-estimate but under-subscribe our current view of the peer's
> window.  So in one shot, we are not going to over-fill it, and we will get an updated view the
> next time a SACK arrives.

I will update my patch to include a per-packet overhead and also fix the retransmission
rwnd reopening to do the same.
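
For illustration, here is a rough, self-contained sketch (made-up numbers and
helper names, not the actual net/sctp code) of why charging the sk_buff
overhead once per packet rather than once per chunk keeps the sender's
estimate of the peer's rwnd much closer to the data actually put on the wire
when many small chunks are bundled:

/* Hypothetical illustration only -- not the actual kernel code.
 * Compare the sender's rwnd estimate when the sk_buff overhead is
 * charged per DATA chunk versus once per packet.
 */
#include <stdio.h>

#define SKB_OVERHEAD	256	/* stand-in for sizeof(struct sk_buff) */

/* Decrease the window estimate, clamping at zero. */
static unsigned int charge(unsigned int rwnd, unsigned int bytes)
{
	return bytes < rwnd ? rwnd - bytes : 0;
}

int main(void)
{
	unsigned int rwnd = 65536;		/* peer's advertised window  */
	unsigned int chunk_len = 100;		/* small bundled DATA chunks */
	unsigned int chunks_per_packet = 10;
	unsigned int packets = 20;
	unsigned int data = packets * chunks_per_packet * chunk_len;

	/* Overhead charged for every chunk. */
	unsigned int per_chunk = charge(rwnd,
		packets * chunks_per_packet * (chunk_len + SKB_OVERHEAD));

	/* Overhead charged once per packet. */
	unsigned int per_packet = charge(rwnd,
		packets * (chunks_per_packet * chunk_len + SKB_OVERHEAD));

	printf("data on the wire:                   %u bytes\n", data);
	printf("rwnd estimate, per-chunk overhead:  %u\n", per_chunk);
	printf("rwnd estimate, per-packet overhead: %u\n", per_packet);
	return 0;
}

With these made-up numbers, per-chunk accounting drives the estimate to zero
after only 20000 bytes of data, so the sender would start zero-window probing
long before the real window is full, while per-packet accounting still leaves
a comfortable margin.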