Message-ID: <063D6719AE5E284EB5DD2968C1650D6D0F71067C@AcuExch.aculab.com>
Date: Fri, 9 May 2014 13:30:43 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Jon Maloy' <jon.maloy@...csson.com>,
"davem@...emloft.net" <davem@...emloft.net>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Paul Gortmaker <paul.gortmaker@...driver.com>,
"erik.hugne@...csson.com" <erik.hugne@...csson.com>,
"ying.xue@...driver.com" <ying.xue@...driver.com>,
"maloy@...jonn.com" <maloy@...jonn.com>,
"tipc-discussion@...ts.sourceforge.net"
<tipc-discussion@...ts.sourceforge.net>
Subject: RE: [PATCH net-next 1/8] tipc: decrease connection flow control
window
From: Jon Maloy
> Memory overhead when allocating big buffers for data transfer may
> be quite significant. E.g., truesize of a 64 KB buffer turns out
> to be 132 KB, 2 x the requested size.
If the data arrives in skbs allocated by the ethernet driver, then
the cumulative truesize probably depends heavily on the driver.
In some cases the value could be much higher - especially if the
drivers are fixed to report a correct truesize.
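For a linear 64 KB skb the 2x comes mostly from kmalloc's power-of-two
rounding: the payload plus skb_shared_info is kmalloc()ed, the request
is served from the 128 KB slab, and truesize charges the whole slab plus
the sk_buff header. A rough userspace sketch of that arithmetic (the
sizeof() constants below are assumptions, not the kernel's exact values):

/*
 * Userspace sketch of the truesize arithmetic for a 64 KB linear skb.
 * Approximate only; the real code is __alloc_skb() in net/core/skbuff.c.
 * The sizeof() values are assumed, not the kernel's exact ones.
 */
#include <stdio.h>
#include <stddef.h>

#define SMP_CACHE_BYTES        64                      /* typical x86_64 */
#define SKB_DATA_ALIGN(x)      (((x) + SMP_CACHE_BYTES - 1) & ~(SMP_CACHE_BYTES - 1))
#define SIZEOF_SK_BUFF         232                     /* assumed */
#define SIZEOF_SKB_SHARED_INFO 320                     /* assumed */

/* kmalloc() serves the request from the next power-of-two slab */
static size_t kmalloc_rounded(size_t size)
{
	size_t slab = 32;

	while (slab < size)
		slab <<= 1;
	return slab;
}

int main(void)
{
	size_t requested = 64 * 1024;
	/* linear data area: payload plus the shared info at the end */
	size_t data = SKB_DATA_ALIGN(requested) +
		      SKB_DATA_ALIGN(SIZEOF_SKB_SHARED_INFO);
	size_t slab = kmalloc_rounded(data);            /* 65856 -> 131072 */
	/* truesize charges the whole slab plus the sk_buff header */
	size_t truesize = slab + SKB_DATA_ALIGN(SIZEOF_SK_BUFF);

	printf("requested %zu, slab %zu, truesize ~%zu (%.1fx)\n",
	       requested, slab, truesize, (double)truesize / requested);
	return 0;
}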
> This invalidates the "worst case" calculation we have been
> using to determine the default socket receive buffer limit,
> which is based on the assumption that 1024x64KB = 67MB buffers
> may be queued up on a socket.
>
> Since TIPC connections cannot survive hitting the buffer limit,
> we have to compensate for this overhead.
If the connection can't survive this, then you probably have to
accept the received data anyway.
However, I'd have thought you should be able to treat it as equivalent
to a lost ethernet packet.
Sounds a bit like a badly designed protocol to me...
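For what it's worth, the limit being hit is the usual pattern of
charging skb->truesize against the socket's receive budget, so a budget
sized for 1024 x 64 KB of payload runs out roughly twice as fast. A toy
sketch of that effect (the struct and function names are made up for
illustration, not the actual TIPC receive path):

/*
 * Incoming buffers are charged at skb->truesize, not at their payload
 * size, so a limit sized as 1024 x 64 KB of payload fills up early.
 * Illustrative C only, not the TIPC code.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_skb {
	unsigned int len;        /* payload bytes */
	unsigned int truesize;   /* real memory charged, incl. overhead */
};

struct fake_sock {
	unsigned int rmem_alloc; /* memory currently queued */
	unsigned int rcvbuf;     /* limit, e.g. 1024 * 64 KB */
};

/* returns false when the message would overrun the limit */
static bool rcv_would_fit(const struct fake_sock *sk,
			  const struct fake_skb *skb)
{
	return sk->rmem_alloc + skb->truesize <= sk->rcvbuf;
}

int main(void)
{
	struct fake_sock sk = { .rmem_alloc = 0, .rcvbuf = 1024 * 64 * 1024 };
	struct fake_skb skb = { .len = 64 * 1024, .truesize = 132 * 1024 };
	unsigned int queued = 0;

	while (rcv_would_fit(&sk, &skb)) {
		sk.rmem_alloc += skb.truesize;
		queued++;
	}
	/* ~496 messages fit, not the 1024 the payload-based sizing assumed */
	printf("%u x 64 KB messages queued before hitting the limit\n", queued);
	return 0;
}

So either the limit has to be sized in truesize terms, or the protocol
has to tolerate the queue filling early - hence the comment above about
treating it like a lost packet.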
David