Message-Id: <20150504.162134.1761886048618680717.davem@davemloft.net>
Date: Mon, 04 May 2015 16:21:34 -0400 (EDT)
From: David Miller <davem@...emloft.net>
To: sowmini.varadhan@...cle.com
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
chien.yen@...cle.com, rds-devel@....oracle.com,
ajaykumar.hotchandani@...cle.com
Subject: Re: [PATCH 1/2] net/rds: RDS-TCP: Always create a new rds_sock for
an incoming connection.
From: Sowmini Varadhan <sowmini.varadhan@...cle.com>
Date: Mon, 4 May 2015 15:29:08 -0400

> On (05/04/15 14:47), David Miller wrote:
>>
>> I think adding 64K of data to this module just to solve this rare
>> issue is excessive.
>
> I'd chosen that number mostly as a heuristic based on rds_conn_hash[].
> Any suggestions for what's reasonable? 8K? Less?
> (BTW, I think that should be 32K, or am I mis-counting?)

No table at all.

There has to be another way to notice this kind of situation. How,
for example, does NFS or any other sunrpc-using service handle this
case?
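
For reference, the number under discussion can be checked with a
back-of-the-envelope sketch. This assumes rds_conn_hash[] is an array
of 1 << 12 hlist_head buckets (as declared in net/rds/connection.c of
that era) and 8-byte pointers on a 64-bit kernel; under those
assumptions the table comes to 32 KB, matching the parenthetical
above. The standalone program below is illustrative only, not part of
the patch:

/*
 * Sketch of the hash table footprint being discussed.
 * Assumptions (not stated in this message): 1 << 12 buckets,
 * each bucket a single-pointer hlist_head.
 */
#include <stdio.h>

#define RDS_CONNECTION_HASH_BITS	12
#define RDS_CONNECTION_HASH_ENTRIES	(1 << RDS_CONNECTION_HASH_BITS)

int main(void)
{
	/* An hlist_head holds one pointer: 8 bytes on 64-bit. */
	unsigned long bucket_size = sizeof(void *);
	unsigned long table_size = RDS_CONNECTION_HASH_ENTRIES * bucket_size;

	printf("%d buckets x %lu bytes = %lu KB\n",
	       RDS_CONNECTION_HASH_ENTRIES, bucket_size, table_size / 1024);
	return 0;
}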