Message-ID: <538E2EFF.7000103@citrix.com>
Date: Tue, 3 Jun 2014 21:24:31 +0100
From: Zoltan Kiss <zoltan.kiss@...rix.com>
To: David Laight <David.Laight@...LAB.COM>,
"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
"ian.campbell@...rix.com" <ian.campbell@...rix.com>,
"wei.liu2@...rix.com" <wei.liu2@...rix.com>,
"paul.durrant@...rix.com" <paul.durrant@...rix.com>,
"linux@...elenboom.it" <linux@...elenboom.it>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"david.vrabel@...rix.com" <david.vrabel@...rix.com>,
"davem@...emloft.net" <davem@...emloft.net>
Subject: Re: [PATCH net] xen-netback: Fix slot estimation
On 03/06/14 14:52, David Laight wrote:
> From: netdev-owner@...r.kernel.org
>> @@ -615,9 +608,27 @@ static void xenvif_rx_action(struct xenvif *vif)
>>
>> /* If the skb may not fit then bail out now */
>> if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
>> + /* If the skb needs more than MAX_SKB_FRAGS slots, it
>> + * can happen that the frontend never gives us enough.
>> + * To avoid spinning on that packet, first we put it back
>> + * to the top of the queue, but if the next try fails,
>> + * we drop it.
>> + */
>> + if (max_slots_needed > MAX_SKB_FRAGS &&
>> + vif->rx_last_skb_slots == MAX_SKB_FRAGS) {
>> + kfree_skb(skb);
>> + vif->rx_last_skb_slots = 0;
>> + continue;
>> + }
>
> A silent discard here doesn't seem right at all.
> While it stops the kernel crashing or the entire interface locking
> up, it is likely to leave one connection 'stuck', since a TCP
> retransmission is likely to include the same fragments.
> From a user's point of view this is almost as bad.
Yes, we have been aware of this problem for a while. However, I have an
idea for solving it in a way that doesn't lose performance and still
lets these packets through. See my patch called "Fix handling of skbs
requiring too many slots"
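
For illustration only, here is a rough sketch (not the actual code from
that patch) of one conceivable alternative to the silent drop: before
giving up on the skb, try to linearize it so that it needs fewer slots.
skb_linearize() and DIV_ROUND_UP() are standard kernel helpers; the
slot re-estimate below is simplified and ignores the extra GSO slot.

	/* Sketch only: instead of dropping on the second failed
	 * attempt, try to pull all paged fragments into the linear
	 * area so the skb occupies fewer ring slots. skb_linearize()
	 * can fail with -ENOMEM, in which case we fall back to the
	 * drop behaviour from the patch above.
	 */
	if (max_slots_needed > MAX_SKB_FRAGS &&
	    vif->rx_last_skb_slots == MAX_SKB_FRAGS) {
		if (skb_linearize(skb) == 0) {
			/* All data is linear now; re-estimate slots. */
			max_slots_needed = DIV_ROUND_UP(skb_headlen(skb),
							PAGE_SIZE);
		} else {
			kfree_skb(skb);
			vif->rx_last_skb_slots = 0;
			continue;
		}
	}

The obvious catch is that linearizing a large GSO skb needs a
high-order allocation, which is likely to fail exactly when the guest
is under load, so treat the above strictly as a sketch.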
Zoli