Message-ID: <538DD5F1.6060004@citrix.com>
Date: Tue, 3 Jun 2014 15:04:33 +0100
From: Zoltan Kiss <zoltan.kiss@...rix.com>
To: Paul Durrant <Paul.Durrant@...rix.com>,
"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
Ian Campbell <Ian.Campbell@...rix.com>,
"Wei Liu" <wei.liu2@...rix.com>,
"linux@...elenboom.it" <linux@...elenboom.it>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
David Vrabel <david.vrabel@...rix.com>,
"davem@...emloft.net" <davem@...emloft.net>
Subject: Re: [PATCH net] xen-netback: Fix slot estimation
On 03/06/14 14:37, Paul Durrant wrote:
>> -----Original Message-----
>> From: Zoltan Kiss
>> Sent: 03 June 2014 14:32
>> To: xen-devel@...ts.xenproject.org; Ian Campbell; Wei Liu; Paul Durrant;
>> linux@...elenboom.it
>> Cc: netdev@...r.kernel.org; David Vrabel; davem@...emloft.net; Zoltan
>> Kiss
>> Subject: [PATCH net] xen-netback: Fix slot estimation
>>
>> A recent commit (a02eb4 "xen-netback: worse-case estimate in
>> xenvif_rx_action is underestimating") capped the slot estimation at
>> MAX_SKB_FRAGS, but that triggers the next BUG_ON a few lines down,
>> as the packet consumes more slots than estimated.
>> This patch removes that cap; if the frontend doesn't provide enough
>> slots, the skb is put back at the top of the queue and
>> rx_last_skb_slots is capped. If the next try also fails, the packet
>> is dropped.
>> Capping rx_last_skb_slots is needed because if the frontend never
>> gives us enough slots, the ring gets stalled: RX processing only
>> resumes once that many slots are available.
>>
>> Signed-off-by: Zoltan Kiss <zoltan.kiss@...rix.com>
>> Cc: Paul Durrant <paul.durrant@...rix.com>
>> Cc: Wei Liu <wei.liu2@...rix.com>
>> Cc: Ian Campbell <ian.campbell@...rix.com>
>> Cc: David Vrabel <david.vrabel@...rix.com>
>>
>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>> index da85ffb..7164157 100644
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -600,13 +600,6 @@ static void xenvif_rx_action(struct xenvif *vif)
>>  						 PAGE_SIZE);
>>  	}
>>
>> -	/* To avoid the estimate becoming too pessimal for some
>> -	 * frontends that limit posted rx requests, cap the estimate
>> -	 * at MAX_SKB_FRAGS.
>> -	 */
>> -	if (max_slots_needed > MAX_SKB_FRAGS)
>> -		max_slots_needed = MAX_SKB_FRAGS;
>> -
>>  	/* We may need one more slot for GSO metadata */
>>  	if (skb_is_gso(skb) &&
>>  	    (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 ||
>> @@ -615,9 +608,27 @@ static void xenvif_rx_action(struct xenvif *vif)
>>
>>  	/* If the skb may not fit then bail out now */
>>  	if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
>> +		/* If the skb needs more than MAX_SKB_FRAGS slots, it
>> +		 * can happen that the frontend never gives us enough.
>> +		 * To avoid spinning on that packet, we first put it
>> +		 * back at the top of the queue, but if the next try
>> +		 * fails, we drop it.
>> +		 */
>> +		if (max_slots_needed > MAX_SKB_FRAGS &&
>> +		    vif->rx_last_skb_slots == MAX_SKB_FRAGS) {
>
> Isn't it sufficient to say:
>
> if (vif->rx_last_skb_slots != 0)
>
> here? We should not ordinarily wake before the requisite number of slots is available.
Yep, that would be enough.
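
Something like this, then (untested sketch; note the cap on
rx_last_skb_slots below is still needed so the wakeup condition can
fire at all and we actually reach the drop on the retry):

	if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
		/* A nonzero rx_last_skb_slots means this skb was
		 * already put back once and the frontend still
		 * hasn't posted enough slots, so drop it rather
		 * than spin on it.
		 */
		if (vif->rx_last_skb_slots != 0) {
			kfree_skb(skb);
			vif->rx_last_skb_slots = 0;
			continue;
		}
		skb_queue_head(&vif->rx_queue, skb);
		need_to_notify = true;
		vif->rx_last_skb_slots =
			max_slots_needed > MAX_SKB_FRAGS ?
			MAX_SKB_FRAGS : max_slots_needed;
		break;
	} else
		vif->rx_last_skb_slots = 0;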
>
> Paul
>
>> +			kfree_skb(skb);
>> +			vif->rx_last_skb_slots = 0;
>> +			continue;
>> +		}
>>  		skb_queue_head(&vif->rx_queue, skb);
>>  		need_to_notify = true;
>> -		vif->rx_last_skb_slots = max_slots_needed;
>> +		/* Cap this, otherwise if the guest never gives us
>> +		 * enough slots, rx_work_todo will spin.
>> +		 */
>> +		vif->rx_last_skb_slots =
>> +			max_slots_needed > MAX_SKB_FRAGS ?
>> +			MAX_SKB_FRAGS :
>> +			max_slots_needed;
>>  		break;
>>  	} else
>>  		vif->rx_last_skb_slots = 0;
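
(For reference, the stall discussed above comes from rx_work_todo:
in this era's netback.c it looks roughly like the sketch below, so RX
processing only resumes once rx_last_skb_slots slots are free, and an
uncapped value the frontend never posts would block the queue
indefinitely.)

	static inline int rx_work_todo(struct xenvif *vif)
	{
		return !skb_queue_empty(&vif->rx_queue) &&
		       xenvif_rx_ring_slots_available(vif,
						      vif->rx_last_skb_slots);
	}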