Message-ID: <538F4271.7080505@oracle.com>
Date: Wed, 04 Jun 2014 11:59:45 -0400
From: annie li <annie.li@...cle.com>
To: Zoltan Kiss <zoltan.kiss@...rix.com>
CC: wei.liu2@...rix.com, ian.campbell@...rix.com,
netdev@...r.kernel.org, linux@...elenboom.it,
paul.durrant@...rix.com, david.vrabel@...rix.com,
xen-devel@...ts.xenproject.org, davem@...emloft.net
Subject: Re: [Xen-devel] [PATCH net] xen-netback: Fix handling of skbs requiring
too many slots
On 2014/6/4 11:42, Zoltan Kiss wrote:
> On 04/06/14 16:09, annie li wrote:
>>
>> On 2014/6/3 16:30, Zoltan Kiss wrote:
>>> A recent commit (a02eb4 "xen-netback: worse-case estimate in
>>> xenvif_rx_action is underestimating") capped the slot estimation to
>>> MAX_SKB_FRAGS, but that triggers the next BUG_ON a few lines down,
>>> as the packet consumes more slots than estimated.
>>> This patch introduces full_coalesce on the skb callback buffer,
>>> which is used in start_new_rx_buffer() to decide whether netback
>>> needs to coalesce more aggressively. By doing that, no packet
>>> should need more than XEN_NETIF_MAX_TX_SIZE / PAGE_SIZE data slots,
>>
>> (XEN_NETIF_MAX_TX_SIZE+1) / PAGE_SIZE here?
>
> Are you thinking of the GSO slot? That's why I wrote "data slot",
> although that's probably not clear terminology.
What I mean is: XEN_NETIF_MAX_TX_SIZE is 0xFFFF, and
XEN_NETIF_MAX_TX_SIZE / PAGE_SIZE turns out to be 15 slots when
PAGE_SIZE is 4096. Were you trying to use XEN_NETIF_MAX_TX_SIZE as the
maximum packet size, i.e. 64k?
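
For reference, a minimal userspace sketch of that arithmetic (the
constants are hard-coded from the values quoted in this thread, not
taken from the netback headers):

/* Illustration only: shows why the truncating division gives 15 data
 * slots while a full 64KiB packet would need 16.
 */
#include <stdio.h>

#define PAGE_SIZE              4096u
#define XEN_NETIF_MAX_TX_SIZE  0xFFFFu   /* 65535 */

int main(void)
{
        /* 65535 / 4096 truncates to 15 data slots. */
        printf("XEN_NETIF_MAX_TX_SIZE / PAGE_SIZE       = %u\n",
               XEN_NETIF_MAX_TX_SIZE / PAGE_SIZE);

        /* A full 64KiB (65536-byte) packet needs 16 data slots. */
        printf("(XEN_NETIF_MAX_TX_SIZE + 1) / PAGE_SIZE = %u\n",
               (XEN_NETIF_MAX_TX_SIZE + 1) / PAGE_SIZE);

        return 0;
}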
> I'll add then that this excludes the GSO slot, as it doesn't carry
> data directly and is therefore irrelevant from this point of view.
Correct. :-)
Thanks
Annie
>
> Zoli
>