Message-ID: <54858753.1070801@citrix.com>
Date: Mon, 8 Dec 2014 11:11:15 +0000
From: David Vrabel <david.vrabel@...rix.com>
To: Luis Henriques <luis.henriques@...onical.com>,
Stefan Bader <stefan.bader@...onical.com>
CC: Wei Liu <wei.liu2@...rix.com>,
Ian Campbell <Ian.Campbell@...rix.com>,
<netdev@...r.kernel.org>, Kamal Mostafa <kamal@...onical.com>,
<linux-kernel@...r.kernel.org>,
Paul Durrant <paul.durrant@...rix.com>,
David Vrabel <david.vrabel@...rix.com>,
Zoltan Kiss <zoltan.kiss@...rix.com>,
<xen-devel@...ts.xenproject.org>,
Boris Ostrovsky <boris.ostrovsky@...cle.com>
Subject: Re: [Xen-devel] [PATCH] xen-netfront: Fix handling packets on compound
pages with skb_linearize
On 08/12/14 10:19, Luis Henriques wrote:
> On Mon, Dec 01, 2014 at 09:55:24AM +0100, Stefan Bader wrote:
>> On 11.08.2014 19:32, Zoltan Kiss wrote:
>>> There is a long known problem with the netfront/netback interface: if the guest
>>> tries to send a packet which constitutes more than MAX_SKB_FRAGS + 1 ring slots,
>>> it gets dropped. The reason is that netback maps these slots to a frag in the
>>> frags array, which is limited by size. Having so many slots can occur since
>>> compound pages were introduced, as the ring protocol slice them up into
>>> individual (non-compound) page aligned slots. The theoretical worst case
>>> scenario looks like this (note: skbs are limited to 64 KB here):
>>> linear buffer: at most PAGE_SIZE - 17 * 2 bytes, overlapping page boundary,
>>> using 2 slots
>>> first 15 frags: 1 + PAGE_SIZE + 1 bytes long, first and last bytes are at the
>>> end and the beginning of a page, therefore they use 3 * 15 = 45 slots
>>> last 2 frags: 1 + 1 bytes, overlapping page boundary, 2 * 2 = 4 slots
>>> Although I don't think this 51-slot skb can really happen, we need a solution
>>> which can deal with every scenario. In real life the limit is only exceeded by
>>> a few slots, but that is usually enough to stall the TCP stream, as the retry
>>> will most likely have the same buffer layout.
>>> This patch solves the problem by linearizing the packet. This is not the
>>> fastest way, and it can fail much more easily, since it tries to allocate one
>>> big linear area for the whole packet, but it is probably simpler by an order
>>> of magnitude than anything else. This code path is probably not hit very
>>> often anyway.
>>>
>>> Signed-off-by: Zoltan Kiss <zoltan.kiss@...rix.com>
>>> Cc: Wei Liu <wei.liu2@...rix.com>
>>> Cc: Ian Campbell <Ian.Campbell@...rix.com>
>>> Cc: Paul Durrant <paul.durrant@...rix.com>
>>> Cc: netdev@...r.kernel.org
>>> Cc: linux-kernel@...r.kernel.org
>>> Cc: xen-devel@...ts.xenproject.org
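
For reference, a minimal sketch of the slot-counting-plus-linearize approach
described above, assuming it sits early in netfront's ndo_start_xmit handler.
The helper names (count_skb_ring_slots, maybe_linearize_for_ring) are
illustrative rather than taken from the patch, and the frag accessors may
differ between kernel versions:

#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Sketch only: count how many ring slots an skb needs when every page
 * crossing costs an extra slot (the ring protocol slices compound pages
 * into individual PAGE_SIZE-aligned slots). */
static int count_skb_ring_slots(const struct sk_buff *skb)
{
        int i, slots;

        /* Linear area: may start mid-page and span page boundaries. */
        slots = DIV_ROUND_UP(offset_in_page(skb->data) + skb_headlen(skb),
                             PAGE_SIZE);

        /* Each frag may likewise straddle one or more page boundaries. */
        for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
                const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

                slots += DIV_ROUND_UP(skb_frag_off(frag) + skb_frag_size(frag),
                                      PAGE_SIZE);
        }

        return slots;
}

/* Called early in the transmit path (illustrative placement): returns true
 * if the skb fits or was successfully linearized, false if it was dropped. */
static bool maybe_linearize_for_ring(struct sk_buff *skb)
{
        if (count_skb_ring_slots(skb) <= MAX_SKB_FRAGS + 1)
                return true;

        if (skb_linearize(skb) == 0)
                return true;

        /* Allocating one big linear area for the packet failed; drop it. */
        dev_kfree_skb_any(skb);
        return false;
}

The key point is that every page crossing costs an extra slot, which is how
an skb well under 64 KB can still need more than MAX_SKB_FRAGS + 1 slots.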
>>
>> This does not seem to be marked explicitly as stable. Has someone already asked
>> David Miller to put it on his stable queue? IMO it qualifies quite well and the
>> actual change should be simple to pick/backport.
>>
>
> Thank you Stefan, I'm queuing this for the next 3.16 kernel release.
Don't backport this yet. It's broken. It produces malformed requests
and netback will report a fatal error and stop all traffic on the VIF.
David