Message-ID: <53764222.6070705@citrix.com>
Date: Fri, 16 May 2014 17:51:46 +0100
From: Zoltan Kiss <zoltan.kiss@...rix.com>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: Wei Liu <wei.liu2@...rix.com>, <netdev@...r.kernel.org>,
<xen-devel@...ts.xen.org>, David Vrabel <david.vrabel@...rix.com>,
Konrad Wilk <konrad.wilk@...cle.com>,
Boris Ostrovsky <boris.ostrovsky@...cle.com>,
Stefan Bader <stefan.bader@...onical.com>
Subject: Re: [PATCH net-next] xen-netfront: try linearizing SKB if it occupies
too many slots
On 16/05/14 17:47, Eric Dumazet wrote:
> On Fri, 2014-05-16 at 17:29 +0100, Zoltan Kiss wrote:
>> On 16/05/14 16:34, Wei Liu wrote:
>>>
>>> It works, at least in this Redis testcase. Could you explain a bit where
>>> this 56000 magic number comes from? :-)
>>>
>>> Presumably I can derive it from some constant in core network code?
>>
>> I guess it just makes it less likely to have packets with a problematic layout. But the following packet would still fail:
>> linear buffer : 80 bytes, on 2 pages
>> 17 frags, 80 bytes each, each spanning over page boundary.
>
> How would you build such skbs ? Its _very_ difficult, you have to be
> very very smart to hit this.
I wouldn't build such skbs myself; I would expect the network stack to
create such weird things sometimes :)
The goal here is to prepare for and handle the worst-case scenarios as well.
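To make the layout quoted above concrete, here is a rough userspace sketch
of the slot arithmetic (the 18-slot budget and the slots_for() helper are
illustrative assumptions for this example, not the driver's actual code):

#include <stdio.h>

#define PAGE_SIZE 4096UL
/* Assumed per-packet slot budget on the ring (illustrative only). */
#define MAX_SLOTS 18

/* Slots needed by a buffer of 'len' bytes starting at page offset 'off':
 * one slot per page the buffer touches. */
static unsigned long slots_for(unsigned long off, unsigned long len)
{
	return (off + len + PAGE_SIZE - 1) / PAGE_SIZE;
}

int main(void)
{
	/* 80-byte linear buffer that starts 40 bytes before a page
	 * boundary, so it touches two pages. */
	unsigned long slots = slots_for(PAGE_SIZE - 40, 80);
	int i;

	/* 17 frags, 80 bytes each, each spanning a page boundary. */
	for (i = 0; i < 17; i++)
		slots += slots_for(PAGE_SIZE - 40, 80);

	printf("slots needed: %lu (budget %d)\n", slots, MAX_SLOTS);
	return 0;
}

Every buffer that crosses a page boundary costs two slots, so this tiny
packet already needs 2 + 17*2 = 36 slots, twice the assumed budget.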
>
> Also reducing gso_max_size made sure order-5 allocations would not be
> attempted in this unlikely case.
But reducing gso_max_size would hurt general network throughput, right?
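
For reference, the knob in question is the per-device GSO size cap, which a
driver can lower when the netdev is set up, roughly like this (a sketch
only; xennet_cap_gso() is a hypothetical helper, and 56000 is the value
from this thread rather than a constant derived from core networking code):

#include <linux/netdevice.h>

/* Sketch: cap the GSO size so the stack never hands the device a single
 * TSO skb larger than this. */
static void xennet_cap_gso(struct net_device *dev)
{
	netif_set_gso_max_size(dev, 56000);
}

The trade-off is that each TSO burst becomes smaller, which is exactly the
throughput concern raised above.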