Message-ID: <CACVXFVPmec5_3Up9qM4iA90Xua9J_E-aRT-2g7Hu7TR4zRKQtA@mail.gmail.com>
Date: Tue, 10 Jul 2012 22:22:45 +0800
From: Ming Lei <tom.leiming@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Network Development <netdev@...r.kernel.org>,
David Miller <davem@...emloft.net>
Subject: Re: TCP transmit performance regression
On Tue, Jul 10, 2012 at 10:02 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> I am kind of annoyed you sent on netdev a copy of a _private_ mail.
I am sure that your reply which included the text below was not a private mail:
Only because skbs were fat (8KB allocated/truesize, for a single
1500 bytes frame)
>
> Next time, make sure you dont do that without my consent.
OK
> On Tue, 2012-07-10 at 21:37 +0800, Ming Lei wrote:
>
>> Could you explain why the truesize of the SKB is 8KB for a single
>> 1500-byte frame?
>>
>
> Because the driver uses skb_alloc(4096), for example?
>
> I don't know, you don't tell us the driver.
>
>
> The goal is to have skb->head point to a 2048-byte area, so truesize
> should be 2048 + sizeof(sk_buff) (including struct shared_info).
>
>> I observed it is 2560 bytes for RX SKBs inside asix_rx_fixup with
>> an rx_urb_size of 2048 on beagle-xm.
>>
>
> That's because using 2048 bytes for the URB buffer (excluding
> shared_info) means you need:
>
> 2048 + sizeof(struct shared_info) + sizeof(sk_buff) = 2560
>
> In fact, 2048 + sizeof(struct shared_info) means a full 4096-byte area
> is used.
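
For illustration, a rough kernel-style sketch of the accounting described
above (not taken from any driver in this thread; the exact struct sizes
depend on arch and config, so the numbers are approximate):

	#include <linux/skbuff.h>

	/*
	 * SKB_TRUESIZE() charges the data buffer plus the aligned sizes of
	 * struct sk_buff and struct skb_shared_info.  For a 2048-byte RX
	 * buffer that works out to roughly:
	 *   2048 + ~sizeof(struct sk_buff) + ~sizeof(struct skb_shared_info)
	 *   ~= 2560
	 */
	static unsigned int rx_truesize_estimate(unsigned int buf_size)
	{
		return SKB_TRUESIZE(buf_size);
	}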
>
> You have 2560 on recent kernels because of the way netdev_alloc_frag()
> works.
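
As a hedged sketch of what "the way netdev_alloc_frag() works" means in a
generic RX path (this is not the asix code, just the usual pattern on
3.5-ish kernels):

	#include <linux/mm.h>
	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	/*
	 * Carve a ~2KB fragment out of a shared page instead of kmalloc'ing
	 * a power-of-two buffer per frame, then wrap it with build_skb().
	 * The resulting truesize is charged per fragment (~2560 here), not
	 * per 4096-byte area.
	 */
	static struct sk_buff *rx_build_skb_2k(void)
	{
		unsigned int fragsz = 2048 +
			SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
		struct sk_buff *skb;
		void *data;

		data = netdev_alloc_frag(fragsz);
		if (!data)
			return NULL;

		skb = build_skb(data, fragsz);
		if (!skb)
			put_page(virt_to_head_page(data));
		return skb;
	}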
>
> That's why copybreak can actually save RAM. Since it adds a copy,
> we try to use it only on slow devices.
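
For readers unfamiliar with the term, a rough sketch of the usual copybreak
pattern (generic, not tied to any particular driver; RX_COPYBREAK is an
assumed threshold, not a value from this thread):

	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	#define RX_COPYBREAK 256	/* assumed threshold, driver dependent */

	/*
	 * For small frames, copy into a tightly sized skb so the socket is
	 * charged a small truesize and the big RX buffer can be reused; for
	 * larger frames, skip the copy and hand the original skb upstream.
	 */
	static struct sk_buff *rx_copybreak(struct net_device *dev,
					    struct sk_buff *big)
	{
		struct sk_buff *small;

		if (big->len > RX_COPYBREAK)
			return big;	/* no copy on the fast path */

		small = netdev_alloc_skb_ip_align(dev, big->len);
		if (!small)
			return big;	/* fall back to the fat skb */

		skb_copy_to_linear_data(small, big->data, big->len);
		skb_put(small, big->len);
		dev_kfree_skb_any(big);	/* or recycle it for the next URB */
		return small;
	}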
It looks like a single-page allocation won't put too much pressure on the MM,
which is why I suggested avoiding the copy when the skb buffer size is less
than or equal to one page. In any case, an unnecessary copy adds CPU work and
consumes power.
Thanks,
--
Ming Lei