Message-ID: <1342069601.3265.8218.camel@edumazet-glaptop>
Date: Thu, 12 Jul 2012 07:06:41 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Alexander Duyck <alexander.duyck@...il.com>
Cc: Alexander Duyck <alexander.h.duyck@...el.com>,
netdev@...r.kernel.org, davem@...emloft.net,
jeffrey.t.kirsher@...el.com, Eric Dumazet <edumazet@...gle.com>
Subject: Re: [PATCH 2/2] net: Update alloc frag to reduce get/put page
usage and recycle pages
On Wed, 2012-07-11 at 19:02 -0700, Alexander Duyck wrote:
> The gain will be minimal, if any, with the 1500-byte allocations; however,
> there shouldn't be a performance degradation.
>
> I was thinking more of the ixgbe case where we are working with only 256-byte
> allocations and can recycle pages in the case of GRO or TCP. For ixgbe the
> advantages are significant since we drop a number of the get_page calls and
> get the benefit of the page recycling. So for example, with GRO enabled we
> should only have to allocate 1 page for headers every 16 buffers, and the 6
> slots we use in that page have a good likelihood of being warm in the cache
> since we just keep looping on the same page.
>
It's not possible to get 16 buffers per 4096-byte page.
sizeof(struct skb_shared_info) = 0x140 (320 bytes)
Add 192 bytes (NET_SKB_PAD + 128)
That's a minimum of 512 bytes per skb (and ixgbe uses more).
In practice, for ixgbe it's:
#define IXGBE_RXBUFFER_512 512 /* Used for packet split */
#define IXGBE_RX_HDR_SIZE IXGBE_RXBUFFER_512
skb = netdev_alloc_skb_ip_align(rx_ring->netdev, IXGBE_RX_HDR_SIZE);
So that's 4 buffers per page.
Maybe you plan to use IXGBE_RXBUFFER_256 or IXGBE_RXBUFFER_128?
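
For reference, here is a rough userspace sketch of that arithmetic (my own
illustration, not the kernel code: it assumes NET_SKB_PAD = 64, 64-byte cache
lines, 4096-byte pages and the sizeof(struct skb_shared_info) = 320 quoted
above; exact values depend on arch and config):

#include <stdio.h>

#define CACHE_BYTES	64
#define PAGE_SZ		4096
#define NET_SKB_PAD	64
#define SHINFO_SIZE	320	/* 0x140 */

/* local stand-in for the kernel's SKB_DATA_ALIGN() */
#define SKB_DATA_ALIGN(x)  (((x) + (CACHE_BYTES - 1)) & ~(CACHE_BYTES - 1))

/* per-skb frag cost: aligned headroom + data, plus aligned shared_info */
static unsigned int frag_cost(unsigned int len)
{
	return SKB_DATA_ALIGN(NET_SKB_PAD + len) + SKB_DATA_ALIGN(SHINFO_SIZE);
}

int main(void)
{
	unsigned int sizes[] = { 128, 256, 512 };
	unsigned int i;

	for (i = 0; i < 3; i++)
		printf("hdr %3u : %u bytes per skb -> %u buffers per page\n",
		       sizes[i], frag_cost(sizes[i]),
		       PAGE_SZ / frag_cost(sizes[i]));
	return 0;
}

With those assumptions it prints 512, 640 and 896 bytes per skb, i.e. 8, 6
and 4 buffers per page for 128, 256 and 512 byte header sizes respectively.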