Message-ID: <1286813133.2737.36.camel@edumazet-laptop>
Date: Mon, 11 Oct 2010 18:05:33 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Stanislaw Gruszka <sgruszka@...hat.com>
Cc: David Miller <davem@...emloft.net>,
Francois Romieu <romieu@...zoreil.com>, netdev@...r.kernel.org
Subject: Re: [PATCH] net: introduce alloc_skb_order0
On Monday, 11 October 2010 at 17:55 +0200, Stanislaw Gruszka wrote:
> On Sat, Oct 09, 2010 at 05:59:56PM +0200, Eric Dumazet wrote:
> > On Friday, 8 October 2010 at 18:03 +0200, Stanislaw Gruszka wrote:
> > > On Fri, Oct 08, 2010 at 05:04:07PM +0200, Eric Dumazet wrote:
> >
> > > > Switch to SLAB -> no more problem ;)
> > >
> > > yeah, I wish to, but Fedora uses SLUB because of its debugging
> > > capabilities.
> >
> > Yes, of course, I was kidding :)
> >
> > echo 0 >/sys/kernel/slab/kmalloc-2048/order
> > echo 0 >/sys/kernel/slab/kmalloc-1024/order
> > echo 0 >/sys/kernel/slab/kmalloc-512/order
> >
> > should do the trick: no more high-order allocations for MTU=1500
> > frames.
>
> So SLUB is great, but we need a patch to avoid using it :-)
>
> > For MTU=9000 frames, we probably need something like this patch :
> >
> > Reception of big frames hits a memory allocation problem because of
> > high-order page allocations (sometimes order-3 for MTU=9000). This patch
> > introduces alloc_skb_order0(), to build skbs with order-0 pages only.
>
> I have never seen allocation problems in rtl8169_try_rx_copy or in any
> other driver rx path (except iwlwifi, but that is now solved by using
> skb_add_rx_frag), so I'm not sure we need this patch.
>
> However, I see another benefit of that patch: we save memory. Allocating
> for MTU 9000 gives something like skb->data = kmalloc(9000 + 32 + 2
> + 334), so the data comes from the kmalloc-16384 cache and we waste about
> 7 kB on every allocation. With the patch, wastage would be about 2 kB per
> allocation (assuming 4 kB and 8 kB page sizes).
>
> However, I started this thread thinking about another memory wastage:
> in rtl8169_alloc_rx_skb, skb->data = kmalloc(16383 + 32 + 2 + 334) is taken
> from kmalloc-32768, almost 50% wastage.
>
You cannot use my patch to avoid this waste. Really.
You have two different things in this driver:
1) Allocation of a physically contiguous 16-Kbyte block for the rx ring
at device initialization (GFP_KERNEL is OK here).
Here, the only thing you could do is to not allocate real skbs but
only 16 KB data blocks (no need for the sk_buff, only the ->data part), and
force copybreak for all incoming packets (remove the rx_copybreak
tunable).
2) Allocation of an order-0 skb to perform the copybreak in the rx path
(GFP_ATOMIC): my patch.
> > +struct sk_buff *alloc_skb_order0(int pkt_size)
> > +{
> > +	int head = min_t(int, pkt_size, SKB_MAX_HEAD(NET_SKB_PAD + NET_IP_ALIGN));
> > +	struct sk_buff *skb;
> > +
> > +	skb = alloc_skb(head + NET_SKB_PAD + NET_IP_ALIGN,
> > +			GFP_ATOMIC | __GFP_NOWARN);
> > +	if (!skb)
> > +		return NULL;
> > +	skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
> > +	skb_put(skb, head);
> > +	pkt_size -= head;
> > +
> > +	skb->len += pkt_size;
> > +	skb->data_len += pkt_size;
> > +	skb->truesize += pkt_size;
> > +	while (pkt_size) {
>
> 	if (skb_shinfo(skb)->nr_frags == MAX_SKB_FRAGS - 1)
> 		goto error;
Not needed. A frame is < 16383 bytes, so it _must_ fit in an skb
(an skb can hold up to 64 Kbytes).
>
> > +		int i = skb_shinfo(skb)->nr_frags;
> > +		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
> > +		int fragsize = min_t(int, pkt_size, PAGE_SIZE);
> > +		struct page *page = alloc_page(GFP_NOWAIT | __GFP_NOWARN);
> > +
> > +		if (!page)
> > +			goto error;
> > +		frag->page = page;
> > +		frag->size = fragsize;
> > +		frag->page_offset = 0;
> > +		skb_shinfo(skb)->nr_frags++;
> > +		pkt_size -= fragsize;
> > +	}
> > +	return skb;
> > +
> > +error:
> > +	kfree_skb(skb);
> > +	return NULL;
> > +}
> > +EXPORT_SYMBOL(alloc_skb_order0);
> > +
> > /* Checksum skb data. */
> >
> > __wsum skb_checksum(const struct sk_buff *skb, int offset,
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html