Message-ID: <1430350786.3711.77.camel@edumazet-glaptop2.roam.corp.google.com>
Date: Wed, 29 Apr 2015 16:39:46 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Alexei Starovoitov <ast@...mgrid.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Daniel Borkmann <daniel@...earbox.net>,
Thomas Graf <tgraf@...g.ch>,
Jamal Hadi Salim <jhs@...atatu.com>,
John Fastabend <john.r.fastabend@...el.com>,
netdev@...r.kernel.org
Subject: Re: [PATCH RFC net-next] pktgen: introduce 'rx' mode
On Wed, 2015-04-29 at 16:28 -0700, Alexei Starovoitov wrote:
> On 4/29/15 3:56 PM, Eric Dumazet wrote:
> >
> > So pktgen in RX mode MUST deliver skb with skb->users = 1, there is no
> > way around it.
>
> if I only knew how to do it...
> The cost of continuously allocating skbs is way higher than
> netif_receive_skb itself. Such a benchmarking tool would measure the
> speed of skb alloc/free instead of the speed of netif_receive_skb.
> Are you suggesting to pre-allocate 10s of millions of skbs and
> then feed them in one go? The profile will be dominated by
> cache misses in the first few lines of __netif_receive_skb_core()
> where it accesses skb->dev,data,head. Doesn't sound too useful either.
> Other thoughts?
The code I copy-pasted from your patch is the buggy part, not the whole thing.
You have to replace it with something smarter, and that really should not be hard.
Zap the 'burst/clone' trick: it is not going to work for RX.
It was okay for TX, not for RX.
You could for instance do:

atomic_inc(&skb->users);
netif_receive_skb(skb);
if (atomic_read(&skb->users) != 1) {
	/* Too bad, I can not recycle this skb because it is still in use */
	consume_skb(skb);
	/* allocate a fresh new skb */
	skb = ...
} else {
	/* Yeah! Let's celebrate, cost of reusing this skb was one atomic op */
}
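
Putting it together, the RX injection loop could look roughly like this
(untested sketch; pktgen_rx_loop(), the fixed 60-byte frame and the refill
path are only there to illustrate the reuse check, they are not taken from
your patch):

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/etherdevice.h>
#include <linux/string.h>

/* Untested sketch of the reuse idea for an RX benchmark: keep a reference
 * across netif_receive_skb() and only allocate a fresh skb when the stack
 * actually kept the packet.
 */
static void pktgen_rx_loop(struct net_device *dev, unsigned long count)
{
	struct sk_buff *skb = NULL;

	while (count--) {
		if (!skb) {
			skb = netdev_alloc_skb(dev, 64);
			if (!skb)
				return;
			/* Build a minimal frame; a real tool would fill in
			 * whatever headers the test needs.
			 */
			memset(skb_put(skb, 60), 0, 60);
			skb->protocol = eth_type_trans(skb, dev);
		}

		atomic_inc(&skb->users);	/* keep our reference */
		netif_receive_skb(skb);

		if (atomic_read(&skb->users) != 1) {
			/* Still referenced somewhere (queued, cloned, ...):
			 * drop our reference and refill next iteration.
			 */
			consume_skb(skb);
			skb = NULL;
		}
		/* else: the refcount is back to one and the skb can be
		 * reused; in practice skb->data/skb->len would have to be
		 * restored here, since the stack may have pulled headers.
		 */
	}

	if (skb)
		consume_skb(skb);
}

The interesting number is then how often the refill path is taken: as long
as the stack drops the packet before taking an extra reference, the
per-packet cost of the recycling stays a single atomic op.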