Message-ID: <20161118181004.4c15a6a1@laptop>
Date: Fri, 18 Nov 2016 18:10:04 -0800
From: Jakub Kicinski <kubakici@...pl>
To: John Fastabend <john.fastabend@...il.com>
Cc: tgraf@...g.ch, shm@...ulusnetworks.com,
alexei.starovoitov@...il.com, daniel@...earbox.net,
davem@...emloft.net, john.r.fastabend@...el.com,
netdev@...r.kernel.org, bblanco@...mgrid.com, brouer@...hat.com
Subject: Re: [PATCH 4/5] virtio_net: add dedicated XDP transmit queues
On Fri, 18 Nov 2016 13:09:53 -0800, Jakub Kicinski wrote:
> Looks very cool! :)
>
> On Fri, 18 Nov 2016 11:00:41 -0800, John Fastabend wrote:
> > @@ -1542,12 +1546,34 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog)
> > return -EINVAL;
> > }
> >
> > + curr_qp = vi->curr_queue_pairs - vi->xdp_queue_pairs;
> > + if (prog)
> > + xdp_qp = num_online_cpus();
>
> Is num_online_cpus() correct here?
Sorry, I don't know the virtio_net code, so I'm probably wrong. I was
concerned about whether the number of CPUs can change at runtime, but
also that the online CPU mask may be sparse, so offsetting by
smp_processor_id() into the queue table below could bring trouble
(see the sketch after the hunk).
@@ -353,9 +381,15 @@ static u32 do_xdp_prog(struct virtnet_info *vi,
switch (act) {
case XDP_PASS:
return XDP_PASS;
+ case XDP_TX:
+ qp = vi->curr_queue_pairs -
+ vi->xdp_queue_pairs +
+ smp_processor_id();
+ xdp.data = buf + (vi->mergeable_rx_bufs ? 0 : 4);
+ virtnet_xdp_xmit(vi, qp, &xdp);
+ return XDP_TX;
default:
bpf_warn_invalid_xdp_action(act);
- case XDP_TX:
case XDP_ABORTED:
case XDP_DROP:
return XDP_DROP;
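
To make the sparse-mask concern concrete, here is a small userspace
sketch (purely illustrative, not virtio_net or kernel code; the online
mask values are made up): if the XDP TX queue table is sized by the
count of online CPUs but indexed by the raw CPU id, a sparse online
mask lets the index run past the end of the table.

/*
 * Userspace illustration of the concern above: queues are allocated
 * per online CPU, but indexed by raw CPU id.  With CPUs 0, 2 and 5
 * online, only 3 queues exist while the id can be as high as 5.
 */
#include <stdio.h>

#define NR_CPUS 8

int main(void)
{
	/* hypothetical sparse online mask: CPUs 0, 2 and 5 online */
	int online[NR_CPUS] = { 1, 0, 1, 0, 0, 1, 0, 0 };
	int num_online = 0;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		num_online += online[cpu];

	/* table sized like xdp_qp = num_online_cpus() */
	printf("XDP TX queues allocated: %d\n", num_online);

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!online[cpu])
			continue;
		/* index computed like offsetting by smp_processor_id() */
		printf("cpu %d -> queue offset %d%s\n", cpu, cpu,
		       cpu >= num_online ? "  (out of range!)" : "");
	}
	return 0;
}

Running it prints queue offsets 0, 2 and 5 against only 3 allocated
queues, which is the kind of mismatch the question about
num_online_cpus() is getting at.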