Message-ID: <20210415202132.7b5e8d0d@carbon>
Date: Thu, 15 Apr 2021 20:21:32 +0200
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Martin KaFai Lau <kafai@...com>
Cc: Toke Høiland-Jørgensen <toke@...hat.com>,
Hangbin Liu <liuhangbin@...il.com>, <bpf@...r.kernel.org>,
<netdev@...r.kernel.org>, Jiri Benc <jbenc@...hat.com>,
Eelco Chaudron <echaudro@...hat.com>, <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
David Ahern <dsahern@...il.com>,
Andrii Nakryiko <andrii.nakryiko@...il.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
John Fastabend <john.fastabend@...il.com>,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
Björn Töpel <bjorn.topel@...il.com>, brouer@...hat.com
Subject: Re: [PATCHv7 bpf-next 1/4] bpf: run devmap xdp_prog on flush instead of bulk enqueue
On Thu, 15 Apr 2021 10:35:51 -0700
Martin KaFai Lau <kafai@...com> wrote:
> On Thu, Apr 15, 2021 at 11:22:19AM +0200, Toke Høiland-Jørgensen wrote:
> > Hangbin Liu <liuhangbin@...il.com> writes:
> >
> > > On Wed, Apr 14, 2021 at 05:17:11PM -0700, Martin KaFai Lau wrote:
> > >> > static void bq_xmit_all(struct xdp_dev_bulk_queue *bq, u32 flags)
> > >> > {
> > >> > struct net_device *dev = bq->dev;
> > >> > - int sent = 0, err = 0;
> > >> > + int sent = 0, drops = 0, err = 0;
> > >> > + unsigned int cnt = bq->count;
> > >> > + int to_send = cnt;
> > >> > int i;
> > >> >
> > >> > - if (unlikely(!bq->count))
> > >> > + if (unlikely(!cnt))
> > >> > return;
> > >> >
> > >> > - for (i = 0; i < bq->count; i++) {
> > >> > + for (i = 0; i < cnt; i++) {
> > >> > struct xdp_frame *xdpf = bq->q[i];
> > >> >
> > >> > prefetch(xdpf);
> > >> > }
> > >> >
> > >> > - sent = dev->netdev_ops->ndo_xdp_xmit(dev, bq->count, bq->q, flags);
> > >> > + if (bq->xdp_prog) {
> > >> bq->xdp_prog is used here
> > >>
> > >> > + to_send = dev_map_bpf_prog_run(bq->xdp_prog, bq->q, cnt, dev);
> > >> > + if (!to_send)
> > >> > + goto out;
> > >> > +
> > >> > + drops = cnt - to_send;
> > >> > + }
> > >> > +
> > >>
> > >> [ ... ]
> > >>
> > >> > static void bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf,
> > >> > - struct net_device *dev_rx)
> > >> > + struct net_device *dev_rx, struct bpf_prog *xdp_prog)
> > >> > {
> > >> > struct list_head *flush_list = this_cpu_ptr(&dev_flush_list);
> > >> > struct xdp_dev_bulk_queue *bq = this_cpu_ptr(dev->xdp_bulkq);
> > >> > @@ -412,18 +466,22 @@ static void bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf,
> > >> > /* Ingress dev_rx will be the same for all xdp_frame's in
> > >> > * bulk_queue, because bq stored per-CPU and must be flushed
> > >> > * from net_device drivers NAPI func end.
> > >> > + *
> > >> > + * Do the same with xdp_prog and flush_list since these fields
> > >> > + * are only ever modified together.
> > >> > */
> > >> > - if (!bq->dev_rx)
> > >> > + if (!bq->dev_rx) {
> > >> > bq->dev_rx = dev_rx;
> > >> > + bq->xdp_prog = xdp_prog;
> > >> bp->xdp_prog is assigned here and could be used later in bq_xmit_all().
> > >> How is bq->xdp_prog protected? Are they all under one rcu_read_lock()?
> > >> It is not very obvious after taking a quick look at xdp_do_flush[_map].
> > >>
> > >> e.g. what if the devmap elem gets deleted.
> > >
> > > Jesper knows better than me. From my view, based on the description of
> > > __dev_flush():
> > >
> > > On devmap tear down we ensure the flush list is empty before completing to
> > > ensure all flush operations have completed. When drivers update the bpf
> > > program they may need to ensure any flush ops are also complete.
>
> AFAICT, the bq->xdp_prog is not from the dev. It is from a devmap's elem.
>
> >
> > Yeah, drivers call xdp_do_flush() before exiting their NAPI poll loop,
> > which also runs under one big rcu_read_lock(). So the storage in the
> > bulk queue is quite temporary, it's just used for bulking to increase
> > performance :)
>
> I am missing the one big rcu_read_lock() part. For example, in i40e_txrx.c,
> i40e_run_xdp() has its own rcu_read_lock/unlock(). dst->xdp_prog used to be
> run inside i40e_run_xdp(), and that was fine.
>
> In this patch, dst->xdp_prog is run outside of i40e_run_xdp(), where
> rcu_read_unlock() has already been done; it is now run in xdp_do_flush_map().
> Or did I miss the big rcu_read_lock() in i40e_napi_poll()?
>
> I do see the big rcu_read_lock() in mlx5e_napi_poll().
I believed/assumed xdp_do_flush_map() was already protected under an
rcu_read_lock, as the devmap and cpumap flush paths, which get called via
__dev_flush() and __cpu_map_flush(), operate on multiple RCU objects.
Perhaps it is a bug in i40e?
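To make the difference concrete, here is a rough sketch of the two driver
patterns (structure only; packets_left() and run_xdp_and_maybe_bq_enqueue()
are made-up placeholders for a driver's per-packet XDP path, not real code):

	/* mlx5e-style: one RCU read-side section covers both the
	 * per-packet XDP prog run and the final flush, so the
	 * bq->xdp_prog deref in bq_xmit_all() stays protected.
	 */
	rcu_read_lock();
	while (packets_left())
		run_xdp_and_maybe_bq_enqueue();
	xdp_do_flush_map();
	rcu_read_unlock();

	/* i40e-style: the RCU section is per-packet (inside
	 * i40e_run_xdp()), so the flush, and thus the bq->xdp_prog
	 * deref in bq_xmit_all(), runs outside any rcu_read_lock().
	 */
	while (packets_left()) {
		rcu_read_lock();
		run_xdp_and_maybe_bq_enqueue();
		rcu_read_unlock();
	}
	xdp_do_flush_map();	/* bq->xdp_prog is dereferenced here */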
We are running in softirq in NAPI context when xdp_do_flush_map() is
called, which I think means that this CPU will not go through an RCU
grace period before we exit softirq, so in practice it should be safe.
But to be correct I do think we need an rcu_read_lock() around this call.
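Something along these lines is what I have in mind; a minimal sketch only,
where example_napi_poll() and example_clean_rx_irq() are hypothetical
stand-ins for a driver's poll path, not actual i40e code:

	#include <linux/netdevice.h>
	#include <linux/rcupdate.h>
	#include <linux/filter.h>	/* xdp_do_flush_map() */

	/* Driver RX loop; may bq_enqueue() frames via XDP_REDIRECT */
	static int example_clean_rx_irq(struct napi_struct *napi, int budget);

	static int example_napi_poll(struct napi_struct *napi, int budget)
	{
		int work_done = example_clean_rx_irq(napi, budget);

		/* Take the RCU read lock around the flush, so the
		 * xdp_prog pointer stored in the per-CPU bulk queue is
		 * dereferenced inside a read-side critical section,
		 * instead of relying on softirq context implying that
		 * no RCU grace period can pass.
		 */
		rcu_read_lock();
		xdp_do_flush_map();	/* may run bq->xdp_prog in bq_xmit_all() */
		rcu_read_unlock();

		if (work_done < budget)
			napi_complete_done(napi, work_done);

		return work_done;
	}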
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer