Message-ID: <87o8efkilw.fsf@toke.dk>
Date: Thu, 15 Apr 2021 11:22:19 +0200
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: Hangbin Liu <liuhangbin@...il.com>, Martin KaFai Lau <kafai@...com>
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org,
Jiri Benc <jbenc@...hat.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Eelco Chaudron <echaudro@...hat.com>, ast@...nel.org,
Daniel Borkmann <daniel@...earbox.net>,
Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
David Ahern <dsahern@...il.com>,
Andrii Nakryiko <andrii.nakryiko@...il.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
John Fastabend <john.fastabend@...il.com>,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
	Björn Töpel <bjorn.topel@...il.com>
Subject: Re: [PATCHv7 bpf-next 1/4] bpf: run devmap xdp_prog on flush instead of bulk enqueue

Hangbin Liu <liuhangbin@...il.com> writes:
> On Wed, Apr 14, 2021 at 05:17:11PM -0700, Martin KaFai Lau wrote:
>> >  static void bq_xmit_all(struct xdp_dev_bulk_queue *bq, u32 flags)
>> >  {
>> >  	struct net_device *dev = bq->dev;
>> > -	int sent = 0, err = 0;
>> > +	int sent = 0, drops = 0, err = 0;
>> > +	unsigned int cnt = bq->count;
>> > +	int to_send = cnt;
>> >  	int i;
>> >
>> > -	if (unlikely(!bq->count))
>> > +	if (unlikely(!cnt))
>> >  		return;
>> >
>> > -	for (i = 0; i < bq->count; i++) {
>> > +	for (i = 0; i < cnt; i++) {
>> >  		struct xdp_frame *xdpf = bq->q[i];
>> >
>> >  		prefetch(xdpf);
>> >  	}
>> >
>> > -	sent = dev->netdev_ops->ndo_xdp_xmit(dev, bq->count, bq->q, flags);
>> > +	if (bq->xdp_prog) {
>> bq->xdp_prog is used here
>>
>> > +		to_send = dev_map_bpf_prog_run(bq->xdp_prog, bq->q, cnt, dev);
>> > +		if (!to_send)
>> > +			goto out;
>> > +
>> > +		drops = cnt - to_send;
>> > +	}
>> > +
>>
>> [ ... ]
>>
>> >  static void bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf,
>> > -		       struct net_device *dev_rx)
>> > +		       struct net_device *dev_rx, struct bpf_prog *xdp_prog)
>> >  {
>> >  	struct list_head *flush_list = this_cpu_ptr(&dev_flush_list);
>> >  	struct xdp_dev_bulk_queue *bq = this_cpu_ptr(dev->xdp_bulkq);
>> > @@ -412,18 +466,22 @@ static void bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf,
>> >  	/* Ingress dev_rx will be the same for all xdp_frame's in
>> >  	 * bulk_queue, because bq stored per-CPU and must be flushed
>> >  	 * from net_device drivers NAPI func end.
>> > +	 *
>> > +	 * Do the same with xdp_prog and flush_list since these fields
>> > +	 * are only ever modified together.
>> >  	 */
>> > -	if (!bq->dev_rx)
>> > +	if (!bq->dev_rx) {
>> >  		bq->dev_rx = dev_rx;
>> > +		bq->xdp_prog = xdp_prog;
>> bq->xdp_prog is assigned here and could be used later in bq_xmit_all().
>> How is bq->xdp_prog protected? Are they all under one rcu_read_lock()?
>> It is not very obvious after taking a quick look at xdp_do_flush[_map].
>>
>> e.g. what if the devmap elem gets deleted.
>
> Jesper knows better than me. From my view, based on the description of
> __dev_flush():
>
> On devmap tear down we ensure the flush list is empty before completing to
> ensure all flush operations have completed. When drivers update the bpf
> program they may need to ensure any flush ops are also complete.
Yeah, drivers call xdp_do_flush() before exiting their NAPI poll loop,
and that loop also runs under one big rcu_read_lock(). So the storage in
the bulk queue is quite temporary; it's only used for bulking to
increase performance :)
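
Roughly, the driver pattern looks like the sketch below (the names
my_driver_napi_poll() and my_driver_process_rx() are made up for
illustration, and in real drivers the RCU read section is often implied
by the NAPI/softirq context rather than taken explicitly):

static int my_driver_napi_poll(struct napi_struct *napi, int budget)
{
	int work_done;

	rcu_read_lock();

	/* Each XDP_REDIRECT'ed frame lands in the per-CPU bulk queue
	 * via bq_enqueue(), which also stashes dev_rx and xdp_prog.
	 */
	work_done = my_driver_process_rx(napi, budget);

	/* Drain the bulk queues before leaving the RCU read section,
	 * so the stashed bq->xdp_prog reference never outlives its
	 * RCU grace period.
	 */
	xdp_do_flush();

	rcu_read_unlock();

	return work_done;
}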
-Toke