Message-ID: <20171006141140.0f252f73@redhat.com>
Date: Fri, 6 Oct 2017 14:11:40 +0200
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Daniel Borkmann <daniel@...earbox.net>
Cc: netdev@...r.kernel.org, jakub.kicinski@...ronome.com,
"Michael S. Tsirkin" <mst@...hat.com>, pavel.odintsov@...il.com,
Jason Wang <jasowang@...hat.com>, mchan@...adcom.com,
John Fastabend <john.fastabend@...il.com>,
peter.waskiewicz.jr@...el.com,
Daniel Borkmann <borkmann@...earbox.net>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Andy Gospodarek <andy@...yhouse.net>, brouer@...hat.com
Subject: Re: [net-next V4 PATCH 3/5] bpf: cpumap xdp_buff to skb conversion and allocation
On Thu, 05 Oct 2017 12:22:43 +0200
Daniel Borkmann <daniel@...earbox.net> wrote:
> On 10/04/2017 02:03 PM, Jesper Dangaard Brouer wrote:
> [...]
> >  static int cpu_map_kthread_run(void *data)
> >  {
> >  	struct bpf_cpu_map_entry *rcpu = data;
> >
> >  	set_current_state(TASK_INTERRUPTIBLE);
> >  	while (!kthread_should_stop()) {
> > +		unsigned int processed = 0, drops = 0;
> >  		struct xdp_pkt *xdp_pkt;
> >
> > -		schedule();
> > -		/* Do work */
> > -		while ((xdp_pkt = ptr_ring_consume(rcpu->queue))) {
> > -			/* For now just "refcnt-free" */
> > -			page_frag_free(xdp_pkt);
> > +		/* Release CPU reschedule checks */
> > +		if (__ptr_ring_empty(rcpu->queue)) {
> > +			schedule();
> > +		} else {
> > +			cond_resched();
> > +		}
> > +
> > +		/* Process packets in rcpu->queue */
> > +		local_bh_disable();
> > +		/*
> > +		 * The bpf_cpu_map_entry is single consumer, with this
> > +		 * kthread CPU pinned. Lockless access to ptr_ring
> > +		 * consume side valid as no-resize allowed of queue.
> > +		 */
> > +		while ((xdp_pkt = __ptr_ring_consume(rcpu->queue))) {
> > +			struct sk_buff *skb;
> > +			int ret;
> > +
> > +			skb = cpu_map_build_skb(rcpu, xdp_pkt);
> > +			if (!skb) {
> > +				page_frag_free(xdp_pkt);
> > +				continue;
> > +			}
> > +
> > +			/* Inject into network stack */
> > +			ret = netif_receive_skb_core(skb);
>
> Don't we need to hold RCU read lock for above netif_receive_skb_core()?
Yes, I guess we do. I'll take the lock inside netif_receive_skb_core(),
around the call to __netif_receive_skb_core(), the same way
netif_receive_skb() does around __netif_receive_skb().

It looks like the RCU section protects:

  rx_handler = rcu_dereference(skb->dev->rx_handler);
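Something along these lines (untested sketch for net/core/dev.c; assuming
the lock only needs to cover __netif_receive_skb_core() itself):

```c
/* Sketch: let netif_receive_skb_core() hold the RCU read lock
 * around __netif_receive_skb_core(), mirroring what
 * netif_receive_skb() does around __netif_receive_skb().
 * This keeps the cpumap kthread caller free of RCU details.
 */
int netif_receive_skb_core(struct sk_buff *skb)
{
	int ret;

	rcu_read_lock();
	ret = __netif_receive_skb_core(skb, false);
	rcu_read_unlock();

	return ret;
}
```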
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer