Message-ID: <1ef142e0-43dc-8f18-e67b-2020af37fd17@intel.com>
Date: Fri, 31 Mar 2023 17:43:09 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: William Tu <u9012063@...il.com>
CC: <netdev@...r.kernel.org>, <jsankararama@...are.com>, <gyang@...are.com>,
	<doshir@...are.com>, <alexander.duyck@...il.com>,
	<alexandr.lobakin@...el.com>, <bang@...are.com>,
	<maciej.fijalkowski@...el.com>, <witu@...dia.com>,
	<horatiu.vultur@...rochip.com>, <error27@...il.com>,
	Alexander Duyck <alexanderduyck@...com>
Subject: Re: [PATCH RFC net-next v19] vmxnet3: Add XDP support.

From: William Tu <u9012063@...il.com>
Date: Sat, 25 Mar 2023 10:28:28 -0700

Sorry for the late reply, I've been busy :s

> From: William Tu <u9012063@...il.com>
>
> The patch adds native-mode XDP support: XDP DROP, PASS, TX, and
> REDIRECT.
>
> Background:

[...]

> +static int
> +vmxnet3_run_xdp(struct vmxnet3_rx_queue *rq, struct xdp_buff *xdp,
> +		struct bpf_prog *prog)
> +{
> +	struct xdp_frame *xdpf;
> +	struct page *page;
> +	int err;
> +	u32 act;
> +
> +	act = bpf_prog_run_xdp(prog, xdp);
> +	rq->stats.xdp_packets++;

I think you can increment it *before* running the program, so that the
time gap is as small as possible.

> +	page = virt_to_page(xdp->data_hard_start);

You don't need it for PASS and REDIRECT.

> +
> +	switch (act) {
> +	case XDP_PASS:
> +		return act;
> +	case XDP_REDIRECT:
> +		err = xdp_do_redirect(rq->adapter->netdev, xdp, prog);
> +		if (!err)
> +			rq->stats.xdp_redirects++;
> +		else
> +			rq->stats.xdp_drops++;

BTW, if you get @err here, shouldn't you recycle the page, just like in
the TX case?

> +		return act;
> +	case XDP_TX:
> +		xdpf = xdp_convert_buff_to_frame(xdp);
> +		if (unlikely(!xdpf ||
> +			     vmxnet3_xdp_xmit_back(rq->adapter, xdpf))) {
> +			rq->stats.xdp_drops++;
> +			page_pool_recycle_direct(rq->page_pool, page);
> +		} else {
> +			rq->stats.xdp_tx++;
> +		}
> +		return act;
> +	default:
> +		bpf_warn_invalid_xdp_action(rq->adapter->netdev, prog, act);
> +		fallthrough;
> +	case XDP_ABORTED:
> +		trace_xdp_exception(rq->adapter->netdev, prog, act);
> +		rq->stats.xdp_aborted++;
> +		break;
> +	case XDP_DROP:
> +		rq->stats.xdp_drops++;
> +		break;
> +	}
> +
> +	page_pool_recycle_direct(rq->page_pool, page);
> +
> +	return act;
> +}

[...]

> +	xdp_init_buff(&xdp, PAGE_SIZE, &rq->xdp_rxq);
> +	xdp_prepare_buff(&xdp, page_address(page), rq->page_pool->p.offset,
> +			 len, false);
> +	xdp_buff_clear_frags_flag(&xdp);
> +
> +	/* Must copy the data because it's at dataring. */
> +	memcpy(xdp.data, data, len);
> +
> +	rcu_read_lock();

Where's the corresponding unlock?

> +	xdp_prog = rcu_dereference(rq->adapter->xdp_bpf_prog);
> +	if (!xdp_prog) {
> +		act = XDP_PASS;
> +		goto out_skb;
> +	}
> +	act = vmxnet3_run_xdp(rq, &xdp, xdp_prog);
> +
> +	if (act == XDP_PASS) {
> +out_skb:
> +		*skb_xdp_pass = vmxnet3_build_skb(rq, page, &xdp);
> +		if (!*skb_xdp_pass)
> +			return XDP_DROP;
> +	}
> +
> +	/* No need to refill. */
> +	return act;

Maybe

	act = vmxnet3_run_xdp(rq, &xdp, xdp_prog);
	if (act != XDP_PASS)
		return act;

out_skb:
	*skb_xdp_pass = vmxnet3_build_skb(rq, page, &xdp);

	return likely(*skb_xdp_pass) ? act : XDP_DROP;

?

> +}
> +
> +int
> +vmxnet3_process_xdp(struct vmxnet3_adapter *adapter,
> +		    struct vmxnet3_rx_queue *rq,
> +		    struct Vmxnet3_RxCompDesc *rcd,
> +		    struct vmxnet3_rx_buf_info *rbi,
> +		    struct Vmxnet3_RxDesc *rxd,
> +		    struct sk_buff **skb_xdp_pass)

[...]

Thanks,
Olek
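
To make the recycle suggestion concrete, below is a minimal sketch of how
the XDP_REDIRECT arm could look with the page recycled on error. This is
not from any posted revision of the patch; the virt_to_page() call is
moved into the error branch (per the "You don't need it for PASS and
REDIRECT" comment), and the surrounding context is assumed from the
quoted v19 code.

	case XDP_REDIRECT:
		err = xdp_do_redirect(rq->adapter->netdev, xdp, prog);
		if (!err) {
			rq->stats.xdp_redirects++;
		} else {
			/* Redirect failed, so the buffer was not consumed;
			 * return it to the page_pool instead of leaking it.
			 */
			rq->stats.xdp_drops++;
			page = virt_to_page(xdp->data_hard_start);
			page_pool_recycle_direct(rq->page_pool, page);
		}
		return act;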
Powered by blists - more mailing lists
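
Similarly, a minimal sketch of the rcu_read_lock()/rcu_read_unlock()
pairing the review asks about: every exit path taken while @xdp_prog may
be in use must drop the read lock. The out_unlock label and control flow
here are illustrative assumptions, not the actual v19 code.

	rcu_read_lock();
	xdp_prog = rcu_dereference(rq->adapter->xdp_bpf_prog);
	if (!xdp_prog) {
		/* No program attached: fall through to the skb path. */
		act = XDP_PASS;
		goto out_unlock;
	}
	act = vmxnet3_run_xdp(rq, &xdp, xdp_prog);

out_unlock:
	rcu_read_unlock();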