Open Source and information security mailing list archives
Date: Tue, 15 Oct 2019 17:11:52 -0700
From: Jakub Kicinski <jakub.kicinski@...ronome.com>
To: Lorenzo Bianconi <lorenzo@...nel.org>
Cc: netdev@...r.kernel.org, lorenzo.bianconi@...hat.com, davem@...emloft.net,
	thomas.petazzoni@...tlin.com, brouer@...hat.com, ilias.apalodimas@...aro.org,
	matteo.croce@...hat.com, mw@...ihalf.com
Subject: Re: [PATCH v3 net-next 8/8] net: mvneta: add XDP_TX support

On Mon, 14 Oct 2019 12:49:55 +0200, Lorenzo Bianconi wrote:
> Implement XDP_TX verdict and ndo_xdp_xmit net_device_ops function
> pointer
>
> Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>

> @@ -1972,6 +1975,109 @@ int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
>  	return i;
>  }
>
> +static int
> +mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq,
> +			struct xdp_frame *xdpf, bool dma_map)
> +{
> +	struct mvneta_tx_desc *tx_desc;
> +	struct mvneta_tx_buf *buf;
> +	dma_addr_t dma_addr;
> +
> +	if (txq->count >= txq->tx_stop_threshold)
> +		return MVNETA_XDP_CONSUMED;
> +
> +	tx_desc = mvneta_txq_next_desc_get(txq);
> +
> +	buf = &txq->buf[txq->txq_put_index];
> +	if (dma_map) {
> +		/* ndo_xdp_xmit */
> +		dma_addr = dma_map_single(pp->dev->dev.parent, xdpf->data,
> +					  xdpf->len, DMA_TO_DEVICE);
> +		if (dma_mapping_error(pp->dev->dev.parent, dma_addr)) {
> +			mvneta_txq_desc_put(txq);
> +			return MVNETA_XDP_CONSUMED;
> +		}
> +		buf->type = MVNETA_TYPE_XDP_NDO;
> +	} else {
> +		struct page *page = virt_to_page(xdpf->data);
> +
> +		dma_addr = page_pool_get_dma_addr(page) +
> +			   pp->rx_offset_correction + MVNETA_MH_SIZE;
> +		dma_sync_single_for_device(pp->dev->dev.parent, dma_addr,
> +					   xdpf->len, DMA_BIDIRECTIONAL);

This looks a little suspicious, XDP could have moved the start of frame
with adjust_head, right? You should also use xdpf->data to find where
the frame starts, no?
> +		buf->type = MVNETA_TYPE_XDP_TX;
> +	}
> +	buf->xdpf = xdpf;
> +
> +	tx_desc->command = MVNETA_TXD_FLZ_DESC;
> +	tx_desc->buf_phys_addr = dma_addr;
> +	tx_desc->data_size = xdpf->len;
> +
> +	mvneta_update_stats(pp, 1, xdpf->len, true);
> +	mvneta_txq_inc_put(txq);
> +	txq->pending++;
> +	txq->count++;
> +
> +	return MVNETA_XDP_TX;
> +}
> +
> +static int
> +mvneta_xdp_xmit_back(struct mvneta_port *pp, struct xdp_buff *xdp)
> +{
> +	struct xdp_frame *xdpf = convert_to_xdp_frame(xdp);
> +	int cpu = smp_processor_id();
> +	struct mvneta_tx_queue *txq;
> +	struct netdev_queue *nq;
> +	u32 ret;
> +
> +	if (unlikely(!xdpf))
> +		return MVNETA_XDP_CONSUMED;

Personally I'd prefer you hadn't called a function whose return code has
to be error checked in the local variable init.

> +
> +	txq = &pp->txqs[cpu % txq_number];
> +	nq = netdev_get_tx_queue(pp->dev, txq->id);
> +
> +	__netif_tx_lock(nq, cpu);
> +	ret = mvneta_xdp_submit_frame(pp, txq, xdpf, false);
> +	if (ret == MVNETA_XDP_TX)
> +		mvneta_txq_pend_desc_add(pp, txq, 0);
> +	__netif_tx_unlock(nq);
> +
> +	return ret;
> +}