Message-ID: <20200320133737.GA2329672@lore-desk-wlan>
Date: Fri, 20 Mar 2020 14:37:37 +0100
From: Lorenzo Bianconi <lorenzo@...nel.org>
To: Toshiaki Makita <toshiaki.makita1@...il.com>
Cc: netdev@...r.kernel.org, davem@...emloft.net, brouer@...hat.com,
dsahern@...il.com, lorenzo.bianconi@...hat.com, toke@...hat.com
Subject: Re: [PATCH net-next 4/5] veth: introduce more xdp counters
> On 2020/03/20 1:41, Lorenzo Bianconi wrote:
> > Introduce xdp_xmit counter in order to distinguish between XDP_TX and
> > ndo_xdp_xmit stats. Introduce the following ethtool counters:
> > - rx_xdp_tx
> > - rx_xdp_tx_errors
> > - tx_xdp_xmit
> > - tx_xdp_xmit_errors
> > - rx_xdp_redirect
>
> Thank you for working on this!
>
> > Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> > ---
> ...
> > @@ -395,7 +404,8 @@ static int veth_xdp_xmit(struct net_device *dev, int n,
> > }
> > rcv_priv = netdev_priv(rcv);
> > - rq = &rcv_priv->rq[veth_select_rxq(rcv)];
> > + qidx = veth_select_rxq(rcv);
> > + rq = &rcv_priv->rq[qidx];
> > /* Non-NULL xdp_prog ensures that xdp_ring is initialized on receive
> > * side. This means an XDP program is loaded on the peer and the peer
> > * device is up.
> > @@ -424,6 +434,17 @@ static int veth_xdp_xmit(struct net_device *dev, int n,
> > if (flags & XDP_XMIT_FLUSH)
> > __veth_xdp_flush(rq);
> > + rq = &priv->rq[qidx];
>
> I think there is no guarantee that this rq exists. qidx is less than
> rcv->real_num_rx_queues, but not necessarily less than
> dev->real_num_rx_queues.
>
> > + u64_stats_update_begin(&rq->stats.syncp);
>
> So this can cause a NULL pointer dereference.
Oh right, thanks for spotting this.
I think we can recompute qidx for the tx netdevice in this case, doing
something like:

	qidx = veth_select_rxq(dev);
	rq = &priv->rq[qidx];
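
Roughly, the end of veth_xdp_xmit() from this patch would then look like the
sketch below (untested, just to illustrate the idea; same fields as in the
hunk quoted above):

	if (flags & XDP_XMIT_FLUSH)
		__veth_xdp_flush(rq);

	/* stats are accounted on the sending device, so recompute the
	 * ring index against dev instead of reusing the one derived
	 * from rcv
	 */
	qidx = veth_select_rxq(dev);
	rq = &priv->rq[qidx];

	u64_stats_update_begin(&rq->stats.syncp);
	if (ndo_xmit) {
		rq->stats.vs.xdp_xmit += n - drops;
		rq->stats.vs.xdp_xmit_err += drops;
	} else {
		rq->stats.vs.xdp_tx += n - drops;
		rq->stats.vs.xdp_tx_err += drops;
	}
	u64_stats_update_end(&rq->stats.syncp);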
What do you think?
Regards,
Lorenzo
>
> Toshiaki Makita
>
> > + if (ndo_xmit) {
> > + rq->stats.vs.xdp_xmit += n - drops;
> > + rq->stats.vs.xdp_xmit_err += drops;
> > + } else {
> > + rq->stats.vs.xdp_tx += n - drops;
> > + rq->stats.vs.xdp_tx_err += drops;
> > + }
> > + u64_stats_update_end(&rq->stats.syncp);
> > +
> > if (likely(!drops)) {
> > rcu_read_unlock();
> > return n;
> > @@ -437,11 +458,17 @@ static int veth_xdp_xmit(struct net_device *dev, int n,
> > return ret;
> > }
> > +static int veth_ndo_xdp_xmit(struct net_device *dev, int n,
> > + struct xdp_frame **frames, u32 flags)
> > +{
> > + return veth_xdp_xmit(dev, n, frames, flags, true);
> > +}
> > +
> > static void veth_xdp_flush_bq(struct net_device *dev, struct veth_xdp_tx_bq *bq)
> > {
> > int sent, i, err = 0;
> > - sent = veth_xdp_xmit(dev, bq->count, bq->q, 0);
> > + sent = veth_xdp_xmit(dev, bq->count, bq->q, 0, false);
> > if (sent < 0) {
> > err = sent;
> > sent = 0;
> > @@ -753,6 +780,7 @@ static int veth_xdp_rcv(struct veth_rq *rq, int budget,
> > }
> > u64_stats_update_begin(&rq->stats.syncp);
> > + rq->stats.vs.xdp_redirect += stats->xdp_redirect;
> > rq->stats.vs.xdp_bytes += stats->xdp_bytes;
> > rq->stats.vs.xdp_drops += stats->xdp_drops;
> > rq->stats.vs.rx_drops += stats->rx_drops;
> > @@ -1172,7 +1200,7 @@ static const struct net_device_ops veth_netdev_ops = {
> > .ndo_features_check = passthru_features_check,
> > .ndo_set_rx_headroom = veth_set_rx_headroom,
> > .ndo_bpf = veth_xdp,
> > - .ndo_xdp_xmit = veth_xdp_xmit,
> > + .ndo_xdp_xmit = veth_ndo_xdp_xmit,
> > };
> > #define VETH_FEATURES (NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_HW_CSUM | \
> >