Message-ID: <20181013094828.00979d39@redhat.com>
Date: Sat, 13 Oct 2018 09:48:28 +0200
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Toshiaki Makita <makita.toshiaki@....ntt.co.jp>
Cc: "David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org,
brouer@...hat.com
Subject: Re: [PATCH net-next 1/3] veth: Account for packet drops in
ndo_xdp_xmit
On Thu, 11 Oct 2018 18:36:48 +0900
Toshiaki Makita <makita.toshiaki@....ntt.co.jp> wrote:
> Use the existing atomic drop counter. Since the drop path is really an
> exceptional case here, I'm thinking the atomic ops would not hurt
> performance.
Hmm... we try very hard not to add atomic ops to the XDP code path. The
XDP_DROP case is also considered hot-path. In the code below, the
atomic64_add happens for a bulk of dropped packets (currently up to
16), so it might be okay.
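To spell it out, here is a rough sketch contrasting the two ways of
accounting (this is not the exact veth code; the enqueue check is
simplified, and names follow the patch below). Variant A pays one
atomic op per dropped frame on the hot path; variant B, which the patch
does, only folds a local counter into the atomic once per bulk.

/* Sketch only, not the exact driver code.
 * Variant A: one atomic op for every dropped frame (what we avoid).
 */
for (i = 0; i < n; i++) {
        if (unlikely(__ptr_ring_produce(&rq->xdp_ring, frames[i]))) {
                xdp_return_frame_rx_napi(frames[i]);
                atomic64_inc(&priv->dropped);   /* per-packet atomic */
        }
}

/* Variant B (as in the patch): count locally, then one atomic64_add per
 * bulk of up to 16 frames, and none at all when nothing is dropped.
 */
for (i = 0; i < n; i++) {
        if (unlikely(__ptr_ring_produce(&rq->xdp_ring, frames[i]))) {
                xdp_return_frame_rx_napi(frames[i]);
                drops++;                        /* local, no atomic */
        }
}
if (drops)
        atomic64_add(drops, &priv->dropped);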
> XDP packets and bytes are not counted in ndo_xdp_xmit, but will be
> accounted on the rx side by the following commit.
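I assume the rx-side accounting will end up as plain per-rq counters
updated from the NAPI poll path, something like the sketch below
(struct and field names are just illustrative, not necessarily what the
follow-up patch uses), so no atomics are needed there either:

/* Illustrative sketch only -- not the actual follow-up patch. */
struct veth_rq_stats {
        u64                     xdp_packets;
        u64                     xdp_bytes;
        u64                     xdp_drops;
        struct u64_stats_sync   syncp;
};

/* Updated from the NAPI poll loop, so per-queue and lockless: */
u64_stats_update_begin(&rq->stats.syncp);
rq->stats.xdp_packets += packets;
rq->stats.xdp_bytes   += bytes;
rq->stats.xdp_drops   += drops;
u64_stats_update_end(&rq->stats.syncp);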
>
> Signed-off-by: Toshiaki Makita <makita.toshiaki@....ntt.co.jp>
> ---
> drivers/net/veth.c | 30 ++++++++++++++++++++++--------
> 1 file changed, 22 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> index 224c56a..452193f2 100644
> --- a/drivers/net/veth.c
> +++ b/drivers/net/veth.c
> @@ -308,16 +308,20 @@ static int veth_xdp_xmit(struct net_device *dev, int n,
>  {
>  	struct veth_priv *rcv_priv, *priv = netdev_priv(dev);
>  	struct net_device *rcv;
> +	int i, ret, drops = n;
>  	unsigned int max_len;
>  	struct veth_rq *rq;
> -	int i, drops = 0;
> 
> -	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
> -		return -EINVAL;
> +	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) {
> +		ret = -EINVAL;
> +		goto drop;
> +	}
> 
>  	rcv = rcu_dereference(priv->peer);
> -	if (unlikely(!rcv))
> -		return -ENXIO;
> +	if (unlikely(!rcv)) {
> +		ret = -ENXIO;
> +		goto drop;
> +	}
> 
>  	rcv_priv = netdev_priv(rcv);
>  	rq = &rcv_priv->rq[veth_select_rxq(rcv)];
> @@ -325,9 +329,12 @@ static int veth_xdp_xmit(struct net_device *dev, int n,
>  	 * side. This means an XDP program is loaded on the peer and the peer
>  	 * device is up.
>  	 */
> -	if (!rcu_access_pointer(rq->xdp_prog))
> -		return -ENXIO;
> +	if (!rcu_access_pointer(rq->xdp_prog)) {
> +		ret = -ENXIO;
> +		goto drop;
> +	}
> 
> +	drops = 0;
>  	max_len = rcv->mtu + rcv->hard_header_len + VLAN_HLEN;
> 
>  	spin_lock(&rq->xdp_ring.producer_lock);
> @@ -346,7 +353,14 @@ static int veth_xdp_xmit(struct net_device *dev, int n,
>  	if (flags & XDP_XMIT_FLUSH)
>  		__veth_xdp_flush(rq);
> 
> -	return n - drops;
> +	if (likely(!drops))
> +		return n;
> +
> +	ret = n - drops;
> +drop:
> +	atomic64_add(drops, &priv->dropped);
> +
> +	return ret;
>  }
> 
>  static void veth_xdp_flush(struct net_device *dev)
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer