Message-ID: <71c6d8f2-42b3-b6b9-8fbb-13e8e18a8217@gmail.com>
Date: Sat, 13 Oct 2018 17:57:34 +0900
From: Toshiaki Makita <toshiaki.makita1@...il.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: Toshiaki Makita <makita.toshiaki@....ntt.co.jp>,
"David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [PATCH net-next 1/3] veth: Account for packet drops in
ndo_xdp_xmit
On 18/10/13 (Sat) 16:48, Jesper Dangaard Brouer wrote:
> On Thu, 11 Oct 2018 18:36:48 +0900
> Toshiaki Makita <makita.toshiaki@....ntt.co.jp> wrote:
>
>> Use the existing atomic drop counter. Since the drop path is really an
>> exceptional case here, I think atomic ops will not hurt performance.
>
> Hmm... we try very hard not to add atomic ops to the XDP code path. The
> XDP_DROP case is also considered a hot path. In the code below, the
> atomic64_add happens for a bulk of dropped packets (currently up to
> 16), so it might be okay.
Yes, this happens only once per bulk send.
Note that this drop counter does not include XDP_DROP. It is incremented when
- the ndo_xdp_xmit "flags" arg is invalid
- the peer is detached
- no XDP program is loaded on the peer
- the XDP ring (256 slots) overflows
So these are really exceptional cases. XDP_DROP is counted on a per-queue
basis (non-atomic) in patch 2/3.
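To make the trade-off concrete, below is a minimal, self-contained userspace
C sketch of the two accounting schemes. It is illustrative only, not the
actual veth code: the names fake_priv, fake_rq, count_xdp_drop, xmit_bulk and
BULK_MAX are made up, and C11 atomics stand in for the kernel's atomic64_t.
The point is that the hot per-queue XDP_DROP counter needs no atomic op,
while the shared drop counter is bumped at most once per bulk on an
exceptional failure.

/* Illustrative sketch only (hypothetical names, C11 atomics instead of
 * atomic64_t). Shows once-per-bulk atomic accounting for exceptional
 * drops vs. a plain per-queue counter for the hot XDP_DROP path.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define BULK_MAX 16  /* ndo_xdp_xmit currently sends bulks of up to 16 frames */

/* Shared, device-level counter: only touched on exceptional failures. */
struct fake_priv {
	_Atomic unsigned long dropped;
};

/* Per-queue counter: owned by one CPU, so a plain increment is enough. */
struct fake_rq {
	unsigned long xdp_drops;
};

/* Hot path: the XDP program returned XDP_DROP for one frame on this queue. */
static void count_xdp_drop(struct fake_rq *rq)
{
	rq->xdp_drops++;              /* no atomic op, no cache-line bouncing */
}

/* Exceptional path: the whole bulk of n frames could not be queued. */
static int xmit_bulk(struct fake_priv *priv, int n, bool peer_ok)
{
	if (!peer_ok) {
		/* one atomic add for the whole bulk, not one per packet */
		atomic_fetch_add_explicit(&priv->dropped, (unsigned long)n,
					  memory_order_relaxed);
		return -1;
	}
	return n;                     /* all n frames handed to the peer */
}

int main(void)
{
	struct fake_priv priv = { 0 };
	struct fake_rq rq = { 0 };

	count_xdp_drop(&rq);                  /* hot-path drop, cheap */
	xmit_bulk(&priv, BULK_MAX, false);    /* exceptional bulk failure */

	printf("per-queue XDP_DROP: %lu, device dropped: %lu\n",
	       rq.xdp_drops, atomic_load(&priv.dropped));
	return 0;
}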
Toshiaki Makita
>
>> XDP packets and bytes are not counted in ndo_xdp_xmit, but will be
>> accounted for on the rx side by the following commit.
>>
>> Signed-off-by: Toshiaki Makita <makita.toshiaki@....ntt.co.jp>
>> ---
>> drivers/net/veth.c | 30 ++++++++++++++++++++++--------
>> 1 file changed, 22 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
>> index 224c56a..452193f2 100644
>> --- a/drivers/net/veth.c
>> +++ b/drivers/net/veth.c
>> @@ -308,16 +308,20 @@ static int veth_xdp_xmit(struct net_device *dev, int n,
>>  {
>>  	struct veth_priv *rcv_priv, *priv = netdev_priv(dev);
>>  	struct net_device *rcv;
>> +	int i, ret, drops = n;
>>  	unsigned int max_len;
>>  	struct veth_rq *rq;
>> -	int i, drops = 0;
>>  
>> -	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
>> -		return -EINVAL;
>> +	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) {
>> +		ret = -EINVAL;
>> +		goto drop;
>> +	}
>>  
>>  	rcv = rcu_dereference(priv->peer);
>> -	if (unlikely(!rcv))
>> -		return -ENXIO;
>> +	if (unlikely(!rcv)) {
>> +		ret = -ENXIO;
>> +		goto drop;
>> +	}
>>  
>>  	rcv_priv = netdev_priv(rcv);
>>  	rq = &rcv_priv->rq[veth_select_rxq(rcv)];
>> @@ -325,9 +329,12 @@ static int veth_xdp_xmit(struct net_device *dev, int n,
>>  	 * side. This means an XDP program is loaded on the peer and the peer
>>  	 * device is up.
>>  	 */
>> -	if (!rcu_access_pointer(rq->xdp_prog))
>> -		return -ENXIO;
>> +	if (!rcu_access_pointer(rq->xdp_prog)) {
>> +		ret = -ENXIO;
>> +		goto drop;
>> +	}
>>  
>> +	drops = 0;
>>  	max_len = rcv->mtu + rcv->hard_header_len + VLAN_HLEN;
>>  
>>  	spin_lock(&rq->xdp_ring.producer_lock);
>> @@ -346,7 +353,14 @@ static int veth_xdp_xmit(struct net_device *dev, int n,
>>  	if (flags & XDP_XMIT_FLUSH)
>>  		__veth_xdp_flush(rq);
>>  
>> -	return n - drops;
>> +	if (likely(!drops))
>> +		return n;
>> +
>> +	ret = n - drops;
>> +drop:
>> +	atomic64_add(drops, &priv->dropped);
>> +
>> +	return ret;
>>  }
>>  
>>  static void veth_xdp_flush(struct net_device *dev)