Message-ID: <3c396829-61b4-f4ca-6b30-1ac8ff99c7b4@lab.ntt.co.jp>
Date: Tue, 11 Sep 2018 20:07:20 +0900
From: Toshiaki Makita <makita.toshiaki@....ntt.co.jp>
To: Eric Dumazet <eric.dumazet@...il.com>,
Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org
Subject: Re: unexpected GRO/veth behavior

On 2018/09/11 19:27, Eric Dumazet wrote:
...
> Fix would probably be :
>
> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> index 8d679c8b7f25c753d77cfb8821d9d2528c9c9048..96bd94480942b469403abf017f9f9d5be1e23ef5 100644
> --- a/drivers/net/veth.c
> +++ b/drivers/net/veth.c
> @@ -602,9 +602,10 @@ static int veth_xdp_rcv(struct veth_rq *rq, int budget, unsigned int *xdp_xmit)
>  			skb = veth_xdp_rcv_skb(rq, ptr, xdp_xmit);
>  		}
>  
> -		if (skb)
> +		if (skb) {
> +			skb_orphan(skb);
>  			napi_gro_receive(&rq->xdp_napi, skb);
> -
> +		}
>  		done++;
>  	}
>  

Considering commit 9c4c3252 ("skbuff: preserve sock reference when
scrubbing the skb."), I'm not sure whether we should unconditionally
orphan the skb here.
I was thinking of calling netif_receive_skb() instead of
napi_gro_receive() for such packets, i.e. skbs that still carry a
socket reference.
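
Roughly something like the sketch below, in the same hunk of
veth_xdp_rcv(). This is untested and only meant to illustrate the
idea; the skb->sk check is my assumption about how to detect skbs
that still hold a socket reference:

	if (skb) {
		/* Sketch only (assumes skb->sk identifies such packets):
		 * keep GRO for ordinary skbs, but hand skbs that still
		 * carry a socket reference straight to the stack so we
		 * don't need to orphan them before GRO.
		 */
		if (skb->sk)
			netif_receive_skb(skb);
		else
			napi_gro_receive(&rq->xdp_napi, skb);
	}
	done++;
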
--
Toshiaki Makita