Message-Id: <A1C63C57-98E9-4736-980E-83113E4545B4@cmss.chinamobile.com>
Date: Wed, 6 Sep 2017 14:39:02 +0800
From: 严海双 <yanhaishuang@...s.chinamobile.com>
To: Alexei Starovoitov <ast@...com>
Cc: "David S. Miller" <davem@...emloft.net>,
Pravin Shelar <pshelar@....org>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 2/2] ip6_tunnel: fix ip6 tunnel lookup in collect_md
mode
> On Sep 6, 2017, at 11:14 AM, Alexei Starovoitov <ast@...com> wrote:
>
> On 9/4/17 1:36 AM, Haishuang Yan wrote:
>> In collect_md mode, if the tun dev is down, __ip6_tnl_rcv can still
>> be called to receive packets, and the rx statistics are incremented
>> improperly.
>>
>> Fixes: 8d79266bc48c ("ip6_tunnel: add collect_md mode to IPv6 tunnels")
>> Cc: Alexei Starovoitov <ast@...com>
>> Signed-off-by: Haishuang Yan <yanhaishuang@...s.chinamobile.com>
>>
>> ---
>> Changes since v2:
>> * Increment rx_dropped if tunnel device is not up, suggested by
>> Pravin B Shelar
>> * Fix wrong recipient address
>> ---
>> net/ipv6/ip6_tunnel.c | 7 +++++--
>> 1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
>> index 10a693a..e91d3b6 100644
>> --- a/net/ipv6/ip6_tunnel.c
>> +++ b/net/ipv6/ip6_tunnel.c
>> @@ -171,8 +171,11 @@ static struct net_device_stats *ip6_get_stats(struct net_device *dev)
>> }
>>
>> t = rcu_dereference(ip6n->collect_md_tun);
>> - if (t)
>> - return t;
>> + if (t) {
>> + if (t->dev->flags & IFF_UP)
>> + return t;
>> + t->dev->stats.rx_dropped++;
>> + }
>
> Why increment the stats only for this drop case?
Because it was suggested by Pravin in his review of v2 of this patch.
> There are plenty of other conditions where packet
> will be dropped in ip6 tunnel. I think it's important
> to present consistent behavior to the users,
> so I'd increment drop stats either for all drop cases
> or for none. And today it's none.
> The !IFF_UP case should probably return NULL too
>
>