Message-ID: <1289863543.3364.24.camel@edumazet-laptop>
Date: Tue, 16 Nov 2010 00:25:43 +0100
From: Eric Dumazet <eric.dumazet@...il.com>
To: John Fastabend <john.r.fastabend@...el.com>
Cc: Stephen Hemminger <shemminger@...tta.com>, lorenzo@...gle.com,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
brian.haley@...com, maze@...gle.com
Subject: Re: network device reference leak with net-next
On Monday, 15 November 2010 at 15:18 -0800, John Fastabend wrote:
> On 11/15/2010 11:10 AM, Stephen Hemminger wrote:
> > On Mon, 15 Nov 2010 20:04:00 +0100
> > Eric Dumazet <eric.dumazet@...il.com> wrote:
> >
> >> On Monday, 15 November 2010 at 10:56 -0800, Stephen Hemminger wrote:
> >>> This is a new regression (doesn't exist with 2.6.36)
> >>>
> >>> If I shutdown KVM instance with Virt manager, the virtual
> >>> interfaces in the bridge aren't getting cleaned up because
> >>> of leftover reference count.
> >>>
> >>>
> >>> [ 9781.050474] unregister_netdevice: waiting for vnet1 to become free. Usage count = 1
> >>> [ 9785.143400] virbr4: port 3(vnet6) entering forwarding state
> >>> [ 9785.177194] virbr4: port 3(vnet6) entering disabled state
> >>> [ 9785.201129] device vnet6 left promiscuous mode
> >>> [ 9785.201135] virbr4: port 3(vnet6) entering disabled state
> >>> [ 9791.286950] unregister_netdevice: waiting for vnet1 to become free. Usage count = 1
> >>> [ 9795.461526] unregister_netdevice: waiting for vnet6 to become free. Usage count = 1
> >>> [ 9801.523398] unregister_netdevice: waiting for vnet1 to become free. Usage count = 1
> >>> --
> >>
> >> Does the refcount stay at 1 forever, or does it eventually reach 0?
> >>
> >>
> >>
> >
> > It stays at 1 for as long as I waited, about 10 minutes.
> >
>
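A note on that message while we are here: it comes from the loop that waits
for the device refcount to drop to zero before unregister can finish, and it
is re-printed roughly every 10 seconds (the timestamps above show the same
interval). Below is a tiny userspace model of that behaviour -- this is NOT
net/core/dev.c, just an illustration of why a single leaked reference makes
the warning repeat forever; the device name and the count are hard-coded for
the example:

/* Standalone model of the "waiting for <dev> to become free" behaviour.
 * Not kernel code: usage_count stands in for the device refcount. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int usage_count = 1;	/* one leaked reference, as in the report */

	while (usage_count != 0) {
		fprintf(stderr,
			"unregister_netdevice: waiting for vnet1 to become free. "
			"Usage count = %d\n", usage_count);
		sleep(10);	/* the kernel warns on about the same period */
	}
	return 0;		/* never reached while the reference is leaked */
}
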
> A similar refcount error with ixgbe was bisected to this patch:
>
> commit 2de795707294972f6c34bae9de713e502c431296
> Author: Lorenzo Colitti <lorenzo@...gle.com>
> Date: Wed Oct 27 18:16:49 2010 +0000
>
> ipv6: addrconf: don't remove address state on ifdown if the address is being kept
>
> Currently, addrconf_ifdown does not delete statically configured IPv6
> addresses when the interface is brought down. The intent is that when
> the interface comes back up the address will be usable again. However,
> this doesn't actually work, because the system stops listening on the
> corresponding solicited-node multicast address, so the address cannot
> respond to neighbor solicitations and thus receive traffic. Also, the
> code notifies the rest of the system that the address is being deleted
> (e.g., RTM_DELADDR), even though it is not. Fix it so that none of this
> state is updated if the address is being kept on the interface.
>
> Tested: Added a statically configured IPv6 address to an interface,
> started ping, brought the link down, brought the link up again. When the
> link came up the ping kept going and "ip -6 maddr" showed that the host
> was still subscribed to the solicited-node multicast address.
>
> Signed-off-by: Lorenzo Colitti <lorenzo@...gle.com>
> Signed-off-by: David S. Miller <davem@...emloft.net>
>
>
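For context on the solicited-node address the commit message refers to: it is
derived from the low 24 bits of the unicast address (ff02::1:ffXX:XXXX per
RFC 4291), so if the host drops that multicast subscription while keeping the
unicast address, neighbor solicitations for the address go unanswered. A
small standalone sketch of the derivation (illustration only, not taken from
net/ipv6; the example address is arbitrary):

/* Compute the solicited-node multicast address for an IPv6 unicast
 * address: the fixed ff02::1:ff00:0/104 prefix plus the unicast
 * address's low 24 bits. */
#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
	const char *unicast = "2001:db8::1:2345:6789";
	struct in6_addr addr, snm;
	char buf[INET6_ADDRSTRLEN];

	if (inet_pton(AF_INET6, unicast, &addr) != 1)
		return 1;

	memset(&snm, 0, sizeof(snm));
	snm.s6_addr[0]  = 0xff;			/* ff02::1:ff00:0/104 */
	snm.s6_addr[1]  = 0x02;
	snm.s6_addr[11] = 0x01;
	snm.s6_addr[12] = 0xff;
	snm.s6_addr[13] = addr.s6_addr[13];	/* low 24 bits of the */
	snm.s6_addr[14] = addr.s6_addr[14];	/* unicast address    */
	snm.s6_addr[15] = addr.s6_addr[15];

	printf("%s -> %s\n", unicast,
	       inet_ntop(AF_INET6, &snm, buf, sizeof(buf)));
	return 0;
}
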
> At a quick glance it looks like an in6_ifa_put() is missing?
>
> --- a/net/ipv6/addrconf.c
> +++ b/net/ipv6/addrconf.c
> @@ -2754,13 +2754,13 @@ static int addrconf_ifdown(struct net_device *dev, int how)
> ifa->state = INET6_IFADDR_STATE_DEAD;
> spin_unlock_bh(&ifa->state_lock);
>
> - if (state == INET6_IFADDR_STATE_DEAD) {
> - in6_ifa_put(ifa);
> - } else {
> + if (state != INET6_IFADDR_STATE_DEAD) {
> __ipv6_ifa_notify(RTM_DELADDR, ifa);
> atomic_notifier_call_chain(&inet6addr_chain,
> NETDEV_DOWN, ifa);
> }
> +
> + in6_ifa_put(ifa);
> write_lock_bh(&idev->lock);
> }
> }
>
> With a quick hack, the above seems to resolve the issue, but I'll need to review the ipv6 code to be sure this is sane.
>
> Thanks,
> John.
Your machine is faster than mine ;)
Yes, this commit seems buggy, thanks for finding the problem!
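
For the record, the way the leak reaches the device refcount: as far as I can
see, each inet6_ifaddr pins its inet6_dev and the inet6_dev pins the
net_device, so an address whose reference is never dropped keeps the device
"Usage count" stuck at 1 (please double-check, I am writing this from
memory). The shape of the fix is the usual one: drop the reference taken for
the walk on every path, and only send the notifications when the address was
not already dead. A minimal standalone model of that pattern follows -- the
types and names below are invented, nothing here is copied from addrconf.c:

/* Toy model of "put the reference on every branch".  obj stands in for
 * an inet6_ifaddr, obj_put() for in6_ifa_put(), and the printf for the
 * RTM_DELADDR / NETDEV_DOWN notifications. */
#include <stdio.h>

struct obj {
	int refcnt;
	int already_dead;
};

static void obj_put(struct obj *o)
{
	if (--o->refcnt == 0)
		printf("  object freed, device reference released\n");
}

/* Shape of the bisected commit: the put only happens on the
 * already-dead branch, so a live address leaks the walker's ref. */
static void walk_buggy(struct obj *o)
{
	if (o->already_dead)
		obj_put(o);
	else
		printf("  notify RTM_DELADDR / NETDEV_DOWN\n");
}

/* Shape of John's patch: notify only when not already dead, but drop
 * the reference unconditionally. */
static void walk_fixed(struct obj *o)
{
	if (!o->already_dead)
		printf("  notify RTM_DELADDR / NETDEV_DOWN\n");
	obj_put(o);
}

int main(void)
{
	struct obj ifa = { .refcnt = 1, .already_dead = 0 };

	printf("buggy walk:\n");
	walk_buggy(&ifa);
	printf("  refcnt after buggy walk = %d (leaked)\n", ifa.refcnt);

	ifa.refcnt = 1;
	printf("fixed walk:\n");
	walk_fixed(&ifa);
	printf("  refcnt after fixed walk = %d\n", ifa.refcnt);
	return 0;
}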