Date:   Tue, 5 May 2020 15:35:27 -0700
From:   Cong Wang <xiyou.wangcong@...il.com>
To:     Michal Kubecek <mkubecek@...e.cz>
Cc:     Linux Kernel Network Developers <netdev@...r.kernel.org>,
        syzbot <syzbot+e73ceacfd8560cc8a3ca@...kaller.appspotmail.com>,
        syzbot+c2fb6f9ddcea95ba49b5@...kaller.appspotmail.com,
        Jarod Wilson <jarod@...hat.com>,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        Jay Vosburgh <j.vosburgh@...il.com>,
        Jann Horn <jannh@...gle.com>
Subject: Re: [Patch net] net: fix a potential recursive NETDEV_FEAT_CHANGE

On Tue, May 5, 2020 at 3:27 PM Michal Kubecek <mkubecek@...e.cz> wrote:
>
> On Tue, May 05, 2020 at 02:58:19PM -0700, Cong Wang wrote:
> > syzbot managed to trigger a recursive NETDEV_FEAT_CHANGE event
> > between a bonding master and its slave. I managed to find a
> > reproducer for this:
> >
> >   ip li set bond0 up
> >   ifenslave bond0 eth0
> >   brctl addbr br0
> >   ethtool -K eth0 lro off
> >   brctl addif br0 bond0
> >   ip li set br0 up
> >
> > When a NETDEV_FEAT_CHANGE event is triggered on a bonding slave,
> > the bonding driver catches it and calls bond_compute_features()
> > to fix up its master's and the other slaves' features. However,
> > when the master then syncs with its lower devices via
> > netdev_sync_lower_features(), the event is triggered again on the
> > slaves, so it bounces back and forth recursively until the kernel
> > stack is exhausted.
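> >
> > Roughly, the loop looks like this (a simplified sketch of the call
> > chain, not the exact sequence):
> >
> >   NETDEV_FEAT_CHANGE on a slave
> >     -> bond_netdev_event() -> bond_compute_features()
> >       -> netdev_change_features(master)
> >         -> __netdev_update_features() -> netdev_sync_lower_features()
> >           -> netdev_update_features(slave)
> >             -> NETDEV_FEAT_CHANGE on the slave again, and so on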
> >
> > It is unnecessary to trigger the event a second time, because when
> > we update features from the top down we rely on each
> > dev->netdev_ops->ndo_fix_features() to do the job, which every
> > stacked device should implement. The NETDEV_FEAT_CHANGE event is
> > only needed when we update from the bottom up, as existing stacked
> > device implementations do.
> >
> > Just calling __netdev_update_features() is sufficient to fix this
> > issue.
> >
> > Fixes: fd867d51f889 ("net/core: generic support for disabling netdev features down stack")
> > Reported-by: syzbot+e73ceacfd8560cc8a3ca@...kaller.appspotmail.com
> > Reported-by: syzbot+c2fb6f9ddcea95ba49b5@...kaller.appspotmail.com
> > Cc: Jarod Wilson <jarod@...hat.com>
> > Cc: Josh Poimboeuf <jpoimboe@...hat.com>
> > Cc: Jay Vosburgh <j.vosburgh@...il.com>
> > Cc: Jann Horn <jannh@...gle.com>
> > Signed-off-by: Cong Wang <xiyou.wangcong@...il.com>
> > ---
> >  net/core/dev.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index 522288177bbd..ece50ae346c3 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -8907,7 +8907,7 @@ static void netdev_sync_lower_features(struct net_device *upper,
> >                       netdev_dbg(upper, "Disabling feature %pNF on lower dev %s.\n",
> >                                  &feature, lower->name);
> >                       lower->wanted_features &= ~feature;
> > -                     netdev_update_features(lower);
> > +                     __netdev_update_features(lower);
> >
> >                       if (unlikely(lower->features & feature))
> >                               netdev_WARN(upper, "failed to disable %pNF on %s!\n",
>
> Wouldn't this mean that when we disable LRO on a bond manually with
> "ethtool -K", LRO will be also disabled on its slaves but no netlink
> notification for them would be sent to userspace?

What netlink notification are you talking about?

When we change features from the top down, ->ndo_fix_features()
does the work; in the bonding case, that is bond_fix_features().
I see no netlink notification in either bond_compute_features()
or bond_fix_features().
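
For context, the only thing the patch drops for the lower device is
the notifier call; roughly (a simplified sketch of net/core/dev.c,
details may vary by kernel version):

	/* Fire the in-kernel NETDEV_FEAT_CHANGE notifier chain. */
	void netdev_features_change(struct net_device *dev)
	{
		call_netdevice_notifiers(NETDEV_FEAT_CHANGE, dev);
	}

	/* Recompute features and, only if they changed, notify. */
	void netdev_update_features(struct net_device *dev)
	{
		if (__netdev_update_features(dev))
			netdev_features_change(dev);
	}

So __netdev_update_features() still recomputes and syncs the lower
device's features; it just no longer fires NETDEV_FEAT_CHANGE for it.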

Thanks.
