Message-ID: <1493214483.3041.108.camel@redhat.com>
Date: Wed, 26 Apr 2017 09:48:03 -0400
From: Doug Ledford <dledford@...hat.com>
To: Honggang LI <honli@...hat.com>
Cc: Paolo Abeni <pabeni@...hat.com>, Or Gerlitz <gerlitz.or@...il.com>,
Erez Shitrit <erezsh@....mellanox.co.il>,
Erez Shitrit <erezsh@...lanox.com>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
Linux Netdev List <netdev@...r.kernel.org>,
David Miller <davem@...emloft.net>
Subject: Re: [PATCH] IB/IPoIB: Check the headroom size
On Wed, 2017-04-26 at 21:33 +0800, Honggang LI wrote:
> Yes, it is during the process of removing the final slave. The
> reproducer looks like this:
>
> ping remote_ip_over_bonding_interface &
> while true; do
>     ifdown bond0
>     ifup bond0
> done

Honestly, I would suspect the problem here is not when removing the
busy interface, but when bringing the interface back up. IIRC, the
bonding driver defaults to assuming it will be used on an Ethernet
interface. Once you attach an IB slave, it reconfigures itself for
running over IB instead. But once it's configured it should stay
configured until after the last IB slave is removed (and once that
slave is removed, the bond should no longer even possess the pointer to
the ipoib_hard_header routine, so we should never call it).
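
To make the intended lifetime of that pointer concrete, here is a toy
userspace model of it (the names model_bond, model_enslave and
fake_ipoib_hard_header are invented for the sketch; this is not the
actual bonding code, which roughly hangs the routine off the slave's
header_ops): the hard-header hook is adopted when the first slave
attaches and cleared when the last one leaves, so an empty bond can
only drop the skb, never call the hook.

#include <assert.h>
#include <stdio.h>

/* stand-in for the bond picking up the slave's hard-header routine */
struct model_bond {
        int nr_slaves;
        int (*hard_header)(const char *payload);
};

static int fake_ipoib_hard_header(const char *payload)
{
        printf("IB-style header built for \"%s\"\n", payload);
        return 0;
}

static void model_enslave(struct model_bond *bond, int (*hh)(const char *))
{
        /* first slave: reconfigure from the Ethernet default to IB */
        if (bond->nr_slaves++ == 0)
                bond->hard_header = hh;
}

static void model_release_slave(struct model_bond *bond)
{
        /* last slave gone: forget the routine entirely */
        if (--bond->nr_slaves == 0)
                bond->hard_header = NULL;
}

static void model_xmit(struct model_bond *bond, const char *payload)
{
        /* with no slaves there is nothing to call: drop on the floor */
        if (!bond->hard_header) {
                printf("dropped \"%s\"\n", payload);
                return;
        }
        bond->hard_header(payload);
}

int main(void)
{
        struct model_bond bond = { 0 };

        model_enslave(&bond, fake_ipoib_hard_header);
        model_xmit(&bond, "ping while enslaved");   /* uses the IB routine */
        model_release_slave(&bond);
        model_xmit(&bond, "ping after last slave"); /* must be dropped */
        assert(bond.hard_header == NULL);
        return 0;
}

If the crash really happened after the last slave was fully gone, that
pointer would already be NULL by the time of the transmit, which is why
I suspect the bring-up side instead.
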
The process, in the bonding driver, for going down and coming back up
needs to look something like this:
ifdown bond0:
    stop all queues
    remove all slaves
    possibly reconfigure to default state which is Ethernet suitable

ifup bond0:
    add first slave
    configure for IB instead of Ethernet
    start queues
    add additional slaves
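
Or in toy-model form (invented names again, just a sketch of the
ordering rather than actual driver code), with the constraint made
explicit that the queues only start once the bond has already been
switched over for its IB slave:

#include <assert.h>
#include <stdbool.h>

enum link_type { TYPE_ETHER, TYPE_IB };     /* the bond defaults to Ethernet */

struct model_bond {
        enum link_type type;
        bool queues_running;
        int nr_slaves;
};

static void model_start_queues(struct model_bond *b)
{
        /* the ordering constraint: the type already matches the IB slave */
        assert(b->nr_slaves > 0 && b->type == TYPE_IB);
        b->queues_running = true;
}

static void model_ifup(struct model_bond *b)
{
        b->nr_slaves = 1;               /* add first slave */
        b->type = TYPE_IB;              /* configure for IB instead of Ethernet */
        model_start_queues(b);          /* start queues */
        b->nr_slaves = 2;               /* add additional slaves */
}

static void model_ifdown(struct model_bond *b)
{
        b->queues_running = false;      /* stop all queues */
        b->nr_slaves = 0;               /* remove all slaves */
        b->type = TYPE_ETHER;           /* possibly back to the Ethernet default */
}

int main(void)
{
        struct model_bond bond = { .type = TYPE_ETHER };

        /* the reproducer's ifdown/ifup loop, in model form */
        for (int i = 0; i < 3; i++) {
                model_ifup(&bond);
                model_ifdown(&bond);
        }
        return 0;
}
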
I'm wondering if, when we have a current backlog, we aren't
accidentally doing this instead:
ifup bond0:
    add first slave
    release backlog queue
    configure for IB instead of Ethernet
    add additional slaves
Or, it might even be more subtle, such as:
ifup bond0:
    add first slave
    configure for IB instead of Ethernet
    start queues
    -> however, there was a backlog item on the queue from prior to
       the first slave being added and the port configured for IB mode,
       so the backlog skb is still configured for the default Ethernet
       mode, hence the failure
    add additional slaves
Maybe the real issue is that inside the bonding driver, when we
reconfigure the queue type from IB to Ethernet or Ethernet to IB, we
need to force either a drop or a reconfiguration of any skbs already
sitting in our waiting backlog queue.  Paolo?
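
Something like this toy model (invented names, nothing taken from the
real bonding or IPoIB code) is what I have in mind for the stale-backlog
case and for the purge-on-reconfigure idea: a packet queued while the
device is still in its Ethernet default keeps that layout, so when the
type flips to IB the backlog is walked and anything built for the old
type gets dropped instead of being handed to the IB hard-header path.

#include <stdio.h>

enum link_type { TYPE_ETHER, TYPE_IB };

struct model_skb {
        enum link_type built_for;   /* which type the headroom/header was laid out for */
};

#define BACKLOG_MAX 8

struct model_dev {
        enum link_type type;
        struct model_skb *backlog[BACKLOG_MAX];
        int backlog_len;
};

static void model_enqueue(struct model_dev *d, struct model_skb *skb)
{
        /* the skb is laid out for whatever the device is at enqueue time */
        skb->built_for = d->type;
        if (d->backlog_len < BACKLOG_MAX)
                d->backlog[d->backlog_len++] = skb;
}

/* the suggested fix: on a type change, purge anything built for the old type */
static void model_set_type(struct model_dev *d, enum link_type new_type)
{
        int kept = 0;

        for (int i = 0; i < d->backlog_len; i++) {
                if (d->backlog[i]->built_for == new_type)
                        d->backlog[kept++] = d->backlog[i];     /* still valid */
                else
                        printf("dropping backlog skb %d built for the old link type\n", i);
        }
        d->backlog_len = kept;
        d->type = new_type;
}

int main(void)
{
        struct model_dev dev = { .type = TYPE_ETHER };
        struct model_skb ping = { TYPE_ETHER };

        model_enqueue(&dev, &ping);     /* the ping queued before the first slave */
        model_set_type(&dev, TYPE_IB);  /* first IB slave added, bond reconfigured */
        return 0;
}

Without that purge, the queued ping would reach the IB hard-header path
still laid out for Ethernet, with too little headroom for the IPoIB
header, which is exactly the kind of failure the headroom check in this
patch would catch.
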
> >
> > but I don't think it can be after the final slave has been fully
> > removed from the bond. Paolo, what should the bond driver be doing
> > once the slaves are gone? Wouldn't it just be dropping every skb
> > on
> > the floor without calling anyone's hard header routine?
> >
> > >
> > > so it is better to immediately
> > > give up and return error.
> > >
> > > >
> > > >
> > > > +	if (ret)
> > > > +		return ret;
> > > >
> > > > 	header = (struct ipoib_header *) skb_push(skb, sizeof *header);
> > > > ---
> > > >
> > > > Paolo
--
Doug Ledford <dledford@...hat.com>
GPG KeyID: B826A3330E572FDD
Key fingerprint = AE6B 1BDA 122B 23B4 265B 1274 B826 A333 0E57 2FDD