Message-ID: <1270026517.2103.9.camel@edumazet-laptop>
Date: Wed, 31 Mar 2010 11:08:37 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Andy Gospodarek <andy@...yhouse.net>,
David Miller <davem@...emloft.net>
Cc: netdev@...r.kernel.org, lhh@...hat.com, fubar@...ibm.com,
bonding-devel@...ts.sourceforge.net
Subject: Re: [net-2.6 PATCH] bonding: fix broken multicast with round-robin
mode
On Thursday, 25 March 2010 at 17:40 -0400, Andy Gospodarek wrote:
> Round-robin (mode 0) does nothing to ensure that any multicast traffic
> originally destined for the host will continue to arrive at the host when
> the link that sent the IGMP join or membership report goes down; that is
> a consequence of doing absolute round-robin transmit.
>
> Keeping track of subscribed multicast groups for each slave did not seem
> like a good use of resources, so I decided to simply send on the
> curr_active_slave of the bond (typically the first enslaved device that
> is up). This makes failover management simple, as IGMP membership
> reports only need to be sent when the curr_active_slave changes. I
> tested this patch and it appears to work as expected.
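
[Aside: for illustration, a minimal sketch of what such a rejoin helper
could look like on kernels of this vintage -- this assumes the
__in_dev_get_rcu() and ip_mc_rejoin_group() interfaces of the era and is
not necessarily the helper the patch below actually adds:

#include <linux/inetdevice.h>
#include <linux/igmp.h>

static void bond_resend_igmp_join_requests(struct bonding *bond)
{
	struct in_device *in_dev;
	struct ip_mc_list *im;

	rcu_read_lock();
	in_dev = __in_dev_get_rcu(bond->dev);
	if (in_dev) {
		/* Rejoin every IPv4 multicast group on the bond device so
		 * that fresh IGMP reports go out via the new
		 * curr_active_slave. */
		read_lock(&in_dev->mc_list_lock);
		for (im = in_dev->mc_list; im; im = im->next)
			ip_mc_rejoin_group(im);
		read_unlock(&in_dev->mc_list_lock);
	}
	rcu_read_unlock();
}
]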
>
> Originally reported by Lon Hohberger <lhh@...hat.com>.
>
> Signed-off-by: Andy Gospodarek <andy@...yhouse.net>
> CC: Lon Hohberger <lhh@...hat.com>
> CC: Jay Vosburgh <fubar@...ibm.com>
>
> ---
> drivers/net/bonding/bond_main.c | 34 ++++++++++++++++++++++++++--------
> 1 files changed, 26 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
> index 430c022..0b38455 100644
> --- a/drivers/net/bonding/bond_main.c
> +++ b/drivers/net/bonding/bond_main.c
> @@ -1235,6 +1235,11 @@ void bond_change_active_slave(struct bonding *bond, struct slave *new_active)
> write_lock_bh(&bond->curr_slave_lock);
> }
> }
> +
> + /* resend IGMP joins since all were sent on curr_active_slave */
> + if (bond->params.mode == BOND_MODE_ROUNDROBIN) {
> + bond_resend_igmp_join_requests(bond);
> + }
> }
>
> /**
> @@ -4138,22 +4143,35 @@ static int bond_xmit_roundrobin(struct sk_buff *skb, struct net_device *bond_dev
> struct bonding *bond = netdev_priv(bond_dev);
> struct slave *slave, *start_at;
> int i, slave_no, res = 1;
> + struct iphdr *iph = ip_hdr(skb);
>
> read_lock(&bond->lock);
>
> if (!BOND_IS_OK(bond))
> goto out;
> -
> /*
> - * Concurrent TX may collide on rr_tx_counter; we accept that
> - * as being rare enough not to justify using an atomic op here
> + * Start with the curr_active_slave that joined the bond as the
> + * default for sending IGMP traffic. For failover purposes one
> + * needs to maintain some consistency for the interface that will
> + * send the join/membership reports. The curr_active_slave found
> + * will send all of this type of traffic.
> */
> - slave_no = bond->rr_tx_counter++ % bond->slave_cnt;
> + if ((skb->protocol == htons(ETH_P_IP)) &&
> + (iph->protocol == htons(IPPROTO_IGMP))) {
Hmm...

iph->protocol is a u8; how can htons(IPPROTO_IGMP) ever be equal to
iph->protocol?
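
To make the failure mode concrete: on a little-endian host,
htons(IPPROTO_IGMP) is 0x0200 (512), the u8 field is promoted to int for
the comparison, and no u8 value can ever equal 512. A quick user-space
sketch (assuming a little-endian machine such as x86) demonstrates this:

#include <stdio.h>
#include <stdint.h>
#include <netinet/in.h>		/* IPPROTO_IGMP, htons() */

int main(void)
{
	uint8_t protocol = IPPROTO_IGMP; /* struct iphdr's protocol field is a u8 */

	/* htons(IPPROTO_IGMP) == htons(2) == 512 on little-endian; the u8
	 * is promoted to int, and no u8 value can equal 512, hence gcc's
	 * "comparison is always false" warning. */
	printf("htons(IPPROTO_IGMP) = %u\n", (unsigned)htons(IPPROTO_IGMP));
	printf("broken test: %d\n", protocol == htons(IPPROTO_IGMP));

	/* The fixed test compares the byte directly: */
	printf("fixed test:  %d\n", protocol == IPPROTO_IGMP);
	return 0;
}

(On big-endian machines htons() is a no-op, which is why the broken test
happened to work there.)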
[PATCH] bonding: bond_xmit_roundrobin() fix
Commit a2fd940f (bonding: fix broken multicast with round-robin mode)
introduced a problem on little-endian machines.
drivers/net/bonding/bond_main.c:4159: warning: comparison is always
false due to limited range of data type
Signed-off-by: Eric Dumazet <eric.dumazet@...il.com>
---
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 5b92fbf..5972a52 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -4156,7 +4156,7 @@ static int bond_xmit_roundrobin(struct sk_buff *skb, struct net_device *bond_dev
* send the join/membership reports. The curr_active_slave found
* will send all of this type of traffic.
*/
- if ((iph->protocol == htons(IPPROTO_IGMP)) &&
+ if ((iph->protocol == IPPROTO_IGMP) &&
(skb->protocol == htons(ETH_P_IP))) {
read_lock(&bond->curr_slave_lock);
--