Date:	Fri, 26 Sep 2008 12:09:17 -0700
From:	Jay Vosburgh <>
To:	David Stevens <>
cc:	Brian Haley <>,
	Alex Sidorenko <>,
	Jeff Garzik <>,
	"" <>,,
	Vlad Yasevich <>
Subject: Re: [RFC] bonding: add better ipv6 failover support

David Stevens <> wrote:
>1) You're calling mld_send_report() directly, which will send the MLD
>   report synchronously; it should use the randomized timer. A mass
>   failover (e.g., a power event in a cluster) would blast all of these
>   at once, which is why the randomized timer is required for gratuitous
>   reports. This should use a randomized timer, like
>   mld_ifc_start_timer(), but joining the group all by itself will do
>   that.

	I need to do some more reading to have an informed response on
this one (not that I don't believe you; I'm just not familiar with the
MLD specs).

>2) There is already a configurable setting, and code, for unsolicited
>   neighbor advertisements when adding an address -- why not use that?
>   In fact, wouldn't just moving the failing device's address list to
>   the new device do everything you want, since adding an address
>   already sends unsolicited neighbor advertisements, joins the
>   solicited-node address, etc.? Or am I missing something?

	Ooh, ooh, I can answer this one: The protocol addresses don't
move, they're attached to the bonding master.  The slaves have no
protocol level addresses of their own, so some kind of extra magic has
to take place.

>3) MLD has a lot of state and it's all associated with the device.
>   Changing the sending device out from under it seems risky to me. I
>   don't know enough about bonding, but I think you really just want
>   all the group memberships and MLD state to be with the master
>   device, and the master should just go through the multicast list for
>   the master and join those groups on the new slave. The MLD code will
>   already resolve the filters for joins and filters already done
>   directly on the new slave.

	This sounds analogous to the IPv4 multicast address handling,
wherein the multicast address list is moved from one slave to another.
Is that a reasonable parallel?

>        Actually, I thought that's what Jay's prior patch was all
>   about, and those joins should trigger MLD reports where needed, so
>   I'm confused on what the problem with multicasts is beyond the
>   addresses (which just needs to mimic the address add code, or use it
>   directly).

	I haven't posted any prior patch for this, so I'm not sure what
you're talking about here.


	-Jay Vosburgh, IBM Linux Technology Center,