Message-Id: <20160111.171619.1907777399182966008.davem@davemloft.net>
Date:	Mon, 11 Jan 2016 17:16:19 -0500 (EST)
From:	David Miller <davem@...emloft.net>
To:	jarod@...hat.com
Cc:	linux-kernel@...r.kernel.org, j.vosburgh@...il.com,
	vfalico@...il.com, gospo@...ulusnetworks.com,
	netdev@...r.kernel.org
Subject: Re: [PATCH net-next] bonding: make mii_status sysfs node consistent

From: Jarod Wilson <jarod@...hat.com>
Date: Fri,  8 Jan 2016 20:35:36 -0500

> The spew in /proc/net/bonding/bond0 uses netif_carrier_ok() to determine
> mii_status, while /sys/class/net/bond0/bonding/mii_status looks at
> curr_active_slave, which sometimes isn't set even when the bond actually
> is up. A mode 4 bond configured via ifcfg-foo files on a Red Hat
> Enterprise Linux system comes up clean and functional after boot, but
> the sysfs node reports an mii_status of down while proc reports up. A
> simple enough fix is to use the same method for determining up or down
> in both places, and I'd opt for the one that seems to match reality.
> 
> CC: Jay Vosburgh <j.vosburgh@...il.com>
> CC: Veaceslav Falico <vfalico@...il.com>
> CC: Andy Gospodarek <gospo@...ulusnetworks.com>
> CC: netdev@...r.kernel.org
> Signed-off-by: Jarod Wilson <jarod@...hat.com>

Seems reasonable, applied, thanks.
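
For illustration, a minimal sketch of the sysfs-side change described
above, as it might look in drivers/net/bonding/bond_sysfs.c.
netif_carrier_ok() and curr_active_slave are taken from the report
itself; the function body below is an approximation, not necessarily
the patch as applied:

static ssize_t bonding_show_mii_status(struct device *d,
				       struct device_attribute *attr,
				       char *buf)
{
	struct bonding *bond = to_bond(d);
	/* Previously the handler derived status from whether an active
	 * slave had been assigned, along the lines of:
	 *
	 *	bool active = !!rcu_access_pointer(bond->curr_active_slave);
	 *
	 * which can report "down" even while the bond is up. Using the
	 * same carrier test that /proc/net/bonding uses keeps the two
	 * interfaces consistent.
	 */
	bool active = netif_carrier_ok(bond->dev);

	return sprintf(buf, "%s\n", active ? "up" : "down");
}

With a change of this shape in place, the output of
/sys/class/net/bond0/bonding/mii_status and the "MII Status" line in
/proc/net/bonding/bond0 should agree.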
