Message-ID: <C5551D9AAB213A418B7FD5E4A6F30A07012CCF52@ORSMSX106.amr.corp.intel.com>
Date:	Mon, 30 Jan 2012 16:50:58 +0000
From:	"Rose, Gregory V" <gregory.v.rose@...el.com>
To:	David Miller <davem@...emloft.net>,
	"david.vrabel@...rix.com" <david.vrabel@...rix.com>
CC:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"steweg@...il.com" <steweg@...il.com>
Subject: RE: Regression: "rtnetlink: Compute and store minimum ifinfo dump
 size" breaks glibc's getifaddrs()

> -----Original Message-----
> From: David Miller [mailto:davem@...emloft.net]
> Sent: Friday, January 27, 2012 1:54 PM
> To: david.vrabel@...rix.com
> Cc: netdev@...r.kernel.org; Rose, Gregory V; steweg@...il.com
> Subject: Re: Regression: "rtnetlink: Compute and store minimum ifinfo dump
> size" breaks glibc's getifaddrs()
> 
> From: David Vrabel <david.vrabel@...rix.com>
> Date: Fri, 27 Jan 2012 12:36:47 +0000
> 
> > Changeset c7ac8679bec9397afe8918f788cbcef88c38da54 (rtnetlink: Compute
> > and store minimum ifinfo dump size) applied to 3.1 increased the maximum
> > size of the RTM_GETLINK message response.
> >
> > glibc's getifaddrs() function uses a page sized (4 KiB) buffer for the
> > RTM_GETLINK response and returns a failure if the message is truncated.
> > This buffer is not large enough if there is a network card with many
> > virtual functions.
> >
> > What do you recommend to resolve this regression?
> 
> Actually, glibc technically uses the CPP define value "PAGE_SIZE" if
> available, which is potentially different from the system page size.
> 
> Using a statically defined PAGE_SIZE is wrong if the program is
> subsequently executed on a system with a different page size.
> 
> On sparc, and powerpc I believe, this happens commonly.  A 32-bit
> executable will see a PAGE_SIZE value of 4K, but when executed on a
> 64-bit system the page size is actually 8K or larger.  That's why
> __getpagesize() should always be used.
> 
> Anyways, if the page sizes are correct we're in a bit of a pickle.
> 
> Do you have any idea what the computed value of min_ifinfo_dump_size
> is at the time of the failure?
> 
> Greg, I think we're kind of screwed.  The defined minimum appropriate
> buffer size for a recvmsg() call on a netlink socket is defined as:
> 
> 	getpagesize() < 8192 ? getpagesize() : 8192
> 
> and glibc essentially abides by this by unconditionally using page
> size, and therefore if an interface with many virtual interfaces takes
> us over this limit, we break basically every properly written piece of
> netlink code out there.

Yep, we're screwed.  And as more features and capabilities that we want to control through netlink get added to virtual functions, it gets even worse.

Maybe we should have 'ip link show' just display the number of VFs, and then add new 'ip' syntax along the lines of 'ip link show eth(x) vf (n)', where eth(x) is the PF and (n) is the number of the VF.  That call would show all relevant information for just that VF.  Scripts could parse the VF count from the first 'ip link show' and then loop to show the details of each VF.

Just an idea... maybe there are other ones out there, but it's getting ridiculous how much data has to be transferred back and forth during a basic 'ip link show' when the interface has subordinate VFs, now that we're getting to devices with up to 256 VFs.

- Greg

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
