Date:	Wed, 30 Dec 2015 21:02:12 -0500
From:	Doug Ledford <dledford@...hat.com>
To:	Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc:	Daniel Borkmann <daniel@...earbox.net>,
	David Miller <davem@...emloft.net>,
	netdev <netdev@...r.kernel.org>
Subject: Re: 4.4-rc7 failure report

On 12/29/2015 11:16 PM, Alexei Starovoitov wrote:
> On Tue, Dec 29, 2015 at 10:44:31PM -0500, Doug Ledford wrote:
>> On 12/29/2015 10:43 PM, Alexei Starovoitov wrote:
>>> On Mon, Dec 28, 2015 at 08:26:44PM -0500, Doug Ledford wrote:
>>>> On 12/28/2015 05:20 PM, Daniel Borkmann wrote:
>>>>> On 12/28/2015 10:53 PM, Doug Ledford wrote:
>>>>>> The 4.4-rc7 kernel is failing for me.  In my case, all of my vlan
>>>>>> interfaces are failing to obtain a dhcp address using dhclient.  I've
>>>>>> tried a hand built 4.4-rc7, and the Fedora rawhide 4.4-rc7 kernel, both
>>>>>> failed.  I've tried NetworkManager and the old SysV network service,
>>>>>> both fail.  I tried a working dhclient from rhel7 on the Fedora rawhide
>>>>>> install and it failed too.  Running tcpdump on the interface shows the
>>>>>> dhcp request going out, and a dhcp response coming back in.  Running
>>>>>> strace on dhclient shows that it writes the dhcp request, but it never
>>>>>> recvs a dhcp response.  If I manually bring the interface up with a
>>>>>> static IP address then I'm able to run typical IP traffic across the
>>>>>> link (aka, ping).  It would seem that when dhclient registers a packet
>>>>>> filter on the socket, that filter is preventing it from ever getting the
>>>>>> dhcp response.  The same dhclient works on any non-vlan interfaces in
>>>>>> the system, so the filter must work for non-vlan interfaces.  Aside from
>>>>>> the fact that the interface is a vlan, we also use a priority egress map
>>>>>> on the interface, and we use PFC flow control.  Let me know if you need
>>>>>> anything more to debug the issue, or email me off list and I can get you
>>>>>> logins to my reproducer machines.
>>>>>
>>>>> When you say the 4.4-rc7 kernel is failing for you, what was the latest
>>>>> kernel version that worked, where the socket filter properly received the
>>>>> response on your vlan iface?
>>>>
>>>> v4.3 final works.  I haven't bisected where in the 4.4 series it quits
>>>> working.  I can do that tomorrow.
>>>
>>> I've tried to reproduce, but cannot seem to make dnsmasq work properly
>>> over a vlan, so a bisect would be great.
>>>
>>
>> Yeah, I've been working on it.  I'm juggling which of the available
>> machines reproduce this, what hardware they have, and whether or not
>> that hardware works at various steps in the bisection :-/
> 
> I've looked through all the bpf-related commits between v4.3..HEAD and don't
> see anything suspicious.  Could it be that your setup relied on a bug that was
> fixed by 28f9ee22bcdd ("vlan: Do not put vlan headers back on bridge and
> macvlan ports")?
> 
> Could you also provide more details on your vlan+dhcp setup to help narrow it
> down if the bisect is taking too long.
> 

My bisection got down to the last few steps and just didn't make sense,
so I ended up starting it over.  I'm not sure how or why I saw v4.3
working the first time around, but the second time around it failed.  So
I also tried a pre-made 4.2.8-300 kernel from Fedora 23, and it failed
as well.  The problem spans at least 4.2 through 4.4, so it's been
around for a while.  I'll keep testing more kernels tomorrow, but I've
been doing this while I still have company in town for the holidays, so
I'm gonna go be with them when I'm done writing this.
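
For reference, the bisect itself is just the usual routine; once I find
a kernel that actually works it looks roughly like this (the good
endpoint below is a placeholder, since I don't have a known-good
version yet):

git bisect start
git bisect bad v4.4-rc7
git bisect good v4.1    # placeholder; substitute the last version that works
# build and boot the bisection kernel, test dhcp on the vlan, then mark it:
git bisect good         # or: git bisect bad
# repeat until git names the first bad commit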

I've recently made some changes to my network setup here, so that might
be why I'm only seeing this now.  I'll provide details on my test setup
in case any of it helps narrow this down:

The Ethernet network is used for RDMA testing.  The switches are
Mellanox 56GigE switches.  The ports with multiple vlans are all set to
hybrid mode, with untagged frames assigned to vlan 40 and tagged frames
allowed for vlans 43 and 45.  The switch has DCB enabled, priority 5 is
no-drop, the ports are set to use PFC with an MTU of 9216, and LLDP is
enabled on the ports as well.

The head node of the cluster runs dhcpd on the vlans (as well as on the
InfiniBand ports).  The test machine has a static IP address configured
for each port/vlan in the dhcp server's config.

On the client, I've set the base interface to dhcp, vlan 43 to static
IP assignment, and vlan 45 to dhcp.  This lets me see at a glance
whether things are working: if the base device gets an IP but vlan 45
doesn't, and instead times out and goes away, then dhcp on the vlan
failed.  (I needed to set one vlan to static so that vlan creation
didn't depend on dhcp success; with some kernel versions and some
hardware types, namely mlx5, vlans weren't working at all, and you
could mistake the missing vlans for a dhcp problem when it was really a
problem with vlans on mlx5 hardware.)
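
The low-level check from the original report boils down to roughly this
(the interface names match my setup; the filter expressions are just
what I'd type by hand, not anything dhclient itself installs):

# watch the DHCP exchange on the vlan device (prefix the expression with
# 'vlan and' if the frames still carry their 802.1Q tag at this level)
tcpdump -e -n -i mlx4_roce.45 'port 67 or port 68'
# the same exchange as seen on the base device, tagged for vlan 45
tcpdump -e -n -i mlx4_roce 'vlan 45 and (port 67 or port 68)'
# confirm dhclient sends the request but never recv()s the reply
strace -f -e trace=network dhclient -d mlx4_roce.45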

This is the failing device's config:

[root@...a-perf-00 ~]$ more /etc/sysconfig/network-scripts/ifcfg-mlx4_roce.45
DEVICE=mlx4_roce.45
VLAN=yes
VLAN_ID=45
REORDER_HDR=0
VLAN_EGRESS_PRIORITY_MAP=0:5,1:5,2:5,3:5,4:5,5:5,6:5,7:5
TYPE=Vlan
ONBOOT=yes
BOOTPROTO=dhcp
DEFROUTE=no
PEERDNS=no
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=no
IPV6_PEERDNS=no
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=mlx4_roce.45
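
Once the interface is up, the vlan id, reorder_hdr setting, and egress
priority map from that file can be double-checked on the live device;
this is just the standard iproute2 query, nothing specific to my setup:

ip -d link show mlx4_roce.45
# the detailed output should include something along the lines of:
#   vlan protocol 802.1Q id 45 egress-qos-map { 0:5 1:5 2:5 3:5 4:5 5:5 6:5 7:5 }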

And if the interface actually comes up, there is this NetworkManager
dispatcher script:

[root@...a-perf-00 ~]$ more /etc/NetworkManager/dispatcher.d/98-mlx4_roce.45-egress.conf
#!/bin/sh
interface=$1
status=$2
[ "$interface" = mlx4_roce.45 ] || exit 0
case $status in
up)
	tc qdisc add dev mlx4_roce root mqprio num_tc 8 map 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 queues 32@0 32@32 32@64 32@96 32@128 32@160 32@192 32@224
	# tc_wrap.py -i mlx4_roce -u 5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5
	;;
esac
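
That mqprio setup (all 16 skb priorities mapped to traffic class 5) can
be verified after the dispatcher has run; this is just stock tc output,
nothing specific to my scripts:

tc qdisc show dev mlx4_roce
# expect an mqprio root qdisc reporting 8 TCs with all 16 priorities
# mapped to TC 5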


The base device's config file is this:

[root@...a-perf-00 ~]$ more /etc/sysconfig/network-scripts/ifcfg-mlx4_roce
DEVICE=mlx4_roce
TYPE=Ethernet
ONBOOT=yes
HWADDR=00:02:c9:31:77:91
BOOTPROTO=dhcp
DEFROUTE=no
PEERDNS=no
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=no
IPV6_PEERDNS=no
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
MTU=9000
NAME=mlx4_roce
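
And for what it's worth, the manual bring-up I mentioned (static IP, no
dhclient involved) amounts to something like the following, with the
address below being only an example:

ip link add link mlx4_roce name mlx4_roce.45 type vlan id 45 \
        egress-qos-map 0:5 1:5 2:5 3:5 4:5 5:5 6:5 7:5
ip link set mlx4_roce.45 up
ip addr add 192.168.45.10/24 dev mlx4_roce.45   # example address only
# ordinary IP traffic (e.g. ping to another host on vlan 45) works fine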


Let me know if you need any more details on the setup.  I'll report back
when I've actually *really* identified when the bug appeared.

-- 
Doug Ledford <dledford@...hat.com>
              GPG KeyID: 0E572FDD


