Date:   Tue, 13 Dec 2016 13:59:18 +0200
From:   Shahar Klein <shahark@...lanox.com>
To:     Cong Wang <xiyou.wangcong@...il.com>
CC:     <shahark@...lanox.com>, Daniel Borkmann <daniel@...earbox.net>,
        "Linux Kernel Network Developers" <netdev@...r.kernel.org>,
        Roi Dayan <roid@...lanox.com>,
        David Miller <davem@...emloft.net>,
        Jiri Pirko <jiri@...lanox.com>,
        John Fastabend <john.fastabend@...il.com>,
        Or Gerlitz <ogerlitz@...lanox.com>,
        Hadar Hen Zion <hadarh@...lanox.com>
Subject: Re: Soft lockup in tc_classify



On 12/12/2016 9:07 PM, Cong Wang wrote:
> On Mon, Dec 12, 2016 at 8:04 AM, Shahar Klein <shahark@...lanox.com> wrote:
>>
>>
>> On 12/12/2016 3:28 PM, Daniel Borkmann wrote:
>>>
>>> Hi Shahar,
>>>
>>> On 12/12/2016 10:43 AM, Shahar Klein wrote:
>>>>
>>>> Hi All,
>>>>
>>>> sorry for the spam; the first attempt was sent with an html part and
>>>> was rejected.
>>>>
>>>> We observed an issue where a classifier instance's next member is
>>>> pointing back to itself, causing a CPU soft lockup.
>>>> We found it by running traffic on many udp connections and then adding
>>>> a new flower rule using tc.
>>>>
>>>> We added a quick workaround to verify it:
>>>>
>>>> In tc_classify:
>>>>
>>>>          for (; tp; tp = rcu_dereference_bh(tp->next)) {
>>>>                  int err;
>>>> +               if (tp == tp->next)
>>>> +                     RCU_INIT_POINTER(tp->next, NULL);
>>>>
>>>>
>>>> We also had a print here showing tp->next is pointing to tp. With this
>>>> workaround we are not hitting the issue anymore.
>>>> We are not sure we fully understand the mechanism here - with the rtnl
>>>> and rcu locks.
>>>> We'll appreciate your help solving this issue.
>>>
>>>
>>> Note that there's still the RCU fix missing for the deletion race that
>>> Cong will still send out, but you say that the only thing you do is to
>>> add a single rule, but no other operation is involved during that test?
>
> Hmm, I thought RCU_INIT_POINTER() respects readers, but it seems not?
> If so, that could be the cause since we play with the next pointer and
> there is only one filter in this case, but I don't see why we could have
> a loop here.
>
>>>
>>> Do you have a script and kernel .config for reproducing this?
>>
>>
>> I'm using a user space socket app (https://github.com/shahar-klein/noodle)
>> on a vm to push udp packets from ~2000 different udp src ports, ramping up
>> at ~100 per second, towards another vm on the same Hypervisor. Once the
>> traffic starts I'm pushing ingress flower tc udp rules
>> (even_udp_src_port->mirred, odd->drop) on the relevant representor in the
>> Hypervisor.
>
> Do you mind sharing your `tc filter show dev...` output? Also, since you
> mentioned you only add one flower filter, just want to make sure you never
> delete any filter before/when the bug happens? How reproducible is this?


The bridge between the two vms is based on ovs and representors.
We have a dpif in the ovs creating tc rules from ovs rules.
We set up 5000 open flow rules that look like this:
cook......, udp,dl_dst=24:8a:07:38:a2:b2,tp_src=7000 actions=drop
cook......, udp,dl_dst=24:8a:07:38:a2:b2,tp_src=7002 actions=drop
cook......, udp,dl_dst=24:8a:07:38:a2:b2,tp_src=7004 actions=drop
.
.
.

We then fire up 2000 udp flows starting at udp src port 7000, ramping up at
100 flows per second, so after 20 seconds we're supposed to have 2000 active
udp flows, with half of them dropped at the tc level.

The first packet of any such match hits the miss rule in the ovs datapath
and is pushed up to the user space ovs, which consults the open flow rules
above, translates the ovs rule to a tc rule and pushes the rule back to the
kernel via netlink.
I'm not sure I understand what happens to the second packet of the same
match, or to all the following packets of the same match, till the tc
datapath is 'ready' for them.

The soft lockup is easily reproducible using this scenario, but it won't
happen if we use a much lighter traffic scheme, say 100 udp flows ramping
up at 3 per second.

I added a print and a panic when hitting the loop (output attached).

Also attached is our .config.


>
> Thanks!
>

Download attachment ".config.gz" of type "application/gzip" (34095 bytes)

Download attachment "tc_classify_panic.gz" of type "application/gzip" (1161 bytes)
