Date:   Thu, 26 Oct 2017 11:48:09 +0200
From:   Guillaume Nault <g.nault@...halink.fr>
To:     Stephen Hemminger <stephen@...workplumber.org>
Cc:     netdev@...r.kernel.org, netfilter-devel@...r.kernel.org,
        Florian Westphal <fw@...len.de>, svimik@...il.com
Subject: Re: Fw: [Bug 197367] New: NMI watchdog: BUG: soft lockup - CPU#1
 stuck for 22s! [nf_conntrack]

On Tue, Oct 24, 2017 at 03:05:41PM +0200, Stephen Hemminger wrote:
> 
> 
> NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [openvpn:1436]
> ----cut----
> CPU: 1 PID: 1436 Comm: openvpn Not tainted 4.8.13-1.el6.elrepo.x86_64 #1
> Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2007
> task: ffff88003c564300 task.stack: ffff88003bfb0000
> RIP: 0010:[<ffffffffa030859f>]  [<ffffffffa030859f>]
> __nf_conntrack_find_get+0x3f/0x330 [nf_conntrack]
> ----cut----
> Call Trace:
> <IRQ>
> [<ffffffffa03084b0>] ? death_by_timeout+0x20/0x20 [nf_conntrack]
> [<ffffffffa0306ceb>] ? nf_ct_get_tuple+0x8b/0xb0 [nf_conntrack]
> [<ffffffffa0308ac0>] nf_conntrack_in+0x1e0/0x530 [nf_conntrack]
> [<ffffffffa032414c>] ipv4_conntrack_in+0x1c/0x20 [nf_conntrack_ipv4]
> [<ffffffff8169f782>] nf_iterate+0x72/0x90
> [<ffffffff816aa68f>] ? ip_rcv_finish+0x16f/0x3d0
> [<ffffffff8169f8dd>] nf_hook_slow+0x3d/0xc0
> [<ffffffff816aad46>] ip_rcv+0x2d6/0x3d0
> [<ffffffffa004a402>] ? virtqueue_add_inbuf+0x2/0x30 [virtio_ring]
> [<ffffffff816aa520>] ? inet_add_protocol+0x50/0x50
> [<ffffffffa00490fa>] ? virtqueue_notify+0x1a/0x40 [virtio_ring]
> [<ffffffff816669e0>] __netif_receive_skb_core+0x5b0/0x9f0
> [<ffffffffa01372d0>] ? start_xmit+0x110/0x210 [virtio_net]
> [<ffffffff810b796a>] ? update_cfs_rq_load_avg+0x29a/0x430
> [<ffffffff8179b640>] ? _raw_read_unlock_bh+0x20/0x30
> [<ffffffffa02d9900>] ? ebt_do_table+0x620/0x690 [ebtables]
> [<ffffffff81666e49>] __netif_receive_skb+0x29/0x70
> [<ffffffff81667067>] netif_receive_skb_internal+0x37/0x90
> [<ffffffff81667ed8>] netif_receive_skb+0x28/0x80
> ----cut----
>
I guess this could be caused by the performance regression introduced by
the rhashtable conversion. The trace is a bit different from what I used
to see, though.

If that's really the case, then it's fixed by commit e1bf1687740c
("netfilter: nat: Revert "netfilter: nat: convert nat bysrc hash to rhashtable"").
