Message-ID: <3B91CCFCFC433E4596591DB9A191998574453125@nkgeml514-mbx.china.huawei.com>
Date:   Thu, 7 Sep 2017 10:18:02 +0000
From:   Songchuan <songchuan@...wei.com>
To:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC:     "weiwan@...gle.com" <weiwan@...gle.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "Wanghao (R)" <zidane.wanghao@...wei.com>,
        sunjilong <sunjilong1@...wei.com>,
        "Liwei (Sirius)" <sirius.liwei@...wei.com>,
        sangguanlin <sangguanlin@...wei.com>,
        "Xuhui (hunter, Device company )" <hunter.xuhui@...wei.com>,
        Eric Dumazet <edumazet@...gle.com>
Subject: Can you help to analyze this panic issue when using VPN?

Hi all,

Have you ever encountered this issue? A panic occasionally occurs when using a VPN; the stack is as follows:

<0>[18581.550354s][pid:678,cpu5,hisi_rxdata]Call trace:
<4>[18581.550354s][pid:678,cpu5,hisi_rxdata][<ffffffc000e50004>] ipv6_rcv+0x100/0x50c
<4>[18581.550384s][pid:678,cpu5,hisi_rxdata][<ffffffc000d5ced8>] __netif_receive_skb_core+0x2ac/0xa04
<4>[18581.550384s][pid:678,cpu5,hisi_rxdata][<ffffffc000d5eeb8>] __netif_receive_skb+0x2c/0x84
<4>[18581.550384s][pid:678,cpu5,hisi_rxdata][<ffffffc000d5fb20>] process_backlog+0xa4/0x164
<4>[18581.550415s][pid:678,cpu5,hisi_rxdata][<ffffffc000d60cf0>] net_rx_action+0x1e8/0x358
<4>[18581.550415s][pid:678,cpu5,hisi_rxdata][<ffffffc0000a4d28>] __do_softirq+0xcc/0x3a4
<4>[18581.550415s][pid:678,cpu5,hisi_rxdata][<ffffffc0000a50ac>] do_softirq+0x5c/0x60
<4>[18581.550415s][pid:678,cpu5,hisi_rxdata][<ffffffc000d5cb0c>] netif_rx_ni+0x124/0x12c
<4>[18581.550445s][pid:678,cpu5,hisi_rxdata][<ffffffc000bd0564>] hmac_rxdata_thread+0x88/0x8c
<4>[18581.550445s][pid:678,cpu5,hisi_rxdata][<ffffffc0000bf3d8>] kthread+0xdc/0xf0
<0>[18581.550445s][pid:678,cpu5,hisi_rxdata]Code: f9402e60 f27ff800 54001380 f940a000 (f9400000) 
<4>[18581.550476s][pid:678,cpu5,hisi_rxdata]---[ end trace 244e1ff5a3cd0017 ]---
<0>[18581.550476s][pid:678,cpu5,hisi_rxdata]Kernel panic - not syncing: Fatal exception in interrupt
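For reference: the fault offset in ipv6_rcv() looks consistent with the inlined ip6_rcv_finish() dereferencing a route cached on the skb. A simplified sketch of that 4.1-era code path (our assumption about the faulting access, not a confirmed root cause):

#include <linux/skbuff.h>
#include <net/ip6_route.h>
#include <net/dst.h>

static int ip6_rcv_finish_sketch(struct sk_buff *skb)
{
	if (!skb_dst(skb))		/* no route cached on the skb yet */
		ip6_route_input(skb);	/* perform the IPv6 route lookup */

	/*
	 * If a stale dst was left attached to the skb (for example by a
	 * tunnel re-injecting the packet without dropping the old dst),
	 * this indirect call goes through freed memory:
	 */
	return dst_input(skb);		/* skb_dst(skb)->input(skb) */
}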

We also reproduced the issue with a KASAN-enabled kernel; the report looks like this:


<4>[ 1730.083587s][pid:1928,cpu3,HeapTaskDaemon]Disabling lock debugging due to kernel taint
<3>[ 1730.083618s][pid:1928,cpu3,HeapTaskDaemon]INFO: Allocated in dst_alloc+0x80/0x2a0 age=1315 cpu=3 pid=343
<3>[ 1730.083679s][pid:1928,cpu3,HeapTaskDaemon]          alloc_debug_processing+0x198/0x1a0
<3>[ 1730.083709s][pid:1928,cpu3,HeapTaskDaemon]          __slab_alloc.isra.63.constprop.65+0x6ec/0x738
<3>[ 1730.083740s][pid:1928,cpu3,HeapTaskDaemon]          kmem_cache_alloc+0x154/0x250
<3>[ 1730.083770s][pid:1928,cpu3,HeapTaskDaemon]          dst_alloc+0x80/0x2a0
<3>[ 1730.083801s][pid:1928,cpu3,HeapTaskDaemon]          rt_dst_alloc+0x70/0x80
<3>[ 1730.083831s][pid:1928,cpu3,HeapTaskDaemon]          ip_route_input_noref+0x2dc/0xdd4
<3>[ 1730.083862s][pid:1928,cpu3,HeapTaskDaemon]          ip_rcv_finish+0x348/0x52c
<3>[ 1730.083892s][pid:1928,cpu3,HeapTaskDaemon]          ip_rcv+0x548/0x708
<3>[ 1730.083923s][pid:1928,cpu3,HeapTaskDaemon]          __netif_receive_skb_core+0x52c/0xf28
<3>[ 1730.083953s][pid:1928,cpu3,HeapTaskDaemon]          __netif_receive_skb+0x40/0xcc
<3>[ 1730.083984s][pid:1928,cpu3,HeapTaskDaemon]          process_backlog+0x114/0x244
<3>[ 1730.084014s][pid:1928,cpu3,HeapTaskDaemon]          net_rx_action+0x3e4/0x64c
<3>[ 1730.084075s][pid:1928,cpu3,HeapTaskDaemon]          __do_softirq+0x110/0x574
<3>[ 1730.084106s][pid:1928,cpu3,HeapTaskDaemon]          irq_exit+0xc0/0xf4
<3>[ 1730.084136s][pid:1928,cpu3,HeapTaskDaemon]          handle_IPI+0x3d4/0x3f0
<3>[ 1730.084167s][pid:1928,cpu3,HeapTaskDaemon]          gic_handle_irq+0x88/0x8c
<3>[ 1730.084197s][pid:1928,cpu3,HeapTaskDaemon]INFO: Freed in dst_destroy+0x134/0x1c4 age=2758 cpu=4 pid=30
<3>[ 1730.084228s][pid:1928,cpu3,HeapTaskDaemon]          free_debug_processing+0x2ec/0x360
<3>[ 1730.084259s][pid:1928,cpu3,HeapTaskDaemon]          __slab_free+0x308/0x448
<3>[ 1730.084289s][pid:1928,cpu3,HeapTaskDaemon]          kmem_cache_free+0x274/0x28c
<3>[ 1730.084320s][pid:1928,cpu3,HeapTaskDaemon]          dst_destroy+0x134/0x1c4
<3>[ 1730.084350s][pid:1928,cpu3,HeapTaskDaemon]          free_fib_info_rcu+0x248/0x310
<3>[ 1730.084381s][pid:1928,cpu3,HeapTaskDaemon]          rcu_process_callbacks+0x6f0/0x9cc
<3>[ 1730.084411s][pid:1928,cpu3,HeapTaskDaemon]          __do_softirq+0x110/0x574
<3>[ 1730.084442s][pid:1928,cpu3,HeapTaskDaemon]          run_ksoftirqd+0x4c/0x60
<3>[ 1730.084472s][pid:1928,cpu3,HeapTaskDaemon]          smpboot_thread_fn+0x298/0x404
<3>[ 1730.084533s][pid:1928,cpu3,HeapTaskDaemon]          kthread+0x190/0x1ac
<3>[ 1730.084564s][pid:1928,cpu3,HeapTaskDaemon]          ret_from_fork+0x10/0x50
<3>[ 1730.084594s][pid:1928,cpu3,HeapTaskDaemon]INFO: Slab 0xffffffbdc033bc00 objects=16 used=14 fp=0xffffffc00caf1c00 flags=0x8100
<3>[ 1730.084594s][pid:1928,cpu3,HeapTaskDaemon]INFO: Object 0xffffffc00caf1a00 @offset=6656 fp=0x          (null)


<3>[ 1730.085083s][pid:1928,cpu3,HeapTaskDaemon]Padding ffffffc00caf1bf8: 00 00 00 00 00 00 00 00                          ........
<4>[ 1730.085113s][pid:1928,cpu3,HeapTaskDaemon]CPU: 3 PID: 1928 Comm: HeapTaskDaemon Tainted: G    B   W       4.1.18-kasan-g6e99722-dirty #1
<4>[ 1730.085144s][pid:1928,cpu3,HeapTaskDaemon]TGID: 1911 Comm: ndroid.settings
<4>[ 1730.085174s][pid:1928,cpu3,HeapTaskDaemon]Hardware name: hi6250 (DT)
<0>[ 1730.085205s][pid:1928,cpu3,HeapTaskDaemon]Call trace:
<4>[ 1730.085235s][pid:1928,cpu3,HeapTaskDaemon][<ffffffc00008d548>] dump_backtrace+0x0/0x1f4
<4>[ 1730.085266s][pid:1928,cpu3,HeapTaskDaemon][<ffffffc00008d75c>] show_stack+0x20/0x28
<4>[ 1730.085296s][pid:1928,cpu3,HeapTaskDaemon][<ffffffc0016bbd20>] dump_stack+0x84/0xa8
<4>[ 1730.085327s][pid:1928,cpu3,HeapTaskDaemon][<ffffffc00025fbe8>] print_trailer+0x11c/0x1b0
<4>[ 1730.085357s][pid:1928,cpu3,HeapTaskDaemon][<ffffffc00026443c>] object_err+0x4c/0x5c
<4>[ 1730.085388s][pid:1928,cpu3,HeapTaskDaemon][<ffffffc0002667dc>] kasan_report+0x240/0x574
<4>[ 1730.085418s][pid:1928,cpu3,HeapTaskDaemon][<ffffffc000265708>] __asan_loadN+0x18c/0x1f0
<4>[ 1730.085449s][pid:1928,cpu3,HeapTaskDaemon][<ffffffc0015ae338>] ipv6_rcv+0x244/0xb50
<4>[ 1730.085479s][pid:1928,cpu3,HeapTaskDaemon][<ffffffc00141e9f4>] __netif_receive_skb_core+0x52c/0xf28
<4>[ 1730.085510s][pid:1928,cpu3,HeapTaskDaemon][<ffffffc001421e04>] __netif_receive_skb+0x40/0xcc
<4>[ 1730.085540s][pid:1928,cpu3,HeapTaskDaemon][<ffffffc001423530>] process_backlog+0x114/0x244
<4>[ 1730.085571s][pid:1928,cpu3,HeapTaskDaemon][<ffffffc001425738>] net_rx_action+0x3e4/0x64c
<4>[ 1730.085601s][pid:1928,cpu3,HeapTaskDaemon][<ffffffc0000b3a48>] __do_softirq+0x110/0x574
<4>[ 1730.085632s][pid:1928,cpu3,HeapTaskDaemon][<ffffffc0000b41f8>] irq_exit+0xc0/0xf4
<4>[ 1730.085662s][pid:1928,cpu3,HeapTaskDaemon][<ffffffc0000936c4>] handle_IPI+0x3d4/0x3f0
<4>[ 1730.085693s][pid:1928,cpu3,HeapTaskDaemon][<ffffffc000081830>] gic_handle_irq+0x88/0x8c
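
The allocation/free pair above shows the object allocated in dst_alloc() from ip_route_input_noref() and freed from dst_destroy() via free_fib_info_rcu() while something still referenced it. Note that ip_route_input_noref() attaches the dst to the skb without taking a reference, so it is only guaranteed valid while the caller stays inside the same RCU/BH section; if the skb is queued and resurfaces later (as the re-injection through netif_rx_ni() in the first trace suggests), the route may already have been freed. A minimal sketch of one way to keep such a dst alive past that point (illustrative only, not a proposed fix):

#include <linux/skbuff.h>
#include <net/dst.h>

static void hold_dst_before_queueing(struct sk_buff *skb)
{
	/*
	 * skb_dst_force() upgrades a "noref" dst to a refcounted one,
	 * so the route cannot be freed while the skb sits in a queue
	 * outside the current softirq/RCU section.
	 */
	skb_dst_force(skb);
}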

The ramoops dumps from the VPN panic and from the KASAN run are attached. Can you help analyze this panic issue? Thanks.
-----Original Message-----
From: Eric Dumazet [mailto:edumazet@...gle.com] 
Sent: September 7, 2017 17:06
To: Songchuan
Cc: weiwan@...gle.com; davem@...emloft.net; Wanghao (R); sunjilong; Liwei (Sirius); sangguanlin; Xuhui (hunter, Device company )
Subject: Re: Can you help to analyze this panic issue when using VPN?

On Wed, Sep 6, 2017 at 11:43 PM, Songchuan <songchuan@...wei.com> wrote:
> Have you ever encountered this issue?
>

You will have to send the mail in plain text, no HTML; otherwise it won't reach the netdev@ mailing list.

We won't reply to you until you get this right, because we do not want part of the thread to be invisible to most of the people/experts.

Thanks.

Download attachment "stack_using_vpn_ramoops-0" of type "application/octet-stream" (219136 bytes)

Download attachment "dump_kasan_scan_ramoops.bin" of type "application/octet-stream" (218453 bytes)
