Message-ID: <70ecf4f3-4ca0-bceb-00fc-2084a338b2fc@itcare.pl>
Date:   Tue, 30 Oct 2018 01:34:24 +0100
From:   Paweł Staszewski <pstaszewski@...are.pl>
To:     Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: Latest net-next kernel 4.19.0+

On 30.10.2018 at 01:11, Paweł Staszewski wrote:
> Sorry, the previous paste was incomplete - here it is with the hw csum failure line:
>
> [  342.190831] vlan1490: hw csum failure
> [  342.190835] CPU: 52 PID: 0 Comm: swapper/52 Not tainted 4.19.0+ #1
> [  342.190836] Call Trace:
> [  342.190839]  <IRQ>
> [  342.190849]  dump_stack+0x46/0x5b
> [  342.190856]  __skb_checksum_complete+0x9a/0xa0
> [  342.190859]  tcp_v4_rcv+0xef/0x960
> [  342.190864]  ip_local_deliver_finish+0x49/0xd0
> [  342.190866]  ip_local_deliver+0x5e/0xe0
> [  342.190869]  ? ip_sublist_rcv_finish+0x50/0x50
> [  342.190870]  ip_rcv+0x41/0xc0
> [  342.190874]  __netif_receive_skb_one_core+0x4b/0x70
> [  342.190877]  netif_receive_skb_internal+0x2f/0xd0
> [  342.190879]  napi_gro_receive+0xb7/0xe0
> [  342.190884]  mlx5e_handle_rx_cqe+0x7a/0xd0
> [  342.190886]  mlx5e_poll_rx_cq+0xc6/0x930
> [  342.190888]  mlx5e_napi_poll+0xab/0xc90
> [  342.190893]  ? kmem_cache_free_bulk+0x1e4/0x280
> [  342.190895]  net_rx_action+0x1f1/0x320
> [  342.190901]  __do_softirq+0xec/0x2b7
> [  342.190908]  irq_exit+0x7b/0x80
> [  342.190910]  do_IRQ+0x45/0xc0
> [  342.190912]  common_interrupt+0xf/0xf
> [  342.190914]  </IRQ>
> [  342.190916] RIP: 0010:mwait_idle+0x5f/0x1b0
> [  342.190917] Code: a8 01 0f 85 3f 01 00 00 31 d2 65 48 8b 04 25 80 
> 4c 01 00 48 89 d1 0f 01 c8 48 8b 00 a8 08 0f 85 40 01 00 00 31 c0 fb 
> 0f 01 c9 <65> 8b 2d 2a c9 6a 7e 0f 1f 44 00 00 65 48 8b 04 25 80 4c 01 
> 00 f0
> [  342.190918] RSP: 0018:ffffc900034e7eb8 EFLAGS: 00000246 ORIG_RAX: 
> ffffffffffffffdd
> [  342.190920] RAX: 0000000000000000 RBX: 0000000000000034 RCX: 
> 0000000000000000
> [  342.190921] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 
> 0000000000000000
> [  342.190922] RBP: 0000000000000034 R08: 0000000000000057 R09: 
> ffff88086fa1fbc0
> [  342.190923] R10: 0000000000000000 R11: 00000001000028cc R12: 
> ffff88086d180000
> [  342.190923] R13: ffff88086d180000 R14: 0000000000000000 R15: 
> 0000000000000000
> [  342.190929]  do_idle+0x1a3/0x1c0
> [  342.190931]  cpu_startup_entry+0x14/0x20
> [  342.190934]  start_secondary+0x165/0x190
> [  342.190939]  secondary_startup_64+0xa4/0xb0
>
>
> On 30.10.2018 at 01:10, Paweł Staszewski wrote:
>> Hi
>>
>>
>> Just checked the latest kernel in the test lab and I'm seeing weird traces:
>>
>> [  219.888673] CPU: 52 PID: 0 Comm: swapper/52 Not tainted 4.19.0+ #1
>> [  219.888674] Call Trace:
>> [  219.888676]  <IRQ>
>> [  219.888685]  dump_stack+0x46/0x5b
>> [  219.888691]  __skb_checksum_complete+0x9a/0xa0
>> [  219.888694]  tcp_v4_rcv+0xef/0x960
>> [  219.888698]  ip_local_deliver_finish+0x49/0xd0
>> [  219.888700]  ip_local_deliver+0x5e/0xe0
>> [  219.888702]  ? ip_sublist_rcv_finish+0x50/0x50
>> [  219.888703]  ip_rcv+0x41/0xc0
>> [  219.888706]  __netif_receive_skb_one_core+0x4b/0x70
>> [  219.888708]  netif_receive_skb_internal+0x2f/0xd0
>> [  219.888710]  napi_gro_receive+0xb7/0xe0
>> [  219.888714]  mlx5e_handle_rx_cqe+0x7a/0xd0
>> [  219.888716]  mlx5e_poll_rx_cq+0xc6/0x930
>> [  219.888717]  mlx5e_napi_poll+0xab/0xc90
>> [  219.888722]  ? enqueue_task_fair+0x286/0xc40
>> [  219.888723]  ? enqueue_task_fair+0x1d6/0xc40
>> [  219.888725]  net_rx_action+0x1f1/0x320
>> [  219.888730]  __do_softirq+0xec/0x2b7
>> [  219.888736]  irq_exit+0x7b/0x80
>> [  219.888737]  do_IRQ+0x45/0xc0
>> [  219.888740]  common_interrupt+0xf/0xf
>> [  219.888742]  </IRQ>
>> [  219.888743] RIP: 0010:mwait_idle+0x5f/0x1b0
>> [  219.888745] Code: a8 01 0f 85 3f 01 00 00 31 d2 65 48 8b 04 25 80 
>> 4c 01 00 48 89 d1 0f 01 c8 48 8b 00 a8 08 0f 85 40 01 00 00 31 c0 fb 
>> 0f 01 c9 <65> 8b 2d 2a c9 6a 7e 0f 1f 44 00 00 65 48 8b 04 25 80 4c 
>> 01 00 f0
>> [  219.888746] RSP: 0018:ffffc900034e7eb8 EFLAGS: 00000246 ORIG_RAX: 
>> ffffffffffffffde
>> [  219.888749] RAX: 0000000000000000 RBX: 0000000000000034 RCX: 
>> 0000000000000000
>> [  219.888749] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 
>> 0000000000000000
>> [  219.888750] RBP: 0000000000000034 R08: 000000000000003b R09: 
>> ffff88086fa1fbc0
>> [  219.888751] R10: 0000000000000000 R11: 00000000ffffb15d R12: 
>> ffff88086d180000
>> [  219.888752] R13: ffff88086d180000 R14: 0000000000000000 R15: 
>> 0000000000000000
>> [  219.888754]  do_idle+0x1a3/0x1c0
>> [  219.888757]  cpu_startup_entry+0x14/0x20
>> [  219.888760]  start_secondary+0x165/0x190
>>
>
>

Also attaching some perf top output for this - 14G of rx traffic on vlans 
(pktgen-generated random destination IPs, forwarded by the test server):

    PerfTop:   45296 irqs/sec  kernel:99.3%  exact:  0.0% [4000Hz 
cycles],  (all, 56 CPUs)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

      7.43%  [kernel]       [k] mlx5e_skb_from_cqe_linear
      5.17%  [kernel]       [k] mlx5e_sq_xmit
      3.83%  [kernel]       [k] fib_table_lookup
      3.41%  [kernel]       [k] irq_entries_start
      2.91%  [kernel]       [k] build_skb
      2.50%  [kernel]       [k] mlx5_eq_int
      2.29%  [kernel]       [k] _raw_spin_lock
      2.27%  [kernel]       [k] tasklet_action_common.isra.21
      1.99%  [kernel]       [k] _raw_spin_lock_irqsave
      1.91%  [kernel]       [k] memcpy_erms
      1.77%  [kernel]       [k] __build_skb
      1.70%  [kernel]       [k] vlan_do_receive
      1.59%  [kernel]       [k] get_page_from_freelist
      1.56%  [kernel]       [k] mlx5e_poll_tx_cq
      1.53%  [kernel]       [k] __dev_queue_xmit
      1.40%  [kernel]       [k] pfifo_fast_dequeue
      1.37%  [kernel]       [k] dev_gro_receive
      1.34%  [kernel]       [k] ipt_do_table
      1.30%  [kernel]       [k] mlx5e_poll_rx_cq
      1.28%  [kernel]       [k] mlx5e_post_rx_wqes
      1.27%  [kernel]       [k] ip_finish_output2
      1.09%  [kernel]       [k] inet_gro_receive
      1.08%  [kernel]       [k] _raw_spin_lock_irq
      1.04%  [kernel]       [k] __sched_text_start
      0.99%  [kernel]       [k] tcp_gro_receive
      0.98%  [kernel]       [k] find_busiest_group
      0.97%  [kernel]       [k] __netif_receive_skb_core
      0.85%  [kernel]       [k] ip_route_input_rcu
      0.85%  [kernel]       [k] free_one_page
      0.84%  [kernel]       [k] mlx5e_handle_rx_cqe
      0.76%  [kernel]       [k] do_idle
      0.72%  [kernel]       [k] mlx5e_xmit
      0.71%  [kernel]       [k] cmd_exec
      0.71%  [kernel]       [k] __page_pool_put_page
      0.69%  [kernel]       [k] kmem_cache_alloc
      0.68%  [kernel]       [k] mlx5_cmd_comp_handler
      0.68%  [kernel]       [k] queued_spin_lock_slowpath
      0.68%  [kernel]       [k] cmd_work_handler
      0.68%  [kernel]       [k] pfifo_fast_enqueue
      0.67%  [kernel]       [k] try_to_wake_up
      0.66%  [kernel]       [k] _raw_spin_trylock
      0.62%  [kernel]       [k] dev_hard_start_xmit
      0.62%  [kernel]       [k] ip_forward
      0.62%  [kernel]       [k] swiotlb_map_page
      0.61%  [kernel]       [k] page_frag_free
      0.60%  [kernel]       [k] mlx5e_build_rx_skb
      0.60%  [kernel]       [k] skb_release_data
      0.57%  [kernel]       [k] netif_skb_features
      0.52%  [kernel]       [k] vlan_dev_hard_start_xmit
      0.50%  [kernel]       [k] kmem_cache_free_bulk
      0.49%  [kernel]       [k] enqueue_task_fair
      0.49%  [kernel]       [k] validate_xmit_skb.isra.142
      0.49%  [kernel]       [k] skb_gro_receive
      0.49%  [kernel]       [k] __qdisc_run
