Message-ID: <CAK6E8=fC4TRrJDBAEmtB9QK2XepWDa7iEupEfty3EnkdtBNzcA@mail.gmail.com>
Date:   Wed, 29 Aug 2018 12:57:14 -0700
From:   Yuchung Cheng <ycheng@...gle.com>
To:     Stephen Hemminger <stephen@...workplumber.org>
Cc:     netdev <netdev@...r.kernel.org>
Subject: Re: Fw: [Bug 200943] New: Repeating tcp_mark_head_lost in dmesg

On Wed, Aug 29, 2018 at 8:02 AM, Stephen Hemminger
<stephen@...workplumber.org> wrote:
>
>
>
> Begin forwarded message:
>
> Date: Sun, 26 Aug 2018 22:24:12 +0000
> From: bugzilla-daemon@...zilla.kernel.org
> To: stephen@...workplumber.org
> Subject: [Bug 200943] New: Repeating tcp_mark_head_lost in dmesg
>
>
> https://bugzilla.kernel.org/show_bug.cgi?id=200943
>
>             Bug ID: 200943
>            Summary: Repeating tcp_mark_head_lost in dmesg
>            Product: Networking
>            Version: 2.5
>     Kernel Version: 4.14.66
>           Hardware: All
>                 OS: Linux
>               Tree: Mainline
>             Status: NEW
>           Severity: normal
>           Priority: P1
>          Component: IPV4
>           Assignee: stephen@...workplumber.org
>           Reporter: rm+bko@...anrm.net
>         Regression: No
>
> Getting a bunch of these now every hour during continuous ~100 Mbit of network
> traffic.
> What's up with that? Seems harmless, as in the kernel doesn't crash and the
> network connection is not interrupted. (Maybe the particular TCP session is?)
> If there are no ill-effects from this condition, is such spammy WARN_ON really
> necessary?
This warning is likely triggered by buggy remote SACK behavior, and
is pretty harmless - in my opinion the warning in tcp_verify_left_out()
is still worthwhile for detecting other inconsistencies in the
in-flight accounting.

The good news is that this particular loss recovery code path is
disabled by default on 4.18+ kernels by this patch:

commit b38a51fec1c1f693f03b1aa19d0622123634d4b7
Author: Yuchung Cheng <ycheng@...gle.com>
Date:   Wed May 16 16:40:11 2018 -0700

    tcp: disable RFC6675 loss detection


>
> [Mon Aug 27 02:16:11 2018] ------------[ cut here ]------------
> [Mon Aug 27 02:16:11 2018] WARNING: CPU: 5 PID: 0 at net/ipv4/tcp_input.c:2263
> tcp_mark_head_lost+0x247/0x260
> [Mon Aug 27 02:16:11 2018] Modules linked in: dm_snapshot loop vhost_net vhost
> tap tun ip6t_MASQUERADE nf_nat_masquerade_ipv6 ipt_MASQUERADE
> nf_nat_masquerade_ipv4 xt_DSCP xt_mark ip6t_REJECT nf_reject_ipv6 ipt_REJECT
> nf_reject_ipv4 xt_owner xt_tcpudp xt_set ip_set_hash_net ip_set nfnetlink
> xt_limit xt_length xt_multiport xt_conntrack ip6t_rpfilter ipt_rpfilter
> ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_raw
> wireguard ip6_udp_tunnel udp_tunnel ip6table_mangle iptable_nat
> nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_raw
> iptable_mangle ip6table_filter ip6_tables matroxfb_base matroxfb_g450
> matroxfb_Ti3026 matroxfb_accel matroxfb_DAC1064 g450_pll matroxfb_misc
> iptable_filter ip_tables x_tables cpufreq_powersave cpufreq_userspace
> cpufreq_conservative 8021q garp mrp
> [Mon Aug 27 02:16:11 2018]  bridge stp llc bonding tcp_bbr sch_fq tcp_illinois
> fuse radeon ttm drm_kms_helper drm i2c_algo_bit it87 hwmon_vid eeepc_wmi
> asus_wmi sparse_keymap rfkill video wmi_bmof mxm_wmi edac_mce_amd kvm_amd kvm
> snd_pcm snd_timer snd soundcore joydev evdev pcspkr k10temp fam15h_power
> sp5100_tco sg shpchp wmi pcc_cpufreq acpi_cpufreq button ext4 crc16 mbcache
> jbd2 fscrypto btrfs zstd_decompress zstd_compress xxhash algif_skcipher af_alg
> dm_crypt dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio dm_mod
> hid_generic usbhid hid raid10 raid456 async_raid6_recov async_memcpy async_pq
> async_xor async_tx xor sd_mod raid6_pq libcrc32c crc32c_generic raid1 raid0
> multipath linear md_mod vfio_pci irqbypass vfio_virqfd vfio_iommu_type1 vfio
> ohci_pci crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel
> [Mon Aug 27 02:16:11 2018]  pcbc aesni_intel aes_x86_64 crypto_simd glue_helper
> cryptd r8169 ahci xhci_pci libahci ohci_hcd ehci_pci mii xhci_hcd ehci_hcd
> i2c_piix4 libata usbcore scsi_mod bnx2
> [Mon Aug 27 02:16:11 2018] CPU: 5 PID: 0 Comm: swapper/5 Tainted: G        W
>    4.14.66-rm1+ #132
> [Mon Aug 27 02:16:11 2018] Hardware name: To be filled by O.E.M. To be filled
> by O.E.M./SABERTOOTH 990FX R2.0, BIOS 2901 05/04/2016
> [Mon Aug 27 02:16:11 2018] task: ffff8ba79c679dc0 task.stack: ffffb4d741928000
> [Mon Aug 27 02:16:11 2018] RIP: 0010:tcp_mark_head_lost+0x247/0x260
> [Mon Aug 27 02:16:11 2018] RSP: 0018:ffff8ba7aed437d8 EFLAGS: 00010202
> [Mon Aug 27 02:16:11 2018] RAX: 0000000000000018 RBX: ffff8ba3901a0800 RCX:
> 0000000000000000
> [Mon Aug 27 02:16:11 2018] RDX: 0000000000000017 RSI: 0000000000000001 RDI:
> ffff8ba4d47e9000
> [Mon Aug 27 02:16:11 2018] RBP: ffff8ba4d47e9000 R08: 000000000000000d R09:
> 0000000000000000
> [Mon Aug 27 02:16:11 2018] R10: 000000000000100c R11: 0000000000000000 R12:
> 0000000000000001
> [Mon Aug 27 02:16:11 2018] R13: ffff8ba4d47e9158 R14: 0000000000000001 R15:
> 000000009d0b6708
> [Mon Aug 27 02:16:11 2018] FS:  0000000000000000(0000)
> GS:ffff8ba7aed40000(0000) knlGS:0000000000000000
> [Mon Aug 27 02:16:11 2018] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [Mon Aug 27 02:16:11 2018] CR2: 0000000001fbdff0 CR3: 000000040c8d6000 CR4:
> 00000000000406e0
> [Mon Aug 27 02:16:11 2018] Call Trace:
> [Mon Aug 27 02:16:11 2018]  <IRQ>
> [Mon Aug 27 02:16:11 2018]  tcp_fastretrans_alert+0x5c3/0xa20
> [Mon Aug 27 02:16:11 2018]  tcp_ack+0x95a/0x1170
> [Mon Aug 27 02:16:11 2018]  ? __slab_free.isra.70+0x79/0x200
> [Mon Aug 27 02:16:11 2018]  tcp_rcv_established+0x16a/0x5a0
> [Mon Aug 27 02:16:11 2018]  ? tcp_v4_inbound_md5_hash+0x76/0x1e0
> [Mon Aug 27 02:16:11 2018]  tcp_v4_do_rcv+0x130/0x1f0
> [Mon Aug 27 02:16:11 2018]  tcp_v4_rcv+0x9ac/0xaa0
> [Mon Aug 27 02:16:11 2018]  ip_local_deliver_finish+0x9a/0x1c0
> [Mon Aug 27 02:16:11 2018]  ip_local_deliver+0x6b/0xe0
> [Mon Aug 27 02:16:11 2018]  ? ip_rcv_finish+0x440/0x440
> [Mon Aug 27 02:16:11 2018]  ip_rcv+0x2b0/0x3c0
> [Mon Aug 27 02:16:11 2018]  ? inet_del_offload+0x50/0x50
> [Mon Aug 27 02:16:11 2018]  __netif_receive_skb_core+0x85f/0xb50
> [Mon Aug 27 02:16:11 2018]  ? br_allowed_egress+0x2d/0x50 [bridge]
> [Mon Aug 27 02:16:11 2018]  ? br_forward+0x49/0xe0 [bridge]
> [Mon Aug 27 02:16:11 2018]  ? br_vlan_lookup+0xdd/0x150 [bridge]
> [Mon Aug 27 02:16:11 2018]  netif_receive_skb_internal+0x34/0xe0
> [Mon Aug 27 02:16:11 2018]  ? br_handle_vlan+0x4b/0xf0 [bridge]
> [Mon Aug 27 02:16:11 2018]  br_pass_frame_up+0xd4/0x180 [bridge]
> [Mon Aug 27 02:16:11 2018]  ? br_allowed_ingress+0x1ea/0x2e0 [bridge]
> [Mon Aug 27 02:16:11 2018]  br_handle_frame_finish+0x23f/0x530 [bridge]
> [Mon Aug 27 02:16:11 2018]  ? get_partial_node.isra.69+0x13c/0x1d0
> [Mon Aug 27 02:16:11 2018]  br_handle_frame+0x1b7/0x320 [bridge]
> [Mon Aug 27 02:16:11 2018]  __netif_receive_skb_core+0x367/0xb50
> [Mon Aug 27 02:16:11 2018]  ? inet_gro_receive+0x203/0x2b0
> [Mon Aug 27 02:16:11 2018]  netif_receive_skb_internal+0x34/0xe0
> [Mon Aug 27 02:16:11 2018]  napi_gro_receive+0xb8/0xe0
> [Mon Aug 27 02:16:11 2018]  bnx2_poll_work+0x71a/0x12e0 [bnx2]
> [Mon Aug 27 02:16:11 2018]  bnx2_poll_msix+0x41/0xf0 [bnx2]
> [Mon Aug 27 02:16:11 2018]  net_rx_action+0x28c/0x3f0
> [Mon Aug 27 02:16:11 2018]  __do_softirq+0x10a/0x2a2
> [Mon Aug 27 02:16:11 2018]  irq_exit+0xbe/0xd0
> [Mon Aug 27 02:16:11 2018]  do_IRQ+0x66/0x100
> [Mon Aug 27 02:16:11 2018]  common_interrupt+0x7d/0x7d
> [Mon Aug 27 02:16:11 2018]  </IRQ>
> [Mon Aug 27 02:16:11 2018] RIP: 0010:cpuidle_enter_state+0xa4/0x2d0
> [Mon Aug 27 02:16:11 2018] RSP: 0018:ffffb4d74192bea0 EFLAGS: 00000246
> ORIG_RAX: ffffffffffffff1a
> [Mon Aug 27 02:16:11 2018] RAX: ffff8ba7aed61800 RBX: 0000f8f2a31961cb RCX:
> 000000000000001f
> [Mon Aug 27 02:16:11 2018] RDX: 0000f8f2a31961cb RSI: fffffff1cd4e0887 RDI:
> 0000000000000000
> [Mon Aug 27 02:16:11 2018] RBP: 0000000000000002 R08: 000000000000000a R09:
> 000000000000000a
> [Mon Aug 27 02:16:11 2018] R10: 0000000000000364 R11: 00000000000002a6 R12:
> ffff8ba7961b3200
> [Mon Aug 27 02:16:11 2018] R13: ffffffff90cb2c58 R14: 0000f8f2a3139a63 R15:
> ffffffff90cb2b80
> [Mon Aug 27 02:16:11 2018]  do_idle+0x19d/0x200
> [Mon Aug 27 02:16:11 2018]  cpu_startup_entry+0x6f/0x80
> [Mon Aug 27 02:16:11 2018]  start_secondary+0x1ae/0x200
> [Mon Aug 27 02:16:11 2018]  secondary_startup_64+0xa5/0xb0
> [Mon Aug 27 02:16:11 2018] Code: e8 df aa 00 00 85 c0 78 0c 0f b6 43 39 44 89
> e6 e9 16 ff ff ff 8b 95 ec 05 00 00 8b 85 80 06 00 00 03 85 84 06 00 00 39 d0
> 76 a7 <0f> 0b eb a3 31 f6 e9 12 fe ff ff 66 66 2e 0f 1f 84 00 00 00 00
> [Mon Aug 27 02:16:11 2018] ---[ end trace 3d7c0b943ef03b6a ]---
>
> --
> You are receiving this mail because:
> You are the assignee for the bug.
