Message-ID: <aM-kMVtUhnqrcwrc@eldamar.lan>
Date: Sun, 21 Sep 2025 09:07:29 +0200
From: Salvatore Bonaccorso <carnil@...ian.org>
To: 1111455@...s.debian.org, 1111455-submitter@...s.debian.org,
	Benoit Panizzon <bp@....ch>
Cc: Max Kellermann <max.kellermann@...os.com>,
	David Howells <dhowells@...hat.com>,
	Paulo Alcantara <pc@...guebit.org>, netfs@...ts.linux.dev,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	stable@...r.kernel.org, regressions@...ts.linux.dev
Subject: Re: Bug#1111455: [bp@....ch: Bug#1111455:
 linux-image-6.12.41+deb13-amd64: kernel BUG at fs/netfs/read_collect.c:316
 netfs: Can't donate prior to front]

Hi Benoit,

On Mon, Aug 18, 2025 at 02:31:30PM +0200, Salvatore Bonaccorso wrote:
> Hi,
> 
> A user in Debian reported the following kernel oops when running on
> 6.12.41 (but apparently as well on older versions, though there were
> several netfs related similar issues, so including Max Kellermann as
> well in the recipients)
> 
> The report from Benoit Panizzon is as follows:
> 
> > From: Benoit Panizzon <bp@....ch>
> > Resent-From: Benoit Panizzon <bp@....ch>
> > Reply-To: Benoit Panizzon <bp@....ch>, 1111455@...s.debian.org
> > X-Mailer: reportbug 13.2.0
> > Date: Mon, 18 Aug 2025 10:24:32 +0200
> > To: Debian Bug Tracking System <submit@...s.debian.org>
> > Subject: Bug#1111455: linux-image-6.12.41+deb13-amd64: kernel BUG at fs/netfs/read_collect.c:316 netfs: Can't donate
> > 	prior to front
> > Delivered-To: lists-debian-kernel@...del.debian.org
> > Delivered-To: submit@...s.debian.org
> > Message-ID: <175550547264.3745.5845128440223069497.reportbug@...imp.ch>
> > 
> > Package: src:linux
> > Version: 6.12.41-1
> > Severity: grave
> > Justification: renders package unusable
> > X-Debbugs-Cc: debian-amd64@...ts.debian.org
> > User: debian-amd64@...ts.debian.org
> > Usertags: amd64
> > 
> > Dear Maintainer,
> > 
> > Updated my workstation from Bookworm to Trixie. /home on NFS
> > 
> > Applications accessing data on NFS shares become unresponsive one after the other after a couple of minutes.
> > 
> > Especially affected:
> > * Claws-Mail
> > * Chromium
> > 
> > Suspected cachefilesd was the culprit and disabled it, but the issue persists.
> > 
> > It looks like an invalid opcode is being used. Fairly recent CPU in use:
> > 
> > vendor_id	: AuthenticAMD
> > cpu family	: 25
> > model		: 80
> > model name	: AMD Ryzen 7 PRO 5750GE with Radeon Graphics
> > stepping	: 0
> > microcode	: 0xa500011
> > 
> > Google found reports of others affected with similar kernel versions mostly when accessing SMB shares.
> > 
> > [Mo Aug 18 10:11:19 2025] netfs: Can't donate prior to front
> > [Mo Aug 18 10:11:19 2025] R=00001e07[4] s=6000-7fff 0/2000/2000
> > [Mo Aug 18 10:11:19 2025] folio: 4000-7fff
> > [Mo Aug 18 10:11:19 2025] donated: prev=0 next=0
> > [Mo Aug 18 10:11:19 2025] s=6000 av=2000 part=2000
> > [Mo Aug 18 10:11:19 2025] ------------[ cut here ]------------
> > [Mo Aug 18 10:11:19 2025] kernel BUG at fs/netfs/read_collect.c:316!
> > [Mo Aug 18 10:11:19 2025] Oops: invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
> > [Mo Aug 18 10:11:19 2025] CPU: 5 UID: 0 PID: 115 Comm: kworker/u64:1 Not tainted 6.12.41+deb13-amd64 #1  Debian 6.12.41-1
> > [Mo Aug 18 10:11:19 2025] Hardware name: LENOVO 11JN000JGE/32E4, BIOS M47KT26A 11/23/2022
> > [Mo Aug 18 10:11:19 2025] Workqueue: nfsiod rpc_async_release [sunrpc]
> > [Mo Aug 18 10:11:19 2025] RIP: 0010:netfs_consume_read_data.isra.0+0xb79/0xb80 [netfs]
> > [Mo Aug 18 10:11:19 2025] Code: 48 89 ea 31 f6 48 c7 c7 96 95 6f c2 e8 d0 8d a2 e1 48 8b 4c 24 10 4c 89 fe 48 8b 54 24 20 48 c7 c7 b2 95 6f c2 e8 b7 8d a2 e1 <0f> 0b 90 0f 1f 40 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
> > [Mo Aug 18 10:11:19 2025] RSP: 0018:ffffb4234057bd58 EFLAGS: 00010246
> > [Mo Aug 18 10:11:19 2025] RAX: 0000000000000018 RBX: ffff9aed62651ec0 RCX: 0000000000000027
> > [Mo Aug 18 10:11:19 2025] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff9af04e4a1740
> > [Mo Aug 18 10:11:19 2025] RBP: 0000000000000000 R08: 0000000000000000 R09: ffffb4234057bbe8
> > [Mo Aug 18 10:11:19 2025] R10: ffffffffa50b43a8 R11: 0000000000000003 R12: ffff9aed6c9081e8
> > [Mo Aug 18 10:11:19 2025] R13: 0000000000004000 R14: ffff9aed6c9081e8 R15: 0000000000006000
> > [Mo Aug 18 10:11:19 2025] FS:  0000000000000000(0000) GS:ffff9af04e480000(0000) knlGS:0000000000000000
> > [Mo Aug 18 10:11:19 2025] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [Mo Aug 18 10:11:19 2025] CR2: 00003d3407849020 CR3: 00000002f1822000 CR4: 0000000000f50ef0
> > [Mo Aug 18 10:11:19 2025] PKRU: 55555554
> > [Mo Aug 18 10:11:19 2025] Call Trace:
> > [Mo Aug 18 10:11:19 2025]  <TASK>
> > [Mo Aug 18 10:11:19 2025]  netfs_read_subreq_terminated+0x2ab/0x3e0 [netfs]
> > [Mo Aug 18 10:11:19 2025]  nfs_netfs_read_completion+0x9c/0xc0 [nfs]
> > [Mo Aug 18 10:11:19 2025]  nfs_read_completion+0xf6/0x130 [nfs]
> > [Mo Aug 18 10:11:19 2025]  rpc_free_task+0x39/0x60 [sunrpc]
> > [Mo Aug 18 10:11:19 2025]  rpc_async_release+0x2f/0x40 [sunrpc]
> > [Mo Aug 18 10:11:19 2025]  process_one_work+0x177/0x330
> > [Mo Aug 18 10:11:19 2025]  worker_thread+0x251/0x390
> > [Mo Aug 18 10:11:19 2025]  ? __pfx_worker_thread+0x10/0x10
> > [Mo Aug 18 10:11:19 2025]  kthread+0xd2/0x100
> > [Mo Aug 18 10:11:19 2025]  ? __pfx_kthread+0x10/0x10
> > [Mo Aug 18 10:11:19 2025]  ret_from_fork+0x34/0x50
> > [Mo Aug 18 10:11:19 2025]  ? __pfx_kthread+0x10/0x10
> > [Mo Aug 18 10:11:19 2025]  ret_from_fork_asm+0x1a/0x30
> > [Mo Aug 18 10:11:19 2025]  </TASK>
> > [Mo Aug 18 10:11:19 2025] Modules linked in: nfsv3 rpcsec_gss_krb5 nfsv4 dns_resolver nfs netfs nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat nf_tables libcrc32c wireguard libchacha20poly1305 chacha_x86_64 poly1305_x86_64 curve25519_x86_64 libcurve25519_generic libchacha vxlan ip6_udp_tunnel udp_tunnel bridge stp llc btusb btrtl btintel btbcm btmtk bluetooth joydev qrtr nfsd auth_rpcgss binfmt_misc nfs_acl lockd grace nls_ascii nls_cp437 sunrpc vfat fat amd_atl intel_rapl_msr intel_rapl_common rtw88_8822ce rtw88_8822c edac_mce_amd rtw88_pci snd_sof_amd_rembrandt snd_sof_amd_acp rtw88_core kvm_amd snd_sof_pci snd_sof_xtensa_dsp snd_sof snd_hda_codec_realtek mac80211 kvm snd_hda_codec_generic snd_sof_utils snd_hda_scodec_component snd_soc_core snd_hda_codec_hdmi snd_hda_intel snd_compress snd_intel_dspcfg libarc4 snd_pcm_dmaengine irqbypass snd_intel_sdw_acpi snd_pci_ps crct10dif_pclmul snd_hda_codec ghash_clmulni_intel snd_rpl_pci_acp6x cfg80211 snd_hda_core sha512_ssse3 snd_acp_pci
> > [Mo Aug 18 10:11:19 2025]  sha256_ssse3 snd_acp_legacy_common sha1_ssse3 snd_hwdep snd_pci_acp6x aesni_intel snd_pcm gf128mul snd_pci_acp5x crypto_simd think_lmi snd_timer snd_rn_pci_acp3x cryptd firmware_attributes_class snd_acp_config wmi_bmof snd snd_soc_acpi ee1004 rapl snd_pci_acp3x pcspkr ccp k10temp rfkill soundcore evdev parport_pc ppdev lp parport configfs efi_pstore nfnetlink efivarfs ip_tables x_tables autofs4 ext4 mbcache jbd2 crc32c_generic hid_plantronics hid_generic usbhid hid amdgpu dm_mod amdxcp drm_exec gpu_sched drm_buddy i2c_algo_bit drm_suballoc_helper drm_display_helper cec rc_core drm_ttm_helper xhci_pci ttm xhci_hcd drm_kms_helper drm r8169 nvme usbcore realtek sp5100_tco nvme_core mdio_devres watchdog libphy crc32_pclmul i2c_piix4 video crc32c_intel usb_common nvme_auth i2c_smbus crc16 wmi button
> > [Mo Aug 18 10:11:19 2025] ---[ end trace 0000000000000000 ]---
> 
> Any ideas here? Benoit, can you please test the current 6.16.1 as
> well, ideally to verify whether the problem persists there too?

Can you still reproduce this with a more current 6.12.y version? Can
you also test the newest 6.16.y version (even though there was a major
refactoring there), to see whether the issue affects only the 6.12.y
series?

Regards,
Salvatore
