Message-ID: <ba2d19ca-a39c-4ed7-979e-7b33f4ffdb5a@o2.pl>
Date: Wed, 26 Jun 2024 21:13:18 +0200
From: Mateusz Jończyk <mat.jonczyk@...pl>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>, stable@...r.kernel.org
Cc: patches@...ts.linux.dev, linux-kernel@...r.kernel.org,
torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
linux@...ck-us.net, shuah@...nel.org, patches@...nelci.org,
lkft-triage@...ts.linaro.org, pavel@...x.de, jonathanh@...dia.com,
f.fainelli@...il.com, sudipm.mukherjee@...il.com, srw@...dewatkins.net,
rwarsow@....de, conor@...nel.org, allen.lkml@...il.com, broonie@...nel.org
Subject: Re: [PATCH 6.1 000/131] 6.1.96-rc1 review

On 25.06.2024 at 11:32, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 6.1.96 release.
> There are 131 patches in this series, all of which will be posted as a
> response to this one. If anyone has any issues with these being applied,
> please let me know.
>
> Responses should be made by Thu, 27 Jun 2024 08:54:55 +0000.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> https://www.kernel.org/pub/linux/kernel/v6.x/stable-review/patch-6.1.96-rc1.gz
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-6.1.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h

Hello,

Tested-by: Mateusz Jończyk <mat.jonczyk@...pl>
Tested on an HP 17-by0001nw laptop with an Intel Kaby Lake CPU and Ubuntu 20.04.
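
For anyone reproducing, the review branch can be fetched and built roughly
like this (a sketch; the config handling and .deb build target are
illustrative, not necessarily the exact steps I used):

  git clone --depth 1 --branch linux-6.1.y \
      git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
  cd linux-stable-rc
  cp /boot/config-"$(uname -r)" .config   # start from the running kernel's config
  make olddefconfig                       # accept defaults for new options
  make -j"$(nproc)" bindeb-pkg            # build .deb packages for an Ubuntu host
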
Issues found:
- NVMe drive failed shortly after resume from suspend:
pcieport 0000:00:1d.0: AER: Corrected error message received from 0000:00:1d.0
pcieport 0000:00:1d.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID)
pcieport 0000:00:1d.0: device [8086:9d18] error status/mask=00000001/00002000
pcieport 0000:00:1d.0: [ 0] RxErr
[... repeats around 20 times ]
nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
nvme nvme0: Does your device have a faulty power saving mode enabled?
nvme nvme0: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off" and report a bug
nvme 0000:03:00.0: enabling device (0000 -> 0002)
nvme nvme0: Removing after probe failure status: -19
nvme0n1: detected capacity change from 1000215216 to 0
[...]
md/raid1:md1: Disk failure on nvme0n1p3, disabling device.
md/raid1:md1: Operation continuing on 1 devices.
After a cold reboot, the drive is visible again and apparently functioning
normally; SMART data claims it is healthy. The same thing happened three
weeks ago, on Linux 5.15.0-107-generic from Ubuntu, also shortly after a
resume from suspend. As no recent patches in this stable series appear to
touch NVMe / PCIe, I'm giving a Tested-by: nonetheless.
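
Should it recur, the workaround suggested in the log above can be applied
from the kernel command line; a sketch, assuming Ubuntu's usual GRUB setup
(the smartctl / mdadm commands use the device names from the log and need
smartmontools / mdadm installed):

  # In /etc/default/grub, append to GRUB_CMDLINE_LINUX_DEFAULT:
  #   nvme_core.default_ps_max_latency_us=0 pcie_aspm=off
  sudo update-grub

  # After the cold reboot: check drive health, then re-add the kicked member
  sudo smartctl -a /dev/nvme0
  sudo mdadm /dev/md1 --re-add /dev/nvme0n1p3
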
Stack (rough lsblk sketch below):
- amd64,
- ext4 on top of LVM on top of LUKS on top of mdraid on top of
  NVMe and SATA drives (the SATA drive in write-mostly mode).
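
A sketch of how lsblk renders this stack (names like md1_crypt and vg0-root
are hypothetical placeholders):

  $ lsblk -o NAME,TYPE,FSTYPE
  NAME              TYPE   FSTYPE
  nvme0n1           disk
  `-nvme0n1p3       part   linux_raid_member
    `-md1           raid1  crypto_LUKS
      `-md1_crypt   crypt  LVM2_member
        `-vg0-root  lvm    ext4
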
Tested (lightly):
- suspend to RAM (command sketch after this list),
- suspend to disk,
- virtual machines in QEMU (both i386 and amd64 guests),
- Bluetooth (Realtek RTL8822BE),
- GPU (Intel HD Graphics 620, tested with two Unigine benchmarks),
- WiFi (Realtek RTL8822BE),
- webcam.
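
The suspend tests were of this general shape (a sketch; a systemd-based
system is assumed):

  sudo systemctl suspend                # suspend to RAM, then resume
  sudo systemctl hibernate              # suspend to disk
  dmesg --level=err,warn | tail -n 50   # look for new errors after resume
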
Filesystems tested with fsstress (sample invocation below):
- ext4,
- NFS client,
- exFAT,
- NTFS via FUSE (ntfs-3g).
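
Each filesystem got a short fsstress run along these lines (a sketch;
fsstress comes from LTP or xfstests, and the path and counts are
illustrative):

  sudo fsstress -d /mnt/test/fsstress -n 1000 -p 4 -l 5
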
Greetings,
Mateusz