Message-ID: <CAOuPNLjpMaa1hC-VOzN_aQCuLt=T6PkgC3NBBDp8BBE5xhHTew@mail.gmail.com>
Date: Sat, 29 Jun 2024 00:14:28 +0530
From: Pintu Agarwal <pintu.ping@...il.com>
To: dm-devel@...hat.com, open list <linux-kernel@...r.kernel.org>, snitzer@...hat.com,
agk@...hat.com
Subject: device-mapper: verity: 251:0: metadata block 18398 is corrupted
Hi,
In one of our NAND products (arm64) we are running kernel 5.15 with
squashfs + ubiblock + dm-verity on the rootfs, booted via a ramdisk.
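For context, the rootfs mapping is assembled from the ramdisk roughly as
below; the device names, offsets and root hash here are placeholders, not
our exact init script:
{{{
# attach the rootfs MTD partition and expose the UBI volume as a block device
ubiattach /dev/ubi_ctrl -m 10
ubiblock --create /dev/ubi0_0

# map the verity device over the ubiblock device; in this sketch the hash
# tree is appended to the same volume at HASH_OFFSET
veritysetup open /dev/ubiblock0_0 vroot /dev/ubiblock0_0 "$ROOT_HASH" \
        --hash-offset="$HASH_OFFSET"

mount -t squashfs -o ro /dev/mapper/vroot /newroot
}}}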
Recently we enabled "NAND page cache" in our NAND driver to improve
read performance.
But after enabling this feature, dm-verity detects metadata corruption
during boot-up and the device crashes.
If we disable dm-verity (from the ramdisk), we don't see this problem
and the device boots fine.
Are there any dm-verity-specific changes (in newer kernel versions)
that we might be missing here?
This is the failure log.
What does the metadata corruption below indicate?
Does it mean the root_hash is corrupted?
{{{
[ 7.731295][ T136] device-mapper: verity: 251:0: metadata block 18398 is corrupted
[ 7.742723][ T136] ------------[ cut here ]------------
[ 7.748206][ T136] workqueue: WQ_MEM_RECLAIM kverityd:verity_work is flushing !WQ_MEM_RECLAIM k_sm_usb:0x0
[ 7.754840][ T136] WARNING: CPU: 3 PID: 136 at kernel/workqueue.c:2660 check_flush_dependency+0x12c/0x134
[...]
[ 7.809215][ T136] pc : check_flush_dependency+0x12c/0x134
[ 7.814933][ T136] lr : check_flush_dependency+0x12c/0x134
[...]
[ 7.905120][ T136] Call trace:
[ 7.908345][ T136] check_flush_dependency+0x12c/0x134
[ 7.913710][ T136] flush_workqueue+0x1cc/0x4dc
[ 7.918452][ T136] dwc3_msm_shutdown+0x48/0x58
[ 7.923195][ T136] platform_shutdown+0x24/0x30
[ 7.927937][ T136] device_shutdown+0x170/0x220
[ 7.932680][ T136] kernel_restart+0x40/0xfc
[ 7.937152][ T136] verity_handle_err+0x11c/0x1b0
[ 7.942071][ T136] verity_hash_for_block+0x260/0x2d8
[ 7.947343][ T136] verity_verify_io+0xe8/0x568
[ 7.952085][ T136] verity_work+0x24/0x74
[ 7.956297][ T136] process_one_work+0x1a8/0x3a0
[ 7.961133][ T136] worker_thread+0x22c/0x490
[ 7.965700][ T136] kthread+0x154/0x218
[ 7.969725][ T136] ret_from_fork+0x10/0x20
[ 7.974116][ T136] ---[ end trace 166e4069e91d0a01 ]---
}}}
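To rule out a genuinely corrupted image (as opposed to blocks being
mis-read through the new NAND page cache path), an offline check along
these lines can be run from the ramdisk; device paths, root hash and
hash offset are placeholders again:
{{{
# re-verify the whole hash tree offline, bypassing the mapped device
veritysetup verify /dev/ubiblock0_0 /dev/ubiblock0_0 "$ROOT_HASH" \
        --hash-offset="$HASH_OFFSET"

# dump the active verity table; a "restart_on_corruption" argument there
# would explain the kernel_restart() seen in the trace above
dmsetup table vroot
}}}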
Thanks,
Pintu