Message-ID: <79ab5533-82d1-4f06-461b-689e94f738ec@huaweicloud.com>
Date: Wed, 20 Aug 2025 15:17:16 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: kernel test robot <oliver.sang@...el.com>, Rajeev Mishra <rajeevm@....com>
Cc: oe-lkp@...ts.linux.dev, lkp@...el.com, linux-block@...r.kernel.org,
axboe@...nel.dk, yukuai1@...weicloud.com, linux-kernel@...r.kernel.org,
"yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH v4 2/2] loop: use vfs_getattr_nosec for accurate file size
Hi,
On 2025/08/20 12:55, kernel test robot wrote:
>
>
> Hello,
>
> kernel test robot noticed "xfstests.generic.563.fail" on:
>
> commit: fb455b8a6ac932603a8c0dbb787f8330b0924834 ("[PATCH v4 2/2] loop: use vfs_getattr_nosec for accurate file size")
> url: https://github.com/intel-lab-lkp/linux/commits/Rajeev-Mishra/loop-use-vfs_getattr_nosec-for-accurate-file-size/20250815-031401
> base: https://git.kernel.org/cgit/linux/kernel/git/axboe/linux-block.git for-next
> patch link: https://lore.kernel.org/all/20250814191004.60340-2-rajeevm@hpe.com/
> patch subject: [PATCH v4 2/2] loop: use vfs_getattr_nosec for accurate file size
>
> in testcase: xfstests
> version: xfstests-x86_64-e1e4a0ea-1_20250714
> with following parameters:
>
> disk: 4HDD
> fs: ext4
> test: generic-563
>
>
>
> config: x86_64-rhel-9.4-func
> compiler: gcc-12
> test machine: 4 threads Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz (Skylake) with 32G memory
>
> (please refer to attached dmesg/kmsg for entire log/backtrace)
>
>
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <oliver.sang@...el.com>
> | Closes: https://lore.kernel.org/oe-lkp/202508200409.b2459c02-lkp@intel.com
>
> 2025-08-17 21:02:18 export TEST_DIR=/fs/sda1
> 2025-08-17 21:02:18 export TEST_DEV=/dev/sda1
> 2025-08-17 21:02:18 export FSTYP=ext4
> 2025-08-17 21:02:18 export SCRATCH_MNT=/fs/scratch
> 2025-08-17 21:02:18 mkdir /fs/scratch -p
> 2025-08-17 21:02:18 export SCRATCH_DEV=/dev/sda4
> 2025-08-17 21:02:18 echo generic/563
> 2025-08-17 21:02:18 ./check -E tests/exclude/ext4 generic/563
> FSTYP -- ext4
> PLATFORM -- Linux/x86_64 lkp-skl-d03 6.17.0-rc1-00020-gfb455b8a6ac9 #1 SMP PREEMPT_DYNAMIC Mon Aug 18 03:05:49 CST 2025
> MKFS_OPTIONS -- -F /dev/sda4
> MOUNT_OPTIONS -- -o acl,user_xattr /dev/sda4 /fs/scratch
>
> generic/563 [failed, exit status 1]- output mismatch (see /lkp/benchmarks/xfstests/results//generic/563.out.bad)
> --- tests/generic/563.out 2025-07-14 17:48:52.000000000 +0000
> +++ /lkp/benchmarks/xfstests/results//generic/563.out.bad 2025-08-17 21:02:31.367411171 +0000
> @@ -1,14 +1 @@
> QA output created by 563
> -read/write
> -read is in range
> -write is in range
> -write -> read/write
> -read is in range
> -write is in range
> ...
> (Run 'diff -u /lkp/benchmarks/xfstests/tests/generic/563.out /lkp/benchmarks/xfstests/results//generic/563.out.bad' to see the entire diff)
> Ran: generic/563
> Failures: generic/563
> Failed 1 of 1 tests
>
This can be reproduced with just losetup /dev/loop0 /dev/sda. The root
cause is that /dev/sda is from devtmpfs, where the getattr method is
shmem_getattr, hence stat->size will be set to zero.
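
For reference, a minimal userspace sketch of that observation (the
/dev/sda path is just an illustration, any block device node behaves
the same, since the size comes from the devtmpfs inode):

#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>	/* AT_FDCWD */
#include <sys/stat.h>	/* statx(), STATX_SIZE (glibc >= 2.28) */

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/dev/sda";
	struct statx stx;

	if (statx(AT_FDCWD, path, 0, STATX_SIZE, &stx)) {
		perror("statx");
		return 1;
	}

	/* Without STATX_SIZE handling in bdev_statx(), this prints 0
	 * instead of the device capacity. */
	printf("%s: stx_size=%llu\n", path,
	       (unsigned long long)stx.stx_size);
	return 0;
}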
In vfs_getattr_nosec(), if the inode is a block device, bdev_statx()
will be called to override the result; however, STATX_SIZE is not
handled there. I feel handling STATX_SIZE in bdev_statx() would make
sense:
diff --git a/block/bdev.c b/block/bdev.c
index b77ddd12dc06..9672bb6ec4ad 100644
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -1324,6 +1324,9 @@ void bdev_statx(const struct path *path, struct kstat *stat, u32 request_mask)
 	if (!bdev)
 		return;
 
+	if (request_mask & STATX_SIZE)
+		stat->size = bdev_nr_bytes(bdev);
+
 	if (request_mask & STATX_DIOALIGN) {
 		stat->dio_mem_align = bdev_dma_alignment(bdev) + 1;
 		stat->dio_offset_align = bdev_logical_block_size(bdev);
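
With that hunk, the caller side can then get the real capacity through
the generic path. Roughly, as an untested sketch (the helper name
loop_backing_size is made up here for illustration):

#include <linux/fs.h>
#include <linux/stat.h>
#include <linux/fcntl.h>

static loff_t loop_backing_size(struct file *file)
{
	struct kstat stat;
	int ret;

	/* shmem_getattr() on the devtmpfs inode fills size = 0, but
	 * bdev_statx() now overrides it with bdev_nr_bytes(). */
	ret = vfs_getattr_nosec(&file->f_path, &stat, STATX_SIZE,
				AT_STATX_SYNC_AS_STAT);
	if (ret)
		return ret;

	return stat.size;
}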
Thanks,
Kuai
>
>
>
> The kernel config and materials to reproduce are available at:
> https://download.01.org/0day-ci/archive/20250820/202508200409.b2459c02-lkp@intel.com
>
>
>