Message-ID: <14e2378f-1d0f-e4b6-d0e2-eb19f1b12c6c@windriver.com>
Date: Fri, 15 May 2020 00:10:16 +0800
From: "Xu, Yanfei" <yanfei.xu@...driver.com>
To: "Darrick J. Wong" <darrick.wong@...cle.com>,
Jens Axboe <axboe@...nel.dk>
Cc: LKML <linux-kernel@...r.kernel.org>, linux-block@...r.kernel.org,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: BUG:loop:blk_update_request: I/O error, dev loop6, sector 49674
op 0x9:(WRITE_ZEROES)
On 5/14/20 12:41 AM, Darrick J. Wong wrote:
> [add fsdevel to cc]
>
> On Tue, May 12, 2020 at 08:22:08PM -0600, Jens Axboe wrote:
>> On 5/12/20 8:14 PM, Xu, Yanfei wrote:
>>> Hi,
>>>
>>> After operating on a /dev/loop device set up via losetup with an image placed in tmpfs,
>>>
>>> I got the following ERROR messages:
>>>
>>> ----------------[cut here]---------------------
>>>
>>> [ 183.110770] blk_update_request: I/O error, dev loop6, sector 524160 op 0x9:(WRITE_ZEROES) flags 0x1000800 phys_seg 0 prio class 0
>>> [ 183.123949] blk_update_request: I/O error, dev loop6, sector 522 op 0x9:(WRITE_ZEROES) flags 0x1000800 phys_seg 0 prio class 0
>>> [ 183.137123] blk_update_request: I/O error, dev loop6, sector 16906 op 0x9:(WRITE_ZEROES) flags 0x1000800 phys_seg 0 prio class 0
>>> [ 183.150314] blk_update_request: I/O error, dev loop6, sector 32774 op 0x9:(WRITE_ZEROES) flags 0x1000800 phys_seg 0 prio class 0
>>> [ 183.163551] blk_update_request: I/O error, dev loop6, sector 49674 op 0x9:(WRITE_ZEROES) flags 0x1000800 phys_seg 0 prio class 0
>>> [ 183.176824] blk_update_request: I/O error, dev loop6, sector 65542 op 0x9:(WRITE_ZEROES) flags 0x1000800 phys_seg 0 prio class 0
>>> [ 183.190029] blk_update_request: I/O error, dev loop6, sector 82442 op 0x9:(WRITE_ZEROES) flags 0x1000800 phys_seg 0 prio class 0
>>> [ 183.203281] blk_update_request: I/O error, dev loop6, sector 98310 op 0x9:(WRITE_ZEROES) flags 0x1000800 phys_seg 0 prio class 0
>>> [ 183.216531] blk_update_request: I/O error, dev loop6, sector 115210 op 0x9:(WRITE_ZEROES) flags 0x1000800 phys_seg 0 prio class 0
>>> [ 183.229914] blk_update_request: I/O error, dev loop6, sector 131078 op 0x9:(WRITE_ZEROES) flags 0x1000800 phys_seg 0 prio class 0
>>>
>>>
>>> I found the commit which introduced this issue by git bisect:
>>>
>>> commit efcfec57 ("loop: fix no-unmap write-zeroes request behavior")
>>
>> Please CC the author of that commit too. Leaving the rest quoted below.
>>
>>> Kernel version: Linux version 5.6.0
>>>
>>> Frequency: every time
>>>
>>> steps to reproduce:
>>>
>>> 1.git clone mainline kernel
>>>
>>> 2.compile kernel with ARCH=x86_64, and then boot the system with it
>>>
>>> (it seems other architectures can also reproduce it)
>>>
>>> 3.make an image by "dd of=/tmp/image if=/dev/zero bs=1M count=256"
>>>
>>> 4.place the image in a tmpfs directory
>>>
>>> 5.losetup /dev/loop6 /PATH/TO/image
>>>
>>> 6.mkfs.ext2 /dev/loop6
>>>
>>>
>>> Any comments will be appreciated.
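
(Condensed, steps 3-6 boil down to the commands below; this assumes
/tmp is mounted as tmpfs, which is common but worth verifying with
"df -T /tmp" first:)

    dd of=/tmp/image if=/dev/zero bs=1M count=256   # image lives on tmpfs
    losetup /dev/loop6 /tmp/image                   # attach loop device
    mkfs.ext2 /dev/loop6                            # triggers the WRITE_ZEROES errors
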
>
> Hm, you got IO failures here because shmem_fallocate doesn't support
> FALLOC_FL_ZERO_RANGE. That might not be too hard to add, but there's a
> broader problem of detecting fallocate support--
>
> The loop driver assumes that if the file has an fallocate method then
> it's safe to set max_discard_sectors (and now max_write_zeroes_sectors)
> to UINT_MAX>>9. There's currently no good way to detect which modes are
> supported by a filesystem's ->fallocate function, or to discover the
> required granularity.
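
(For context, loop_config_discard() in drivers/block/loop.c does
roughly the following in 5.6 -- abridged from my reading of the tree,
so please check the source for the exact conditions:)

    /* abridged: the gate is only the presence of ->fallocate,
     * not which fallocate modes the filesystem implements */
    if (!file->f_op->fallocate) {
            blk_queue_max_discard_sectors(q, 0);
            blk_queue_max_write_zeroes_sectors(q, 0);
            blk_queue_flag_clear(QUEUE_FLAG_DISCARD, q);
            return;
    }

    q->limits.discard_granularity = inode->i_sb->s_blocksize;
    blk_queue_max_discard_sectors(q, UINT_MAX >> 9);
    blk_queue_max_write_zeroes_sectors(q, UINT_MAX >> 9);
    blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
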
>
> Right now we tell application developers that the way to discover the
> conditions under which fallocate will work is to try it and see if they
> get EOPNOTSUPP.
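
(That "try it and see" probing looks like this from userspace -- a
minimal sketch, with a hypothetical test file name; run it on the
filesystem in question:)

    /* probe.c: build with `gcc -o probe probe.c` */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <linux/falloc.h>       /* FALLOC_FL_ZERO_RANGE */

    int main(void)
    {
            int fd = open("testfile", O_RDWR | O_CREAT, 0644);
            if (fd < 0) { perror("open"); return 1; }

            /* EOPNOTSUPP means this fs lacks the requested mode */
            if (fallocate(fd, FALLOC_FL_ZERO_RANGE, 0, 4096) < 0)
                    printf("FALLOC_FL_ZERO_RANGE: %s\n",
                           errno == EOPNOTSUPP ? "not supported" : "other error");
            else
                    printf("FALLOC_FL_ZERO_RANGE: supported\n");
            close(fd);
            return 0;
    }
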
>
> One way to "fix" this would be to fix lo_fallocate to set RQF_QUIET if
> the filesystem returns EOPNOTSUPP, which gets rid of the log messages.
> We probably ought to zero out the appropriate max_*_sectors if we get
> EOPNOTSUPP.
Many thanks for your detailed reply :) The lack of a good method for
detecting fallocate support is a real problem, and the "fix" you
mentioned sounds like a good workaround for the current situation.
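
If it helps, a minimal sketch of that idea against 5.6's
lo_fallocate() -- untested, and the exact placement is my guess:

    ret = file->f_op->fallocate(file, mode, pos, blk_rq_bytes(rq));
    if (ret == -EOPNOTSUPP)
            /* fs can't do this mode; keep blk_update_request()
             * from logging the failed request */
            rq->rq_flags |= RQF_QUIET;
    if (unlikely(ret && ret != -EINVAL && ret != -EOPNOTSUPP))
            ret = -EIO;
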
Best regards,
Yanfei
>
> --D
>
>>>
>>>
>>> Thanks,
>>>
>>> Yanfei
>>>
>>
>>
>> --
>> Jens Axboe
>>