Message-Id: <1620571445.2k94orj8ee.none@localhost>
Date:   Sun, 09 May 2021 10:47:26 -0400
From:   "Alex Xu (Hello71)" <alex_y_xu@...oo.ca>
To:     Jens Axboe <axboe@...nel.dk>, bgoncalv@...hat.com,
        bvanassche@....org, dm-crypt@...ut.de, hch@....de,
        jaegeuk@...nel.org, linux-block@...r.kernel.org,
        linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-nvme@...ts.infradead.org, ming.lei@...hat.com,
        Changheun Lee <nanich.lee@...sung.com>, yi.zhang@...hat.com
Subject: Re: regression: data corruption with ext4 on LUKS on nvme with
 torvalds master

Excerpts from Jens Axboe's message of May 8, 2021 11:51 pm:
> On 5/8/21 8:29 PM, Alex Xu (Hello71) wrote:
>> Excerpts from Alex Xu (Hello71)'s message of May 8, 2021 1:54 pm:
>>> Hi all,
>>>
>>> Using torvalds master, I recently encountered data corruption on my ext4 
>>> volume on LUKS on NVMe. Specifically, during heavy writes, the system 
>>> partially hangs; SysRq-W shows that processes are blocked in the kernel 
>>> on I/O. After forcibly rebooting, chunks of files are replaced with 
>>> other, unrelated data. I'm not sure exactly what the data is; some of it 
>>> is unknown binary data, but in at least one case, a list of file paths 
>>> was inserted into a file, indicating that the data is misdirected after 
>>> encryption.
>>>
>>> This issue appears to affect files receiving writes around the time of 
>>> the hang, but it affects both new and old data: for example, my shell 
>>> history file was corrupted going back many months.
>>>
>>> The drive reports no SMART issues.
>>>
>>> I believe this is a regression in the kernel related to something merged 
>>> in the last few days, as it consistently occurs with my most recent 
>>> kernel versions, but disappears when reverting to an older kernel.
>>>
>>> I haven't investigated further, such as by bisecting. I hope this is 
>>> sufficient information to give someone a lead on the issue, and if it is 
>>> a bug, nail it down before anybody else loses data.
>>>
>>> Regards,
>>> Alex.
>>>
>> 
>> I found the following test that reproduces a hang, which I suspect is 
>> the underlying cause:
>> 
>> host$ cd /tmp
>> host$ truncate -s 10G drive
>> host$ qemu-system-x86_64 -drive format=raw,file=drive,if=none,id=drive -device nvme,drive=drive,serial=1 [... more VM setup options]
>> guest$ cryptsetup luksFormat /dev/nvme0n1
>> [accept warning, use any password]
>> guest$ cryptsetup open /dev/nvme0n1 test
>> [enter password]
>> guest$ mkfs.ext4 /dev/mapper/test
>> [normal output...]
>> Creating journal (16384 blocks): [hangs forever]
>> 
>> I bisected this issue to:
>> 
>> cd2c7545ae1beac3b6aae033c7f31193b3255946 is the first bad commit
>> commit cd2c7545ae1beac3b6aae033c7f31193b3255946
>> Author: Changheun Lee <nanich.lee@...sung.com>
>> Date:   Mon May 3 18:52:03 2021 +0900
>> 
>>     bio: limit bio max size
>> 
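>> For reference, the bisect ran roughly as follows (a sketch: using v5.12 
>> as the good baseline is an assumption for illustration, and the 
>> build/boot/test step is abbreviated):
>> 
>> host$ git bisect start
>> host$ git bisect bad              # current master: mkfs.ext4 hangs
>> host$ git bisect good v5.12       # assumed-good older kernel
>> [build the kernel, boot the VM, run the mkfs test above, and mark each 
>>  step "git bisect good" or "git bisect bad" until the first bad commit 
>>  is found]
>> host$ git bisect reset
>> 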
>> I didn't try reverting this commit or further reducing the test case. 
>> Let me know if you need my kernel config or other information.
> 
> If you have time, please do test with that reverted. I'd be anxious to
> get this revert queued up for 5.13-rc1.
> 
> -- 
> Jens Axboe
> 
> 

I tested reverting it on top of b741596468b010af2846b75f5e75a842ce344a6e 
("Merge tag 'riscv-for-linus-5.13-mw1' of 
git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux"); with the 
revert, the hang no longer occurs. I didn't check whether this also fixes 
the data corruption, but I assume it does.
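
For reference, the revert test was along these lines (a minimal sketch; 
the build and boot steps are abbreviated, and -j$(nproc) is just a 
typical build invocation):

host$ git checkout b741596468b010af2846b75f5e75a842ce344a6e
host$ git revert --no-edit cd2c7545ae1beac3b6aae033c7f31193b3255946
host$ make -j$(nproc)
[boot the resulting kernel in the VM and rerun the mkfs test]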

I also tested a 1 GB image and a virtio-blk interface; both work with 
and without the revert.
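
The virtio-blk run only swapped the -device option, roughly as below (a 
sketch; the remaining VM options are the same as in the nvme case, and 
the guest disk shows up as /dev/vda):

host$ qemu-system-x86_64 -drive format=raw,file=drive,if=none,id=drive -device virtio-blk-pci,drive=drive [... more VM setup options]
guest$ cryptsetup luksFormat /dev/vda
[... same steps as above, using /dev/vda instead of /dev/nvme0n1]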

The SysRq-W (Show Blocked State) output from the VM (without the revert):

sysrq: Show Blocked State
task:kworker/u2:0    state:D stack:    0 pid:    7 ppid:     2 flags:0x00004000
Workqueue: kcryptd/252:0 kcryptd_crypt
Call Trace:
 __schedule+0x1a2/0x4f0
 schedule+0x63/0xe0
 schedule_timeout+0x6a/0xd0
 ? lock_timer_base+0x80/0x80
 io_schedule_timeout+0x4c/0x70
 mempool_alloc+0xfc/0x130
 ? __wake_up_common_lock+0x90/0x90
 kcryptd_crypt+0x291/0x4e0
 process_one_work+0x1b1/0x300
 worker_thread+0x48/0x3d0
 ? process_one_work+0x300/0x300
 kthread+0x129/0x150
 ? __kthread_create_worker+0x100/0x100
 ret_from_fork+0x22/0x30
task:mkfs.ext4       state:D stack:    0 pid:  979 ppid:   964 flags:0x00004000
Call Trace:
 __schedule+0x1a2/0x4f0
 ? __schedule+0x1aa/0x4f0
 schedule+0x63/0xe0
 schedule_timeout+0x99/0xd0
 io_schedule_timeout+0x4c/0x70
 wait_for_completion_io+0x74/0xc0
 submit_bio_wait+0x46/0x60
 blkdev_issue_zeroout+0x118/0x1f0
 blkdev_fallocate+0x125/0x180
 vfs_fallocate+0x126/0x2e0
 __x64_sys_fallocate+0x37/0x60
 do_syscall_64+0x61/0x80
 ? do_syscall_64+0x6e/0x80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
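
(The traces show the kcryptd worker stuck in mempool_alloc() while 
mkfs.ext4 waits in submit_bio_wait(), i.e. the write never completes. 
For reference, the same Show Blocked State dump can be triggered from a 
guest shell, assuming the sysrq interface is enabled:)

guest$ echo w > /proc/sysrq-trigger
guest$ dmesg | tail -n 50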

Regards,
Alex.
