Message-ID: <a01ab479-69e8-9395-7d24-9de1eec28aff@acm.org>
Date:   Thu, 13 May 2021 08:59:10 -0700
From:   Bart Van Assche <bvanassche@....org>
To:     Theodore Ts'o <tytso@....edu>,
        Changheun Lee <nanich.lee@...sung.com>
Cc:     alex_y_xu@...oo.ca, axboe@...nel.dk, bgoncalv@...hat.com,
        dm-crypt@...ut.de, hch@....de, jaegeuk@...nel.org,
        linux-block@...r.kernel.org, linux-ext4@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-nvme@...ts.infradead.org,
        ming.lei@...hat.com, yi.zhang@...hat.com
Subject: Re: regression: data corruption with ext4 on LUKS on nvme with
 torvalds master

On 5/13/21 7:15 AM, Theodore Ts'o wrote:
> On Thu, May 13, 2021 at 06:42:22PM +0900, Changheun Lee wrote:
>>
>> The problem might be caused by memory exhaustion, and the memory
>> exhaustion would be caused by setting a small bio_max_size. Actually it
>> was not reproduced in my VM environment at first, but I reproduced the
>> same problem when bio_max_size was forced to 8KB. Too many bio
>> allocations would occur with an 8KB bio_max_size.
> 
> Hmm... I'm not sure how to align your diagnosis with the symptoms in
> the bug report.  If we were limited by memory, that should slow down
> the I/O, but we should still be making forward progress, no?  And a
> forced reboot should not result in data corruption, unless maybe there
> was a missing check for a failed memory allocation, causing data to be
> written to the wrong location, or a missing error check leading to the
> block or file system layer not noticing that a write had failed
> (although again, memory exhaustion should not lead to failed writes;
> it might slow us down, sure, but if writes are failing, something
> is Badly Going Wrong --- things like writes to the swap device or
> writes by the page cleaner must succeed, or else Things Would Go Bad
> In A Hurry).
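
(For scale on the 8KB point above, arithmetic of mine rather than
anything from the report: with bio_max_size forced to 8KB, a single
1 GiB write is split into 1 GiB / 8 KiB = 131,072 bios, each with its
own allocation, so the allocation count does grow steeply at that
setting.)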

After the LUKS data corruption issue was reported, I decided to take a
look at the dm-crypt code. In that code I found the following:

static void clone_init(struct dm_crypt_io *io, struct bio *clone)
{
	struct crypt_config *cc = io->cc;

	clone->bi_private = io;
	clone->bi_end_io  = crypt_endio;
	bio_set_dev(clone, cc->dev->bdev);
	clone->bi_opf	  = io->base_bio->bi_opf;
}
[ ... ]
static struct bio *crypt_alloc_buffer(struct dm_crypt_io *io, unsigned size)
{
	[ ... ]
	clone = bio_alloc_bioset(GFP_NOIO, nr_iovecs, &cc->bs);
	[ ... ]
	clone_init(io, clone);
	[ ... ]
	for (i = 0; i < nr_iovecs; i++) {
		[ ... ]
		bio_add_page(clone, page, len, 0);

		remaining_size -= len;
	}
	[ ... ]
}

My interpretation is that crypt_alloc_buffer() allocates a clone bio,
associates it with the underlying device and fills it with pages. The
input bio may have a size up to UINT_MAX, while the new limit on the
size of the cloned bio is max_sectors * 512. That causes bio_add_page()
to fail if the input bio is larger than max_sectors * 512, and since the
return value of bio_add_page() is not checked in the loop above, the
failure goes unnoticed, hence the data corruption. Please note that this
is a guess only and that I'm not familiar with the dm-crypt code.
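
As an illustration only, here is a minimal sketch of a defensive variant
of that loop (untested, and not a proposed fix): bio_add_page() returns
the number of bytes it actually added, so a short add can at least be
detected instead of being silently ignored:

	for (i = 0; i < nr_iovecs; i++) {
		[ ... ]
		/*
		 * bio_add_page() returns the number of bytes added, which
		 * is 0 once the clone bio is already at its maximum size.
		 */
		if (bio_add_page(clone, page, len, 0) != len) {
			/* Clone is full: fail the I/O with an error
			 * instead of continuing silently. */
			[ ... ]
			break;
		}

		remaining_size -= len;
	}

Whether the right fix is to fail here or to size the clone such that
bio_add_page() cannot fail is a separate question.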

Bart.
