Message-Id: <20200421042039.BF8074C046@d06av22.portsmouth.uk.ibm.com>
Date:   Tue, 21 Apr 2020 09:50:38 +0530
From:   Ritesh Harjani <riteshh@...ux.ibm.com>
To:     bugzilla-daemon@...zilla.kernel.org, linux-ext4@...r.kernel.org,
        "Theodore Ts'o" <tytso@....edu>, Jan Kara <jack@...e.cz>
Cc:     "Darrick J. Wong" <darrick.wong@...cle.com>,
        linux-fsdevel@...r.kernel.org
Subject: Re: [Bug 207367] Accraid / aptec / Microsemi / ext4 / larger then
 16TB

Hello All,

On 4/21/20 5:21 AM, bugzilla-daemon@...zilla.kernel.org wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=207367
> 
> --- Comment #3 from Christian Kujau (lists@...dbynature.de) ---
> On Mon, 20 Apr 2020, bugzilla-daemon@...zilla.kernel.org wrote:
>> with kernel 5.7 only volumes under 16TB can be mount.
> 
> While this bug report is still missing details, I was able to reproduce
> this issue. Contrary to the subject line, it is not hardware related at
> all.
> 
> Linux 5.5 (Debian), creating a 17 TB sparse device (4 GB backing device):
> 
>   $ echo "0 36507222016 zero" | dmsetup create zero0
>   $ echo "0 36507222016 snapshot /dev/mapper/zero0 /dev/vdb p 128" | \
>     dmsetup create sparse0
> 
>   $ mkfs.ext4 -F /dev/mapper/sparse0
>   Creating filesystem with 4563402752 4k blocks and 285212672 inodes
>   Creating journal (262144 blocks): done
> 
>   $ mount -t ext4 /dev/mapper/sparse0 /mnt/disk/
>   $ df -h /mnt/disk/
>   Filesystem      Size  Used Avail Use% Mounted on
>   /dev/mapper/sparse0   17T   24K   17T   1% /mnt/disk
> 
> 
> The same fails on 5.7-rc2 (vanilla) with:
> 
> 
> ------------[ cut here ]------------
> would truncate bmap result
> WARNING: CPU: 0 PID: 640 at fs/iomap/fiemap.c:121
> iomap_bmap_actor+0x3a/0x40

Sorry for not catching this in the first place.

So the problem really is that the iomap_bmap() API gives a WARNING and
doesn't return the physical block address when the address is > INT_MAX.
(I guess this is mostly because ioctl_fibmap() passes a user integer
pointer, and until now users of iomap_bmap() have mostly been coming
from that ioctl path.)
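
To make the user-facing constraint concrete: FIBMAP hands the physical
block number back through a plain int, which is why iomap_bmap()
refuses (and warns on) addresses above INT_MAX rather than silently
truncating them. A minimal user-space sketch of my own (not from the
kernel tree; FIBMAP needs root/CAP_SYS_RAWIO):

	#include <stdio.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/fs.h>		/* FIBMAP */

	int main(int argc, char **argv)
	{
		if (argc != 2) {
			fprintf(stderr, "usage: %s <file>\n", argv[0]);
			return 1;
		}

		int fd = open(argv[1], O_RDONLY);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		/* in: logical block 0; out: physical block, only an int */
		int block = 0;
		if (ioctl(fd, FIBMAP, &block) < 0) {
			perror("ioctl(FIBMAP)");
			close(fd);
			return 1;
		}

		printf("logical block 0 maps to physical block %d\n", block);
		close(fd);
		return 0;
	}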

FYI - I do see that bmap() is also used by the APIs/subsystems below.
Not sure whether any of the subsystems mentioned below might still fail
later if the underlying FS moves to the iomap_bmap() interface, or for
any existing callers of iomap_bmap():

1. mm/page_io.c (generic_swapfile_activate() func)
2. fs/cachefiles/rdwr.c
3. fs/ecryptfs/mmap.c
4. fs/jbd2/journal.c


But the changes done in ext4 to move to the iomap_bmap() interface
resulted in this issue, since jbd2 tries to find the block mapping
of ext4's on-disk journal inode, and on a larger filesystem this can
fail given that iomap_bmap() is designed not to return an address
above INT_MAX.
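
To put a number on "larger filesystem" (my own back-of-the-envelope,
not from the report): with 4 KiB blocks, INT_MAX block numbers cover
just under 8 TiB, so the 17 TB test filesystem above (4563402752
blocks) necessarily has blocks that bmap cannot report - and per the
warning above, the journal blocks evidently land in that range here.

	#include <stdio.h>
	#include <limits.h>

	int main(void)
	{
		unsigned long long block_size = 4096;
		/* block count taken from the mkfs.ext4 output above */
		unsigned long long fs_blocks  = 4563402752ULL;
		unsigned long long limit      = (unsigned long long)INT_MAX * block_size;

		printf("INT_MAX blocks * 4 KiB = %llu bytes (%.2f TiB)\n",
		       limit, (double)limit / (1ULL << 40));
		printf("filesystem blocks      = %llu, INT_MAX = %d\n",
		       fs_blocks, INT_MAX);
		return 0;
	}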

So as I see it there are 3 options from here. I wanted to put this
on the mailing list for discussion.

1. Make changes in iomap_bmap() to return the block address mapping.
But I would still like to mention that the iomap designers may not
agree with this, since the general direction is to get rid of the
bmap() interface anyway.

2. Revert the "bmap & fiemap to move to iomap interface" patch series.
(Why fiemap too? Because if we decide to revert bmap anyway, then we
had better also address the performance-numbers report coming from
fiemap. And because of the 3rd option below: if iomap_bmap() is not
changed, then we had better keep both of these interfaces as they are
until we have a solution like option 3.)

3. Move to a new internal API like fiemap. But we need to change
fiemap in a way that also allows it to be used by internal kernel
callers, since as of now the fiemap_extent struct is assumed to be
a user pointer (see the struct and the rough sketch after it below).

(But the 3rd option, as I see it, won't be possible in the time frame
we have to fix this issue. Also note that if we decide to revert the
changes, the long-term path would still be to work on making fiemap
usable by internal kernel APIs too.)

struct fiemap_extent_info {
	unsigned int fi_flags;		/* Flags as passed from user */
	unsigned int fi_extents_mapped;	/* Number of mapped extents */
	unsigned int fi_extents_max;	/* Size of fiemap_extent array */
	struct fiemap_extent __user *fi_extents_start; /* Start of
							fiemap_extent array */
};
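
To make that direction a bit more concrete, here is a purely
hypothetical sketch (not existing kernel code) of one way the extent
destination could be decoupled from user space, with the __user array
replaced by a caller-supplied callback; the names fiemap_kernel_info,
fi_fill and fi_priv are made up for illustration only:

	#include <linux/types.h>

	/* Hypothetical sketch only: extents are handed to a kernel-side
	 * callback instead of being copied to a __user array. */
	struct fiemap_kernel_info {
		unsigned int	fi_flags;		/* FIEMAP_FLAG_* as today */
		unsigned int	fi_extents_mapped;	/* extents reported so far */
		unsigned int	fi_extents_max;		/* cap on reported extents */
		int (*fi_fill)(void *priv, __u64 logical, __u64 phys,
			       __u64 len, __u32 flags);	/* kernel-side extent sink */
		void		*fi_priv;		/* context passed to fi_fill */
	};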


-ritesh
