Date:	Sat, 18 Jul 2009 10:08:10 +1000
From:	Neil Brown <neilb@...e.de>
To:	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Cc:	linux-raid@...r.kernel.org, dm-devel@...hat.com
Subject: How to handle >16TB devices on 32 bit hosts ??


Hi,
 It has recently come to my attention that Linux on a 32 bit host does
 not handle devices beyond 16TB particularly well.

 In particular, any access that goes through the page cache for the
 block device is limited to a pgoff_t number of pages.  As pgoff_t is
 "unsigned long" and hence 32 bits, and as the page size is 4096 bytes,
 the addressable range is 2^32 * 2^12 = 2^44 bytes, i.e. 16TB total.
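 Spelled out as a (trivial) user-space sketch, assuming the usual
 4096-byte pages:

	#include <stdio.h>

	int main(void)
	{
		/* 2^32 distinct pgoff_t values, 2^12 bytes per page. */
		unsigned long long limit = (1ULL << 32) * (1ULL << 12);
		printf("%llu bytes (2^44 = 16TB)\n", limit);
		return 0;
	}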

 A filesystem created on a 17TB device should be able to access and
 cache file data perfectly well, provided CONFIG_LBDAF is set.
 However, if the filesystem caches metadata through the block device's
 page cache, then metadata beyond 16TB will be a problem.

 Access to the block device (/dev/whatever) via open/read/write will
 also cause problems beyond 16TB, though if O_DIRECT is used I think
 it should work OK (it will probably try to flush out completely
 irrelevant parts of the page cache before allowing the IO, but that
 seems a benign failure mode).

 With 2TB drives easily available, more people will probably try
 building arrays this big and we cannot just assume they will only do
 it on 64bit hosts.

 So the question I really wanted to ask is: is there any point in
 allowing >16TB arrays to be created on 32bit hosts, or should we just
 disallow them?  If we allow them, what steps should we take to make
 the possible failure modes more obvious?

 As I said, I think O_DIRECT largely works fine on these devices and
 we could fix the few irregularities with little effort.  So one step
 might be to make mkfs/fsck utilities use O_DIRECT on >16TB devices on
 32bit hosts.
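 Roughly the sort of access pattern that implies (a minimal sketch,
 not tested; the device path and 17TB offset are just for
 illustration, and O_DIRECT requires suitably aligned buffer, offset
 and length):

	#define _GNU_SOURCE              /* O_DIRECT */
	#define _FILE_OFFSET_BITS 64     /* 64-bit off_t on 32bit hosts */
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	int main(void)
	{
		/* Bypassing the page cache means the 64-bit file offset
		 * is used directly; the 32-bit pgoff_t never enters
		 * the picture. */
		int fd = open("/dev/md0", O_RDONLY | O_DIRECT);
		if (fd < 0) { perror("open"); return 1; }

		void *buf;
		if (posix_memalign(&buf, 4096, 4096)) { close(fd); return 1; }

		off_t where = (off_t)17 << 40;   /* 17TB, past the 16TB mark */
		if (pread(fd, buf, 4096, where) < 0)
			perror("pread");

		free(buf);
		close(fd);
		return 0;
	}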

 Given that non-O_DIRECT access can fail (e.g. in do_generic_file_read,
       index = *ppos >> PAGE_CACHE_SHIFT
 silently truncates the page index once *ppos needs more than 44 bits)
 we should probably fail opens on devices larger than 16TB.... though
 just failing the open doesn't help if the device can change size, as
 dm and md devices can.
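 The truncation is easy to demonstrate in miniature (a user-space
 sketch assuming an ILP32 target, where "unsigned long", and hence
 pgoff_t, is 32 bits and PAGE_CACHE_SHIFT is 12):

	#include <stdio.h>

	int main(void)
	{
		long long ppos = 17LL << 40;        /* loff_t offset: 17TB */
		unsigned long index = ppos >> 12;   /* pgoff_t truncates   */

		/* The true page index, 17*2^28, exceeds 2^32-1, so on
		 * ILP32 index silently wraps to 2^28 and the access
		 * lands at the 1TB mark instead. */
		printf("wanted page %llu, got page %lu\n",
		       (unsigned long long)(ppos >> 12), index);
		return 0;
	}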

 I believe ext[234] use the block device's page cache for metadata,
 so they cannot safely be used with >16TB devices on 32bit.  Is that
 correct?  Should they fail a mount attempt?  Do they?

 Are there any filesystems that do not use the block device cache and
 so are not limited to 16TB on 32bit?

 Even if no filesystem can use >16TB on 32bit, I suspect dm can
 usefully use such a device for logical volume management, and as long
 as each logical volume does not exceed 16TB, all should be happy.  So
 completely disallowing them might not be best.

 I suppose we could add a CONFIG option to make pgoff_t be 
 "unsigned long long".  Would the cost/benefit of that be acceptable?

 Your thoughts are most welcome.

Thanks,
NeilBrown
