Date: Mon, 12 Sep 2022 15:20:46 +0200
From: Florian Weimer <fweimer@...hat.com>
To: Jeff Layton <jlayton@...nel.org>
Cc: "J. Bruce Fields" <bfields@...ldses.org>, Theodore Ts'o <tytso@....edu>,
	Jan Kara <jack@...e.cz>, NeilBrown <neilb@...e.de>,
	adilger.kernel@...ger.ca, djwong@...nel.org, david@...morbit.com,
	trondmy@...merspace.com, viro@...iv.linux.org.uk, zohar@...ux.ibm.com,
	xiubli@...hat.com, chuck.lever@...cle.com, lczerner@...hat.com,
	brauner@...nel.org, linux-man@...r.kernel.org, linux-api@...r.kernel.org,
	linux-btrfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org, ceph-devel@...r.kernel.org,
	linux-ext4@...r.kernel.org, linux-nfs@...r.kernel.org,
	linux-xfs@...r.kernel.org
Subject: Re: [man-pages RFC PATCH v4] statx, inode: document the new
 STATX_INO_VERSION field

* Jeff Layton:

> On Mon, 2022-09-12 at 14:13 +0200, Florian Weimer wrote:
>> * Jeff Layton:
>>
>> > To do this we'd need 2 64-bit fields in the on-disk and in-memory
>> > superblocks for ext4, xfs and btrfs. On the first mount after a crash,
>> > the filesystem would need to bump s_version_max by the significant
>> > increment (2^40 bits or whatever). On a "clean" mount, it wouldn't need
>> > to do that.
>> >
>> > Would there be a way to ensure that the new s_version_max value has made
>> > it to disk? Bumping it by a large value and hoping for the best might be
>> > ok for most cases, but there are always outliers, so it might be
>> > worthwhile to make an i_version increment wait on that if necessary.
>>
>> How common are unclean shutdowns in practice? Do ext4/XFS/btrfs keep
>> counters in the superblocks for journal replays that can be read easily?
>>
>> Several useful i_version applications could be negatively impacted by
>> frequent i_version invalidation.
>>
> 
> One would hope "not very often", but Oopses _are_ something that happens
> occasionally, even in very stable environments, and it would be best if
> what we're building can cope with them.

I was wondering whether such unclean shutdown events are associated with
SSD “unsafe shutdowns”, as identified by the SMART counter. I think
those aren't necessarily restricted to oopses or various forms of power
loss (maybe depending on file system/device-mapper configuration)? I
admit it's possible that the file system is shut down cleanly before the
kernel requests the power-off state from the firmware, but the
underlying SSD is not.

Thanks,
Florian
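
[Editor's note: the bump-on-unclean-mount scheme quoted above can be
sketched in a few lines of C. Only s_version_max and the 2^40 increment
come from Jeff's quoted proposal; the struct, function name, and the
"unclean" flag are hypothetical illustration, not actual ext4/XFS/btrfs
code.]

	#include <stdbool.h>
	#include <stdint.h>

	#define CRASH_INCREMENT (UINT64_C(1) << 40)  /* "2^40 ... or whatever" */

	struct sb_version {
		uint64_t s_version;     /* latest change attribute handed out */
		uint64_t s_version_max; /* ceiling known to be durable on disk */
	};

	/* Called at mount time; "unclean" means journal replay was needed. */
	static void sb_version_mount(struct sb_version *sb, bool unclean)
	{
		if (!unclean)
			return;

		/*
		 * Skip past any values that may have been handed out in
		 * memory but never reached the superblock before the crash.
		 */
		sb->s_version_max += CRASH_INCREMENT;
		if (sb->s_version < sb->s_version_max)
			sb->s_version = sb->s_version_max;

		/*
		 * The open question from the quoted mail sits here: the new
		 * s_version_max must be written back and flushed before any
		 * i_version above the old ceiling is exposed to clients.
		 */
	}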
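
[Editor's note: the SMART counter Florian refers to is the "Unsafe
Shutdowns" field of the NVMe SMART/Health Information log (log page
0x02, bytes 144-159), which smartmontools reports as "Unsafe
Shutdowns". A minimal user-space reader is sketched below, assuming a
Linux NVMe device node; it is an illustration, not a hardened tool, and
is not from the thread.]

	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/nvme_ioctl.h>

	int main(int argc, char **argv)
	{
		const char *dev = argc > 1 ? argv[1] : "/dev/nvme0";
		unsigned char log[512];

		int fd = open(dev, O_RDONLY);
		if (fd < 0) {
			perror(dev);
			return 1;
		}

		struct nvme_admin_cmd cmd;
		memset(&cmd, 0, sizeof(cmd));
		cmd.opcode = 0x02;            /* Get Log Page */
		cmd.nsid = 0xffffffff;        /* controller-wide */
		cmd.addr = (uintptr_t)log;
		cmd.data_len = sizeof(log);
		/* cdw10: log ID 0x02 (SMART/Health), NUMDL = dwords - 1 */
		cmd.cdw10 = 0x02 | (((sizeof(log) / 4) - 1) << 16);

		if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
			perror("NVME_IOCTL_ADMIN_CMD");
			return 1;
		}

		/*
		 * Unsafe Shutdowns is a 128-bit little-endian counter at
		 * bytes 144..159; the low 64 bits suffice in practice.
		 */
		uint64_t unsafe = 0;
		for (int i = 7; i >= 0; i--)
			unsafe = (unsafe << 8) | log[144 + i];
		printf("unsafe shutdowns: %llu\n", (unsigned long long)unsafe);

		close(fd);
		return 0;
	}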