Message-ID: <CAOQ4uxhC6L6whNyc6bs99ZcMRxMOt5xNR0HMKmJ8w1thXgO+zw@mail.gmail.com>
Date: Sat, 4 Jan 2020 23:16:04 +0200
From: Amir Goldstein <amir73il@...il.com>
To: Chris Down <chris@...isdown.name>
Cc: linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Al Viro <viro@...iv.linux.org.uk>,
Matthew Wilcox <willy@...radead.org>,
Jeff Layton <jlayton@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Tejun Heo <tj@...nel.org>,
linux-kernel <linux-kernel@...r.kernel.org>, kernel-team@...com,
Hugh Dickins <hughd@...gle.com>,
"zhengbin (A)" <zhengbin13@...wei.com>,
Linux MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v3 0/2] fs: inode: shmem: Reduce risk of inum overflow
On Fri, Jan 3, 2020 at 7:30 PM Chris Down <chris@...isdown.name> wrote:
>
> In Facebook production we are seeing heavy i_ino wraparounds on tmpfs.
> On affected tiers, in excess of 10% of hosts show multiple files with
> different content and the same inode number, with some servers even
> having as many as 150 duplicated inode numbers with differing file
> content.
>
> This causes actual, tangible problems in production. For example, we
> have complaints from those working on remote caches that their
> application is reporting cache corruptions because it uses (device,
> inodenum) to establish the identity of a particular cache object, but
> because it's not unique any more, the application refuses to continue
> and reports cache corruption. Even worse, sometimes applications may not
> even detect the corruption but may continue anyway, causing phantom and
> hard to debug behaviour.
>
> In general, userspace applications expect that (device, inodenum) should
> be enough to uniquely identify one inode, which seems fair enough.
> One might also need to check the generation, but in this case:
>
> 1. That's not currently exposed to userspace
> (ioctl(...FS_IOC_GETVERSION...) returns ENOTTY on tmpfs);
> 2. Even with generation, there shouldn't be two live inodes with the
> same inode number on one device.
>
> In order to mitigate this, we take a two-pronged approach:
>
> 1. Moving inum generation from being global to per-sb for tmpfs. This
> itself allows some reduction in i_ino churn. This works on both 64-
> and 32-bit machines.
> 2. Adding inode{64,32} for tmpfs. This fix is supported on machines with
> 64-bit ino_t only: users can mount tmpfs with a new inode64 option
> that uses the full width of ino_t, or enable it by default with
> CONFIG_TMPFS_INODE64.
>
> Chris Down (2):
> tmpfs: Add per-superblock i_ino support
> tmpfs: Support 64-bit inums per-sb
>
> Documentation/filesystems/tmpfs.txt | 11 ++++
> fs/Kconfig | 15 +++++
> include/linux/shmem_fs.h | 2 +
> mm/shmem.c | 97 ++++++++++++++++++++++++++++-
> 4 files changed, 124 insertions(+), 1 deletion(-)
>
CC the tmpfs maintainer, linux-mm, and Andrew Morton, who sends
most of the tmpfs patches to Linus.
Also worth mentioning these previous attempts by zhengbin, which tried to
address the same problem without the per-sb ino counter approach:
https://patchwork.kernel.org/patch/11254001/
https://patchwork.kernel.org/patch/11023915/
Thanks,
Amir.