Message-ID: <alpine.LSU.2.11.1911211154090.1697@eggly.anvils>
Date: Thu, 21 Nov 2019 12:07:43 -0800 (PST)
From: Hugh Dickins <hughd@...gle.com>
To: "J. R. Okajima" <hooanon05g@...il.com>
cc: Hugh Dickins <hughd@...gle.com>,
"zhengbin (A)" <zhengbin13@...wei.com>,
Matthew Wilcox <willy@...radead.org>, viro@...iv.linux.org.uk,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
houtao1@...wei.com, yi.zhang@...wei.com
Subject: Re: [PATCH] tmpfs: use ida to get inode number
On Thu, 21 Nov 2019, J. R. Okajima wrote:
> Hugh Dickins:
> > Internally (in Google) we do rely on good tmpfs inode numbers more
> > than on those of other get_next_ino() filesystems, and carry a patch
> > to mm/shmem.c for it to use 64-bit inode numbers (and separate inode
> > number space for each superblock) - essentially,
> >
> > 	ino = sbinfo->next_ino++;
> > 	/* Avoid 0 in the low 32 bits: might appear deleted */
> > 	if (unlikely((unsigned int)ino == 0))
> > 		ino = sbinfo->next_ino++;
>
> I agree with that "per superblock inum space", but I don't see your
> point. How can you manage it fully? I mean, how can you decide whether
> a new inum is already in use?
> For example,
> - you create a file which is assigned inum#10.
> - you or other people create and unlink over and over on the same tmpfs.
> - eventually sbinfo->next_ino wraps around to zero, which is skipped, ok.
> - and then it reaches 10 again.
> I don't think you want two inodes sharing the same inum.
64 bits. I haven't done the arithmetic to work out the amusing number,
but zhengbin mentioned the script taking 10 days to duplicate an inode
number in 32 bits, so: a larger number of years than I need to care about.
>
> Moreover, SysV SHM uses tmpfs, and shmget(2) overwrites the inum
> internally. It could be another source of a similar problem.
I was totally ignorant of that peculiarity in ipc/shm.c, thanks for
alerting me to it. But it doesn't affect what we're doing in tmpfs,
and apparently suits the users of SysV SHM: I don't see any need to
worry about it.
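For reference, the overwrite itself is easy to observe from userspace;
a minimal sketch (not from the thread; it assumes the inode column of
/proc/pid/maps reflects the mapped file's i_ino, so for a SysV segment
it should show the shm id rather than a tmpfs-assigned number):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
	char cmd[64];
	/* create a private SysV segment backed by the internal tmpfs */
	int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);

	if (id < 0 || shmat(id, NULL, 0) == (void *)-1) {
		perror("shmget/shmat");
		return 1;
	}
	printf("shmid: %d\n", id);
	/* the /SYSV mapping's inode field should equal the shmid above */
	snprintf(cmd, sizeof(cmd), "grep SYSV /proc/%d/maps", getpid());
	system(cmd);
	shmctl(id, IPC_RMID, NULL);
	return 0;
}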
Hugh