Message-ID: <CAOQ4uxhYY9Ep1ncpU+E3bWg4ZpR8pjvLJMA5vj+7frEJ2KTwsg@mail.gmail.com>
Date: Fri, 20 Dec 2019 19:35:38 +0200
From: Amir Goldstein <amir73il@...il.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Chris Down <chris@...isdown.name>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Al Viro <viro@...iv.linux.org.uk>,
Jeff Layton <jlayton@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Tejun Heo <tj@...nel.org>,
linux-kernel <linux-kernel@...r.kernel.org>, kernel-team@...com,
Hugh Dickins <hughd@...gle.com>,
Miklos Szeredi <miklos@...redi.hu>,
"zhengbin (A)" <zhengbin13@...wei.com>
Subject: Re: [PATCH] fs: inode: Reduce volatile inode wraparound risk when
ino_t is 64 bit

On Fri, Dec 20, 2019 at 6:46 PM Matthew Wilcox <willy@...radead.org> wrote:
>
> On Fri, Dec 20, 2019 at 03:41:11PM +0200, Amir Goldstein wrote:
> > Suggestion:
> > 1. Extend the kmem_cache API to let the ctor() know if it is
> > initializing an object
> > for the first time (new page) or recycling an object.
>
> Uh, what? The ctor is _only_ called when new pages are allocated.
> Part of the contract with the slab user is that objects are returned to
> the slab in an initialised state.
Right. I mixed up the ctor() with alloc_inode().
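(To spell out the distinction for myself -- a rough sketch, not the exact
mm/shmem.c code: the ctor passed to kmem_cache_create() runs once per object
when a new slab page is added to the cache, and is not re-run when a freed
object is handed out again, since freed objects must already be back in that
initialised state.)

	/*
	 * Sketch of the slab ctor contract for the shmem inode cache.
	 * init_once() is invoked only when the cache grows; recycled
	 * objects are expected to already be in this initialised state.
	 */
	static void init_once(void *foo)
	{
		struct shmem_inode_info *info = foo;

		inode_init_once(&info->vfs_inode);
	}

	shmem_inode_cachep = kmem_cache_create("shmem_inode_cache",
				sizeof(struct shmem_inode_info),
				0, SLAB_PANIC | SLAB_ACCOUNT, init_once);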
So is there anything stopping us from reusing an existing non-zero
value of i_ino in shmem_get_inode(), i.e. recycling shmem ino numbers?
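
Roughly what I have in mind (just a sketch, not a patch; it assumes the ctor
zeroes i_ino once when the object is first constructed, and that nothing on
the recycle path clears it afterwards):

	inode = new_inode(sb);
	if (inode) {
		/*
		 * Only burn a number from the shared get_next_ino()
		 * counter for a brand new slab object; a recycled object
		 * keeps the i_ino of its previous incarnation.
		 */
		if (!inode->i_ino)
			inode->i_ino = get_next_ino();
		...
	}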
Thanks,
Amir.