Message-ID: <CAMZfGtX+GkVf_7D8G+Ss32+wYy1bcMgDpT5FJDA=a9gdjW36-w@mail.gmail.com>
Date: Tue, 16 Nov 2021 16:26:12 +0800
From: Muchun Song <songmuchun@...edance.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Alexey Dobriyan <adobriyan@...il.com>, gladkov.alexey@...il.com,
LKML <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH 0/4] remove PDE_DATA()
On Tue, Nov 16, 2021 at 1:09 PM Andrew Morton <akpm@...ux-foundation.org> wrote:
>
> On Mon, 1 Nov 2021 17:35:14 +0800 Muchun Song <songmuchun@...edance.com> wrote:
>
> > I found a bug [1] some days ago, which was caused by using
> > inode->i_private to pass user private data. That does not work
> > on procfs, which instead provides a dedicated helper, PDE_DATA(),
> > for retrieving user private data. We can hide this detail by
> > storing PDE()->data into inode->i_private and removing PDE_DATA()
> > completely, so that users can get their private data from
> > inode->i_private just like debugfs does. This series removes
> > PDE_DATA().
>
> Why can't we do
>
> /*
>  * comment goes here
>  */
> static inline void *PDE_DATA(struct inode *inode)
> {
> 	return inode->i_private;
> }
>
> to abstract things a bit and to reduce the patch size?
>
> otoh, that upper-case thing needs to go, so the patch size remains the
> same anyway.
>
> And perhaps we should have a short-term
>
> #define PDE_DATA(i) pde_data(i)
Right. That is the easiest way to reduce the patch size.
Actually, I want to make every use of PDE_DATA() go away,
which is what makes this patch series so large.
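
Just to confirm we mean the same thing, the transitional step would
look roughly like the sketch below (untested, and it assumes the proc
inode setup path, e.g. proc_get_inode(), copies PDE()->data into
inode->i_private when the inode is instantiated):

/* lower-case replacement, reads the data cached in the inode */
static inline void *pde_data(const struct inode *inode)
{
	return inode->i_private;
}

/* short-term compatibility shim, dropped once all users are converted */
#define PDE_DATA(inode) pde_data(inode)

New PDE_DATA() instances that show up during the development cycle
would then keep building against the shim until they are converted.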
>
> because new instances are sure to turn up during the development cycle.
>
> But I can handle that by staging the patch series after linux-next and
> reminding myself to grep for new PDE_DATA instances prior to
> upstreaming.
I'd be happy if PDE_DATA() could be replaced with inode->i_private
everywhere. In that case, should I still introduce pde_data() and
do the above to make this series smaller?
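
For reference, a typical caller-side change would then be a one-liner;
here is a hypothetical seq_file open callback (foo_proc_open and
foo_proc_show are made-up names, only to show the shape of the
conversion):

static int foo_proc_open(struct inode *inode, struct file *file)
{
	/* was: single_open(file, foo_proc_show, PDE_DATA(inode)) */
	return single_open(file, foo_proc_show, inode->i_private);
}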
Thanks.