Message-ID: <15767273.MGizftpLG7@silver>
Date: Thu, 16 Jun 2022 22:14:16 +0200
From: Christian Schoenebeck <linux_oss@...debyte.com>
To: Dominique Martinet <asmadeus@...ewreck.org>
Cc: Eric Van Hensbergen <ericvh@...il.com>,
Latchesar Ionkov <lucho@...kov.net>,
David Howells <dhowells@...hat.com>,
linux-fsdevel@...r.kernel.org, stable@...r.kernel.org,
v9fs-developer@...ts.sourceforge.net, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] 9p: fix EBADF errors in cached mode
On Thursday, June 16, 2022 15:51:31 CEST Dominique Martinet wrote:
> Christian Schoenebeck wrote on Thu, Jun 16, 2022 at 03:35:59PM +0200:
> > 2. I fixed the conflict and gave your patch a test spin, and it triggers
> > the BUG_ON(!fid); that you added with that patch. Backtrace based on
>
> > 30306f6194ca ("Merge tag 'hardening-v5.19-rc3' ..."):
> hm, that's probably the version I sent without the fallback to
> private_data fid if writeback fid was sent (I've only commented without
> sending a v2)
Right, I forgot that you queued another version, sorry. With your already
queued patch (today's v2) that's fine now.
On Thursday, June 16, 2022 16:11:16 CEST Dominique Martinet wrote:
> Dominique Martinet wrote on Thu, Jun 16, 2022 at 10:51:31PM +0900:
> > > Did your patch work there for you? I mean I have not applied the other
> > > pending 9p patches, but they should not really make difference, right?
> > > I won't have time today, but I will continue to look at it tomorrow. If
> > > you already had some thoughts on this, that would be great of course.
> >
> > Yes, my version passes basic tests at least, and I could no longer
> > reproduce the problem.
>
> For what it's worth I've also tested a version of your patch:
>
> -----
> diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
> index a8f512b44a85..d0833fa69faf 100644
> --- a/fs/9p/vfs_addr.c
> +++ b/fs/9p/vfs_addr.c
> @@ -58,8 +58,21 @@ static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
>   */
>  static int v9fs_init_request(struct netfs_io_request *rreq, struct file *file)
>  {
> +        struct inode *inode = file_inode(file);
> +        struct v9fs_inode *v9inode = V9FS_I(inode);
>          struct p9_fid *fid = file->private_data;
> 
> +        BUG_ON(!fid);
> +
> +        /* we might need to read from a fid that was opened write-only
> +         * for read-modify-write of page cache, use the writeback fid
> +         * for that */
> +        if (rreq->origin == NETFS_READ_FOR_WRITE &&
> +            (fid->mode & O_ACCMODE) == O_WRONLY) {
> +                fid = v9inode->writeback_fid;
> +                BUG_ON(!fid);
> +        }
> +
>          refcount_inc(&fid->count);
>          rreq->netfs_priv = fid;
>          return 0;
> -----
>
> And this also seems to work alright.
>
> I was about to ask why the original code did writes with the writeback
> fid, but I'm noticing now the current code still does (through
> v9fs_vfs_write_folio_locked()), so that part hasn't changed from the old
> code, and init_request will only be getting reads? Which actually makes
> sense now I'm thinking about it because I recall David saying he's
> working on netfs writes now...
>
> So that minimal version is probably what we want, give or take style
> adjustments (only initializing inode/v9inode in the if case or not) -- I
> sure hope compilers optimize it away when not needed.
>
>
> I'll let you test one or both versions and will fixup the commit message
> again/credit you/resend if we go with this version, unless you want to
> send it.
>
> --
> Dominique
I tested all 3 variants today, and they were all behaving correctly (no EBADF
errors anymore, no other side effects observed).
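For reference, the access pattern that used to trigger the EBADF in cached mode is essentially what the patch comment describes: a partial write through a write-only descriptor, which forces netfs to read the surrounding page first. A minimal sketch of such a test is below (mount point, file name and cache option are only placeholders, not my exact setup):
-----
/* Reproducer sketch: a small write in the middle of an existing file,
 * done through an O_WRONLY descriptor on a 9p mount with caching
 * enabled (e.g. cache=loose), requires the page cache to be filled
 * from the server first (read-modify-write).
 * "/mnt/9p/testfile" stands for an existing, non-empty file.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        const char buf[] = "xx";
        int fd = open("/mnt/9p/testfile", O_WRONLY); /* write-only fid */

        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* partial write inside the first page: the page has to be read
         * from the server before the two bytes can be merged into it */
        if (pwrite(fd, buf, sizeof(buf) - 1, 100) < 0) {
                perror("pwrite"); /* previously failed with EBADF here */
                close(fd);
                return 1;
        }
        close(fd);
        return 0;
}
-----
With the fix applied, the read side of that write goes through a fid that is actually readable (the writeback fid), so the pwrite() above succeeds.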
The minimalistic version (i.e. your initial suggestion) performed 20% slower
in my tests, but that may simply be because it was the first version I tested,
i.e. caching on the host side might explain the difference. If necessary, I
can check the performance aspect more thoroughly.
Personally I would at least use the NETFS_READ_FOR_WRITE version, but that's
up to you. If in doubt, it might be worth clarifying this against David's plans.
Feel free to add my RB and TB tags to any of the 3 version(s) you end up
queuing:
Reviewed-by: Christian Schoenebeck <linux_oss@...debyte.com>
Tested-by: Christian Schoenebeck <linux_oss@...debyte.com>
Best regards,
Christian Schoenebeck