Message-ID: <e73a1e0c90ebce33c23f8f7746c23c1199f62a78@linux.dev>
Date: Wed, 10 Apr 2024 18:57:36 +0000
From: "Eric Van Hensbergen" <eric.vanhensbergen@...ux.dev>
To: "Oleg Nesterov" <oleg@...hat.com>
Cc: v9fs@...ts.linux.dev, linux-kernel@...r.kernel.org,
 kent.overstreet@...ux.dev
Subject: Re: [GIT PULL] fs/9p patches for 6.9 merge window

April 10, 2024 at 12:20 PM, "Eric Van Hensbergen" <eric.vanhensbergen@...ux.dev> wrote:
> April 8, 2024 at 9:14 AM, "Oleg Nesterov" <oleg@...hat.com> wrote:
> > Hello,
> >  the commit 724a08450f74 ("fs/9p: simplify iget to remove unnecessary paths")
> >  from this PR breaks my setup.
> > 
> 
> Thanks for the bisect and detailed reproduction instructions. I am looking at this now and it seems to be related to another problem reported by Kent Overstreet where he was seeing symlink loop reports that were disrupting his testing environment. Once I'm able to reproduce, I'll try and get a patch out later today, if not I may revert the commit to keep from disrupting folks' testing environments.
> 

Okay, I think I understand this one; unfortunately, if what I believe is correct, it doesn't appear to be obviously related to Kent's report.  I think I've reproduced the problem: fundamentally, it comes down to the fact that you are exporting two mount points together.  I believe we are getting an inode number collision which was being hidden by the "always create a new inode on lookup" inefficiency in v9fs_vfs_lookup.  You could probably verify that for me by stat'ing the /home directory and the / directory on the server side of your setup.  When I created two partitions and mounted one inside the other, "home" and "root" both had inode 2 and I got -ELOOP when trying to access them.
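
For reference, a quick server-side check along these lines should show whether the two exports collide (minimal userspace sketch; the paths are just my guess from your description, adjust as needed):

/* Do the two exported mount points report the same inode number on
 * different devices?  If so, they collapse to one file ID over 9p. */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	struct stat root_st, home_st;

	if (stat("/", &root_st) || stat("/home", &home_st)) {
		perror("stat");
		return 1;
	}
	printf("/     dev=%llu ino=%llu\n",
	       (unsigned long long)root_st.st_dev,
	       (unsigned long long)root_st.st_ino);
	printf("/home dev=%llu ino=%llu\n",
	       (unsigned long long)home_st.st_dev,
	       (unsigned long long)home_st.st_ino);
	if (root_st.st_ino == home_st.st_ino && root_st.st_dev != home_st.st_dev)
		printf("same inode number on different devices -> collision in a single export\n");
	return 0;
}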

The underlying cause in the patch series is that I was trying to maintain inodes rather than constantly creating and deleting them -- and I simplified the inode lookup to be based purely on inode number (versus checking against times, type, i_generation, etc.) -- so collisions are much more likely to happen.  If qemu detects that this is a possibility, it usually prints a warning along these lines:
qemu-system-aarch64: warning: 9p: Multiple devices detected in same VirtFS export, which might lead to file ID collisions and severe misbehaviours on guest! You should either use a separate export for each device shared from host or use virtfs option 'multidevs=remap'!

I can confirm that multidevs=remap in qemu does appear to avoid the problem, but that doesn't change the fact that we have a broader regression on our hands in something that used to work.  It'd be useful to see whether multidevs=remap also deals with your problem, @Kent; while I think it's the same underlying problem, I'm not sure it has the same solution.

I think I have to give up on relying on qid.path/inode-number as unique and maintain our own client view of those -- but this will cause problems for several parts of the code that currently need to look up an inode by a unique identifier from the server (which today is conveyed by qids).

Now that I understand the problem, I think I can work through a partial revert which goes back to a more complex match in vfs_lookup (one that examines several other fields from the server beyond qid.path) -- but maybe I can do it in a way that still avoids keeping duplicate inode structs around in memory.
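
For illustration, this is the kind of comparison I mean (userspace sketch only, not the actual fs/9p code; the field values below are made up, and which extra fields end up in the real match is still to be decided):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct qid {                    /* mirrors the 9p qid: type, version, path */
	uint8_t  type;
	uint32_t version;
	uint64_t path;
};

/* What the simplified lookup effectively keys on today: collides as soon
 * as two exported devices both hand out the same inode number (e.g. 2). */
static bool match_path_only(const struct qid *a, const struct qid *b)
{
	return a->path == b->path;
}

/* Closer to the old behaviour: qid.path alone isn't trusted as unique;
 * other server-provided fields have to agree as well. */
static bool match_full(const struct qid *a, const struct qid *b)
{
	return a->path == b->path && a->type == b->type &&
	       a->version == b->version;
}

int main(void)
{
	/* two different filesystems in one export, both rooted at inode 2 */
	struct qid root = { .type = 0x80, .version = 1, .path = 2 };
	struct qid home = { .type = 0x80, .version = 9, .path = 2 };

	printf("path-only match: %d, full match: %d\n",
	       match_path_only(&root, &home), match_full(&root, &home));
	return 0;
}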

This didn't show up in my regression tests because I always export a single file system, where the inode numbers are always unique.

Thanks for your patience while I work through this - now that I can reproduce the problem, I'm fairly certain I can get a patch together this week for you to test to see whether it solves the regressions.

     -eric
