Open Source and information security mailing list archives
Message-ID: <20241022150149.GA27397@willie-the-truck>
Date: Tue, 22 Oct 2024 16:01:49 +0100
From: Will Deacon <will@...nel.org>
To: asmadeus@...ewreck.org
Cc: Thorsten Leemhuis <regressions@...mhuis.info>, ericvh@...nel.org,
	lucho@...kov.net, Christian Brauner <brauner@...nel.org>,
	Alexander Viro <viro@...iv.linux.org.uk>, oss@...debyte.com,
	v9fs@...ts.linux.dev, linux-kernel@...r.kernel.org, oleg@...hat.com,
	keirf@...gle.com, regressions@...ts.linux.dev
Subject: Re: VFS regression with 9pfs ("Lookup would have caused loop")

Hi Dominique,

On Wed, Oct 16, 2024 at 06:39:28PM +0900, asmadeus@...ewreck.org wrote:
> Thorsten Leemhuis wrote on Tue, Oct 15, 2024 at 08:07:10PM +0200:
> > Thx for bringing this to my attention. I had hoped that Eric might reply
> > and waited a bit, but that did not happen. I kind of expected that, as
> > he seems to be somewhat afk, as the last mail from him on lore is from
> > mid-September; and in the weeks before that he did not post much either.
> > Hmmm. :-/
> 
> Right, I had hoped he'd find time to look further into this and kept my
> head in the sand, but it looks like we'll have to handle this somehow...
> 
> One note though: he did send a patch that seems related but wasn't sent
> for merge:
> https://lore.kernel.org/all/CAFkjPTn7JAbmKYASaeBNVpumOncPaReiPbc4Ph6ik3nNf8UTNg@mail.gmail.com/T/#u
>
> Will, perhaps you can try it? I'm pretty sure the setup to reproduce
> this is easy enough that I'd be able to reproduce it in less than an hour
> (export two tmpfs [sequential inode number fs] within the same 9p mount
> in qemu without 'multidevs=remap'), but I don't even have that time
> right now.
> 
> (I didn't even read the patch properly and it might not help at all,
> sorry in this case)

I think this patch landed upstream as d05dcfdf5e16 ("fs/9p: mitigate
inode collisions"), so I can confirm that it doesn't help with the
issue.

> > CCed Christian and Al, maybe they might be able to help directly or
> > indirectly somehow. If not, we likely need to get Linus involved to
> > decide if we want to at least temporarily revert the changes you mentioned.
> 
> I'm not sure this really needs to get Linus involved - it's breaking a
> server that used to work even if qemu has been printing a warning about
> these duplicate qid.path for a while, and the server really is the
> better place to remap these inodes as we have no idea of the underlying
> device id as far as I know...

FWIW, I'm not using QEMU at all. This is with kvmtool which, for better
or worse, prints no such diagnostic and used to be reliable enough with
whatever magic the kernel had prior to v6.9.

> So the question really just is whether we have, or can build, a workable
> alternative: can we reasonably do any better, or do we just want to
> live with the old behaviour?
> (Note that, as far as I understand, the old code isn't 100% "loop"-proof
> either: an open(O_CREAT)/mkdir/mknod could happen to get identical
> inode numbers as well, it's just less likely so folks haven't been
> hitting it.)
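To make the "server is the better place to remap" point above concrete, the essence of a multidevs=remap-style scheme is to fold (host device, host inode) pairs into qid.path values that are unique within one export. The following is a minimal illustrative sketch of that idea; the names and the linear-scan table are mine, not QEMU's actual implementation:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch of a 9p server remapping inodes: the guest sees a
 * single 9p device, so two host filesystems with overlapping inode
 * numbers must not produce the same qid.path. */
struct remap_entry { uint64_t dev, ino, path; };

static struct remap_entry table[256];
static size_t table_len;
static uint64_t next_path = 1;

/* Return a qid.path unique across host devices: reuse the existing
 * mapping if this (dev, ino) pair was seen before, otherwise hand out
 * a fresh value. A real server would use a hash table and handle
 * overflow; this sketch just does a linear scan. */
uint64_t remap_qid_path(uint64_t dev, uint64_t ino)
{
    for (size_t i = 0; i < table_len; i++)
        if (table[i].dev == dev && table[i].ino == ino)
            return table[i].path;
    table[table_len] = (struct remap_entry){ dev, ino, next_path };
    table_len++;
    return next_path++;
}
```

Without such remapping the server exposes the raw inode number alone, so two tmpfs instances exporting inode 42 each hand the client the same qid.path, which is exactly the collision the client-side loop check trips over.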

I'm happy to test patches if there's anything available, but otherwise
the reverts at least get us back to the old behaviour if nobody has time
to come up with something better.

Cheers,

Will
