Message-ID: <8734abgxfl.fsf@igalia.com>
Date: Fri, 01 Aug 2025 11:15:26 +0100
From: Luis Henriques <luis@...lia.com>
To: "Darrick J. Wong" <djwong@...nel.org>
Cc: Theodore Ts'o <tytso@....edu>,  Miklos Szeredi <miklos@...redi.hu>,
  Bernd Schubert <bschubert@....com>,  linux-fsdevel@...r.kernel.org,
  linux-kernel@...r.kernel.org
Subject: Re: [RFC] Another take at restarting FUSE servers

On Thu, Jul 31 2025, Darrick J. Wong wrote:

> On Thu, Jul 31, 2025 at 09:04:58AM -0400, Theodore Ts'o wrote:
>> On Tue, Jul 29, 2025 at 04:38:54PM -0700, Darrick J. Wong wrote:
>> > 
>> > Just speaking for fuse2fs here -- that would be kinda nifty if libfuse
>> > could restart itself.  It's unclear if doing so will actually enable us
>> > to clear the condition that caused the failure in the first place, but I
>> > suppose fuse2fs /does/ have e2fsck -fy at hand.  So maybe restarts
>> > aren't totally crazy.
>> 
>> I'm trying to understand what the failure scenario is here.  Is this
>> if the userspace fuse server (i.e., fuse2fs) has crashed?  If so, what
>> is supposed to happen with respect to open files, metadata and data
>> modifications which were in transit, etc.?  Sure, fuse2fs could run
>> e2fsck -fy, but if there are dirty inodes on the system, that's
>> potentially going to be out of sync, right?
>> 
>> What are the recovery semantics that we hope to be able to provide?
>
> <echoing what we said on the ext4 call this morning>
>
> With iomap, most of the dirty state is in the kernel, so I think the new
> fuse2fs instance would poke the kernel with FUSE_NOTIFY_RESTARTED, which
> would initiate GETATTR requests on all the cached inodes to validate
> that they still exist; and then resend all the unacknowledged requests
> that were pending at the time.  It might be the case that you have to do
> that in the reverse order; I only know enough about the design of fuse
> to suspect that to be true.
>
> Anyhow, once those are complete, I think we can resume operations with
> the surviving inodes.  The ones that fail the GETATTR revalidation are
> fuse_make_bad'd, which effectively revokes them.
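
To make that flow concrete, here is a rough sketch of the revalidation
pass described above.  It is purely illustrative: FUSE_NOTIFY_RESTARTED
does not exist yet, and the types and helpers below are made up for the
sketch (only fuse_make_bad() is a real kernel helper); the real code
would presumably walk the connection's cached inodes instead.

/*
 * Illustrative model only.  On a (hypothetical) FUSE_NOTIFY_RESTARTED,
 * revalidate every cached inode with GETATTR, revoke the ones that no
 * longer exist (the fuse_make_bad() case), then replay the requests
 * the old server never acknowledged.  As noted above, the two passes
 * might have to run in the opposite order.
 */
#include <stdbool.h>

struct cached_inode {
	unsigned long long	nodeid;
	bool			bad;		/* models fuse_make_bad() */
	struct cached_inode	*next;
};

struct pending_req {
	unsigned long long	unique;
	struct pending_req	*next;
};

/* Hypothetical transport helpers. */
bool send_getattr(unsigned long long nodeid);	/* false: inode is gone */
void resend_request(struct pending_req *req);

void handle_notify_restarted(struct cached_inode *inodes,
			     struct pending_req *unacked)
{
	struct cached_inode *ci;
	struct pending_req *req;

	for (ci = inodes; ci; ci = ci->next)
		if (!send_getattr(ci->nodeid))
			ci->bad = true;		/* revoked */

	for (req = unacked; req; req = req->next)
		resend_request(req);
}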

Ah, interesting! I have been playing a bit with sending LOOKUP requests,
but GETATTR is probably a better option.

So, are you currently working on any of this?  Are you implementing this
new NOTIFY_RESTARTED request?  I guess it's time for me to have a closer
look at fuse2fs too.
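
For what it's worth, reverse notifications go over /dev/fuse as a plain
write of a fuse_out_header with unique set to 0 and the notify code in
the error field, so a new NOTIFY_RESTARTED would presumably look
something like the sketch below (the opcode value is invented; it is
not in linux/fuse.h today):

/*
 * Sketch of how a restarted server might announce itself.  The
 * FUSE_NOTIFY_RESTARTED code is hypothetical; everything else follows
 * the existing notification convention (unique == 0, code in error).
 */
#include <linux/fuse.h>
#include <string.h>
#include <unistd.h>

#define FUSE_NOTIFY_RESTARTED	64	/* invented for this sketch */

int notify_restarted(int fuse_fd)
{
	struct fuse_out_header out;

	memset(&out, 0, sizeof(out));
	out.unique = 0;			/* 0 marks a notification */
	out.error  = FUSE_NOTIFY_RESTARTED;
	out.len    = sizeof(out);	/* no payload in this sketch */

	return write(fuse_fd, &out, sizeof(out)) == (ssize_t)sizeof(out) ? 0 : -1;
}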

Cheers,
-- 
Luís

> All of this of course relies on fuse2fs maintaining as little volatile
> state of its own as possible.  I think that means disabling the block
> cache in the unix io manager, and if we ever implemented delalloc then
> either we'd have to save the reservations somewhere or I guess you could
> immediately syncfs the whole filesystem to try to push all the dirty
> data to disk before we start allowing new free space allocations for new
> changes.
>
> --D
>
>>      	     	      		     	     - Ted
>> 
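
On the syncfs idea quoted above, here is a minimal sketch of flushing
the whole filesystem before new allocations are allowed again, assuming
whoever drives the restart holds (or can open) a descriptor inside the
mount; whether the kernel or the server would actually drive this is an
open question.

/*
 * Sketch only: push all dirty data for the mounted filesystem out
 * before new free-space allocations are permitted.  syncfs(2) is
 * Linux-specific and needs _GNU_SOURCE.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

int flush_mount(const char *mountpoint)
{
	int fd = open(mountpoint, O_RDONLY | O_DIRECTORY);
	int ret;

	if (fd < 0)
		return -1;
	ret = syncfs(fd);
	close(fd);
	return ret;
}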
