Open Source and information security mailing list archives
Message-ID: <9f79a2b7-c3f4-9c42-e6f3-f3c77f75afa2@fastmail.fm>
Date:   Wed, 7 Jun 2023 10:11:49 +0200
From:   Bernd Schubert <bernd.schubert@...tmail.fm>
To:     Miklos Szeredi <miklos@...redi.hu>
Cc:     Askar Safin <safinaskar@...il.com>,
        Luis Chamberlain <mcgrof@...nel.org>,
        linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-pm@...r.kernel.org,
        fuse-devel <fuse-devel@...ts.sourceforge.net>
Subject: Re: [PATCH 0/6] vfs: provide automatic kernel freeze / resume

On 6/7/23 09:21, Miklos Szeredi wrote:
> On Tue, 6 Jun 2023 at 22:18, Bernd Schubert <bernd.schubert@...tmail.fm> wrote:
>>
>>
>>
>> On 6/6/23 16:37, Miklos Szeredi wrote:
>>> On Sun, 14 May 2023 at 00:04, Askar Safin <safinaskar@...il.com> wrote:
>>>>
>>>> Will this patch fix a long-standing fuse vs suspend bug? (
>>>> https://bugzilla.kernel.org/show_bug.cgi?id=34932 )
>>>
>>> No.
>>>
>>> The solution to the fuse issue is to freeze processes that initiate
>>> fuse requests *before* freezing processes that serve fuse requests.
>>>
>>> The problem is finding out which is which.  This can be complicated by
>>> the fact that a process could be both serving requests *and*
>>> initiating them (even without knowing).
>>>
>>> The best idea so far is to let fuse servers set a process flag
>>> (PF_FREEZE_LATE) that is inherited across fork/clone.  For example the
>>> sshfs server would do the following before starting request processing
>>> or starting ssh:
>>>
>>>     echo 1 > /proc/self/freeze_late
>>>
>>> This would make the sshfs and ssh processes be frozen after processes
>>> that call into the sshfs mount.
>>
>> Hmm, why would this need to be done manually on the server (daemon)
>> side? It could be automated on the fuse kernel side, for example in
>> process_init_reply() using current task context?
> 
> Setting the flag for the current task wouldn't be sufficient, it would
> need to set it for all threads of a process.  Even that wouldn't work
> for e.g. sshfs, which forks off ssh before starting request
> processing.

Assuming a fuse server process does not hand off requests to other 
threads/forked processes, isn't the main issue that all fuse server 
tasks get frozen and none is left to take requests? Wouldn't a single 
non-frozen thread be sufficient for that?


> 
> So I'd prefer setting this explicitly.   This could be done from
> libfuse, before starting threads.  Or, as in the case of sshfs, it
> could be done by the filesystem itself.

With a flag that should work; with my score proposal it would be difficult.

> 
>>
>> A slightly better version would give scores, the later the daemon/server
>> is created the higher its freezing score - would help a bit with stacked
>> fuse file systems, although not perfectly. For that struct task would
>> need to be extended, though.
> 
> If we can quiesce the top of the stack, then hopefully all the lower
> ones will also have no activity.   There could be special cases, but
> that would need to be dealt with in the fuse server itself.


Ah, when all non-flagged processes are frozen first, no I/O should come 
in. Yeah, that mostly works, but I wonder whether init/systemd might set 
that flag as well - and then you have an issue when fuse is on a file 
system used by systemd. My initial interest in fuse, long ago, was to 
use it as the root file system, and I still do that in some cases - I'm 
not sure a flag would be sufficient there. I think a freezing score 
would solve more issues.
Though it's probably better to go step by step - flag first, score can 
be added later.



Thanks,
Bernd
