Message-ID: <7666958f-6215-a8eb-3412-b613158406db@virtuozzo.com>
Date:   Sun, 16 Jan 2022 15:44:21 +0300
From:   Vasily Averin <vvs@...tuozzo.com>
To:     "J. Bruce Fields" <bfields@...ldses.org>
Cc:     Trond Myklebust <trond.myklebust@...merspace.com>,
        Anna Schumaker <anna.schumaker@...app.com>, kernel@...nvz.org,
        linux-nfs@...r.kernel.org, linux-kernel@...r.kernel.org,
        Chuck Lever <chuck.lever@...cle.com>
Subject: Re: [PATCH v3 2/3] nfs4: handle async processing of F_SETLK with
 FL_SLEEP

On 03.01.2022 22:53, J. Bruce Fields wrote:
> On Wed, Dec 29, 2021 at 11:24:43AM +0300, Vasily Averin wrote:
>> nfsd and lockd use the F_SETLK cmd with the FL_SLEEP flag set to
>> request asynchronous processing of blocking locks.
>>
>> Currently nfs4 uses the locks_lock_inode_wait() function, which blocks
>> on such requests. To handle them correctly, the FL_SLEEP flag should be
>> temporarily cleared before calling locks_lock_inode_wait().
>>
>> Additionally, the block flag is forced on, to forward the blocking lock
>> to the remote NFS server, expecting that it supports async processing
>> of blocking locks too.
> 
> But this on its own isn't enough for the client to support asynchronous
> blocking locks, right?  Don't we also need the logic that calls knfsd's
> lm_notify when it gets a CB_NOTIFY_LOCK from the server?

No, I think this should be enough.
Here we are the NFS client; we can get F_SETLK with FL_SLEEP from nfsd
only (i.e. in the re-export case). We need to avoid blocking if the lock
is already taken, so we call locks_lock_inode_wait() without FL_SLEEP,
then we submit a _sleeping_ request to the NFS server (i.e. set
data->arg.block = 1) and wait for the reply from the server.

Here we rely on the server NOT blocking on such a request either, so the
reply to us will not be blocked either. By "block" I mean that the
handler can sleep or process the request for a very long time, but it
will NOT be blocked if the lock is already taken: it will NOT wait until
the lock is released, it just returns an error in this case.
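
In other words (only a simplified sketch of both halves, not the exact
patch text):

	/* 1) local conflict check in _nfs4_proc_setlk(): must not sleep */
	if ((fl_flags & FL_SLEEP) && IS_SETLK(cmd))
		request->fl_flags &= ~FL_SLEEP;
	status = locks_lock_inode_wait(state->inode, request);
	if (status < 0)
		goto out;		/* already taken: fail right away */

	/* 2) remote request in _nfs4_do_setlk(): mark it as blocking,
	 * the server is expected to process it asynchronously while we
	 * only wait for its reply */
	if (IS_SETLKW(cmd) || (fl->fl_flags & FL_SLEEP))
		data->arg.block = 1;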

I think this is correct.
Do you think I am wrong, or have I missed something?

Thank you,
	Vasily Averin

However, I have now noticed that the patch is incorrect: the temporarily
dropped FL_SLEEP flag should be restored in _nfs4_proc_setlk() before
the _nfs4_do_setlk() call. I'll fix it in the next version of this
patch set.
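
Roughly something like this (just a sketch of the fix I mean, reusing
the fl_flags copy that _nfs4_proc_setlk() already keeps; the exact
change will be in the next version):

	request->fl_flags |= FL_ACCESS;
	if ((fl_flags & FL_SLEEP) && IS_SETLK(cmd))
		request->fl_flags &= ~FL_SLEEP;

	status = locks_lock_inode_wait(state->inode, request);
	if (status < 0)
		goto out;
	...
	/* restore FL_SLEEP before going to the server, so that
	 * _nfs4_do_setlk() sees it and sets data->arg.block = 1 */
	request->fl_flags |= fl_flags & FL_SLEEP;
	status = _nfs4_do_setlk(state, cmd, request, NFS_LOCK_NEW);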

>> https://bugzilla.kernel.org/show_bug.cgi?id=215383
>> Signed-off-by: Vasily Averin <vvs@...tuozzo.com>
>> ---
>>  fs/nfs/nfs4proc.c | 5 ++++-
>>  1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
>> index ee3bc79f6ca3..9b1380c4223c 100644
>> --- a/fs/nfs/nfs4proc.c
>> +++ b/fs/nfs/nfs4proc.c
>> @@ -7094,7 +7094,7 @@ static int _nfs4_do_setlk(struct nfs4_state *state, int cmd, struct file_lock *f
>>  			recovery_type == NFS_LOCK_NEW ? GFP_KERNEL : GFP_NOFS);
>>  	if (data == NULL)
>>  		return -ENOMEM;
>> -	if (IS_SETLKW(cmd))
>> +	if (IS_SETLKW(cmd) || (fl->fl_flags & FL_SLEEP))
>>  		data->arg.block = 1;
>>  	nfs4_init_sequence(&data->arg.seq_args, &data->res.seq_res, 1,
>>  				recovery_type > NFS_LOCK_NEW);
>> @@ -7200,6 +7200,9 @@ static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock
>>  	int status;
>>  
>>  	request->fl_flags |= FL_ACCESS;
>> +	if (((fl_flags & (FL_SLEEP | FL_POSIX)) == (FL_SLEEP | FL_POSIX)) && IS_SETLK(cmd))
>> +		request->fl_flags &= ~FL_SLEEP;
>> +
>>  	status = locks_lock_inode_wait(state->inode, request);
>>  	if (status < 0)
>>  		goto out;
>> -- 
>> 2.25.1
