Message-ID: <f2933207-a48e-11a6-7dad-0081cae84e06@molgen.mpg.de>
Date:   Wed, 30 Dec 2020 17:54:13 +0100
From:   Donald Buczek <buczek@...gen.mpg.de>
To:     Hillf Danton <hdanton@...a.com>
Cc:     linux-xfs@...r.kernel.org, Brian Foster <bfoster@...hat.com>,
        Dave Chinner <david@...morbit.com>,
        LKML <linux-kernel@...r.kernel.org>, it+linux-xfs@...gen.mpg.de
Subject: Re: [PATCH] xfs: Wake CIL push waiters more reliably

On 30.12.20 03:46, Hillf Danton wrote:
> On Wed, 30 Dec 2020 00:56:27 +0100
>> Threads that committed items to the CIL wait in the xc_push_wait
>> waitqueue when space_used in the push context goes over a limit. These
>> threads need to be woken when the CIL is pushed.
>>
>> The CIL push worker tries to avoid the overhead of calling wake_up_all()
>> when there are no waiters waiting. It does so by checking the same
>> condition which caused the waits to happen. This, however, is
>> unreliable, because ctx->space_used can actually decrease when items are
>> recommitted. If the value goes below the limit while some threads are
>> already waiting but before the push worker gets to it, these threads are
>> not woken.
> 
> Looks like you are fixing a typo in c7f87f3984cf ("xfs: fix
> use-after-free on CIL context on shutdown") in mainline, and
> it may mean
> 
> 	/*
> 	 * Wake up any background push waiters now this context is being pushed
> 	 * if we are no longer over the space limit
> 	 */
> 
> given waiters throttled for consuming more space than the limit in
> xlog_cil_push_background().

I'm not sure I understand you correctly. Do you suggest updating the comment to "...if we are no longer over the space limit" and changing the code to `if (ctx->space_used < XLOG_CIL_BLOCKING_SPACE_LIMIT(log))`?

I don't think that would be correct.

The current push context is most probably still over the limit if it ever was over it. It would be exceptional for the few bytes that might be returned to ctx->space_used to bring it back below the limit. The new context, on the other hand, will start with space_used=0.

We need to resume any thread which is waiting for the push.
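
To spell out the interleaving, here is a stripped-down sketch of the two paths involved (simplified, not the literal code from fs/xfs/xfs_log_cil.c; locking details and unrelated work omitted):

	/* committer, xlog_cil_push_background(), simplified: */
	spin_lock(&cil->xc_push_lock);
	if (cil->xc_ctx->space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log)) {
		/* over the limit: go to sleep; xlog_wait() drops xc_push_lock */
		xlog_wait(&cil->xc_push_wait, &cil->xc_push_lock);
		return;
	}
	spin_unlock(&cil->xc_push_lock);

	/*
	 * Meanwhile, a recommit can return a few bytes, so ctx->space_used
	 * may drop back below XLOG_CIL_BLOCKING_SPACE_LIMIT(log) even though
	 * a thread is already asleep on xc_push_wait.
	 */

	/* CIL push worker, xlog_cil_push_work(), before this patch: */
	spin_lock(&cil->xc_push_lock);
	...
	if (ctx->space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log))	/* now false */
		wake_up_all(&cil->xc_push_wait);	/* sleeper is never woken */

The worker re-evaluates the condition at a later point in time than the waiters did, so the condition alone cannot tell it whether anybody went to sleep. waitqueue_active() under xc_push_lock answers exactly that question.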

Best
   Donald

>> Always wake all CIL push waiters. Test with waitqueue_active() as an
>> optimization. This is possible because we hold the xc_push_lock
>> spinlock, which prevents additions to the waitqueue.
>>
>> Signed-off-by: Donald Buczek <buczek@...gen.mpg.de>
>> ---
>>   fs/xfs/xfs_log_cil.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c
>> index b0ef071b3cb5..d620de8e217c 100644
>> --- a/fs/xfs/xfs_log_cil.c
>> +++ b/fs/xfs/xfs_log_cil.c
>> @@ -670,7 +670,7 @@ xlog_cil_push_work(
>>   	/*
>>   	 * Wake up any background push waiters now this context is being pushed.
>>   	 */
>> -	if (ctx->space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log))
>> +	if (waitqueue_active(&cil->xc_push_wait))
>>   		wake_up_all(&cil->xc_push_wait);
>>   
>>   	/*
>> -- 
>> 2.26.2
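
On the waitqueue_active() point quoted above: the test is safe without further synchronisation because xlog_wait() adds the waiter to xc_push_wait while still holding xc_push_lock and only drops the lock right before scheduling. Roughly (simplified from xfs_log_priv.h):

	static inline void
	xlog_wait(struct wait_queue_head *wq, struct spinlock *lock)
		__releases(lock)
	{
		DECLARE_WAITQUEUE(wait, current);

		add_wait_queue_exclusive(wq, &wait);	/* still under *lock */
		__set_current_state(TASK_UNINTERRUPTIBLE);
		spin_unlock(lock);
		schedule();
		remove_wait_queue(wq, &wait);
	}

Since the push worker holds xc_push_lock across both the waitqueue_active() test and the wake_up_all(), it cannot miss a waiter that is in the middle of going to sleep.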
