Date:   Thu, 2 Sep 2021 17:46:17 -0600
From:   Shuah Khan <skhan@...uxfoundation.org>
To:     Marco Elver <elver@...gle.com>
Cc:     tj@...nel.org, jiangshanlai@...il.com, akpm@...ux-foundation.org,
        andreyknvl@...il.com, dvyukov@...gle.com,
        walter-zh.wu@...iatek.com, linux-kernel@...r.kernel.org,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Shuah Khan <skhan@...uxfoundation.org>
Subject: Re: [PATCH v2] workqueue: Don't record workqueue stack holding
 raw_spin_lock

On 9/2/21 3:58 PM, Marco Elver wrote:
> On Thu, 2 Sept 2021 at 22:01, Shuah Khan <skhan@...uxfoundation.org> wrote:
>>
>> When CONFIG_PROVE_RAW_LOCK_NESTING=y and CONFIG_KASAN are enabled,
>> kasan_record_aux_stack() runs into "BUG: Invalid wait context" when
>> it tries to allocate memory: it attempts to acquire a spinlock in the
>> page allocation code while holding the workqueue pool's raw_spinlock.
>>

[snip]

>> Fix it by calling kasan_record_aux_stack() only when
>> CONFIG_PROVE_RAW_LOCK_NESTING is not enabled. After exploring other
>> options, such as calling kasan_record_aux_stack() after releasing the
>> pool lock, this takes the least disruptive path of stubbing out the
>> record call to avoid nesting a spinlock inside a raw spinlock.
>>

[snip]

>>
>> Fixes: e89a85d63fb2 ("workqueue: kasan: record workqueue stack")
>> Signed-off-by: Shuah Khan <skhan@...uxfoundation.org>
>> ---
>> Changes since v1:
>> -- Instead of changing when record happens, disable record
>>     when CONFIG_PROVE_RAW_LOCK_NESTING=y
>>
>>   kernel/workqueue.c | 10 +++++++++-
>>   1 file changed, 9 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
>> index f148eacda55a..435970ef81ae 100644
>> --- a/kernel/workqueue.c
>> +++ b/kernel/workqueue.c
>> @@ -1328,8 +1328,16 @@ static void insert_work(struct pool_workqueue *pwq, struct work_struct *work,
>>   {
>>          struct worker_pool *pool = pwq->pool;
>>
>> -       /* record the work call stack in order to print it in KASAN reports */
>> +       /*
>> +        * Record the work call stack in order to print it in KASAN reports.
>> +        * Doing this when CONFIG_PROVE_RAW_LOCK_NESTING is enabled results
>> +        * in nesting a raw spinlock with the page allocation spinlock.
>> +        *
>> +        * Avoid recording when CONFIG_PROVE_RAW_LOCK_NESTING is enabled.
>> +        */
>> +#if !defined(CONFIG_PROVE_RAW_LOCK_NESTING)
> 
> Just "if (!IS_ENABLED(CONFIG_PROVE_RAW_LOCK_NESTING))" should work
> here, however...
> 

Yes. That would work.

> ... PROVE_RAW_LOCK_NESTING exists for PREEMPT_RT's benefit. I don't
> think silencing the debugging tool is the solution, because the bug
> still exists in a PREEMPT_RT kernel.
> 

This silencing is limited in scope to just insert_work(), and only when
PROVE_RAW_LOCK_NESTING is enabled. Please see below under your proposed
option 2.

> +Cc Sebastian for advice. I may have missed something obvious. :-)
> 

Thanks for adding Sebastian.

> I have a suspicion that kasan_record_aux_stack() (via
> stack_depot_save()) is generally unsound on PREEMPT_RT kernels,
> because allocating memory cannot be done within raw-locked critical
> sections because memory allocation is preemptible on RT. Even using
> GFP_NOWAIT/GFP_ATOMIC doesn't help (which kasan_record_aux_stack() uses).
> 
> It follows that if we do not know what type of locks may be held when
> calling kasan_record_aux_stack(), we have a bug on RT.
> 
> I see 3 options:
> 
> 1. Try to move kasan_record_aux_stack() where no raw lock is held.
> (Seems complicated per v1 attempt?)
> 

Yes. kasan_record_aux_stack() is best called from insert_work(), prior
to inserting the work. That makes moving it difficult: we don't want to
release the pool lock just to record the stack.

> But ideally we make kasan_record_aux_stack() more robust on RT:
> 
> 2. Make kasan_record_aux_stack() a no-op on RT (and if
> PROVE_RAW_LOCK_NESTING). Perhaps overkill?
> 

I considered it and didn't go down that route because it is a big
hammer. I chose to disable the debug code in the insert_work() path
instead. Not ideal, but it limits the disable to a narrower scope.
Making kasan_record_aux_stack() itself a no-op would extend the
disable to all other paths where kasan_record_aux_stack() is used.

> 3. Try to not allocate memory in stackdepot. Not sure this is feasible
> without telling stackdepot to preallocate the max slabs on boot if RT.
> 

We could. I have to ask, though: how many real-world use cases do we
want to impact just so the debug code can work?

> Anything else? Because I don't think any of the options are satisfying.
> 

One option to consider is doing a dry-run invalid-nesting check in
kasan_record_aux_stack() and bailing out if it would trip.

thanks,
-- Shuah
