Message-ID: <edcc6397-6cf9-a629-56bd-8f3bd779d1bd@gmail.com>
Date:   Sat, 11 Aug 2018 10:50:33 +0800
From:   Jia-Ju Bai <baijiaju1990@...il.com>
To:     Steven Rostedt <rostedt@...dmis.org>
Cc:     peterz@...radead.org, mingo@...hat.com, will.deacon@....com,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] kernel: locking: rtmutex: Fix a possible
 sleep-in-atomic-context bug in rt_mutex_handle_deadlock()



On 2018/8/11 10:44, Steven Rostedt wrote:
> On Sat, Aug 11, 2018 at 10:35:24AM +0800, Jia-Ju Bai wrote:
>> The rtmutex code may sleep while holding a spinlock.
>>
>> The function call paths (from bottom to top) in Linux-4.16 are:
>>
>> [FUNC] schedule
>> kernel/locking/rtmutex.c, 1223:
>> 	schedule in rt_mutex_handle_deadlock
>> kernel/locking/rtmutex.c, 1273:
>> 	rt_mutex_handle_deadlock in rt_mutex_slowlock
>> kernel/locking/rtmutex.c, 1249:
>> 	_raw_spin_lock_irqsave in rt_mutex_slowlock
>>
>> To fix the bug, the spinlock is released before schedule() and re-acquired afterwards.
>> This bug was found by my static analysis tool (DSAC).
>>
>> Signed-off-by: Jia-Ju Bai <baijiaju1990@...il.com>
>> ---
>>   kernel/locking/rtmutex.c | 6 ++++--
>>   1 file changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
>> index 2823d4163a37..af03e162f812 100644
>> --- a/kernel/locking/rtmutex.c
>> +++ b/kernel/locking/rtmutex.c
>> @@ -1205,7 +1205,7 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state,
>>   }
>>   
>>   static void rt_mutex_handle_deadlock(int res, int detect_deadlock,
>> -				     struct rt_mutex_waiter *w)
>> +				     struct rt_mutex_waiter *w, struct rt_mutex *lock)
>>   {
>>   	/*
>>   	 * If the result is not -EDEADLOCK or the caller requested
>> @@ -1219,8 +1219,10 @@ static void rt_mutex_handle_deadlock(int res, int detect_deadlock,
>>   	 */
>>   	rt_mutex_print_deadlock(w);
>>   	while (1) {
>> +		raw_spin_unlock_irq(&lock->wait_lock);
>>   		set_current_state(TASK_INTERRUPTIBLE);
>>   		schedule();
>> +		raw_spin_lock_irq(&lock->wait_lock);
>>   	}
> If you look at the code you will notice that it stops the task and never lets
> it continue. Ever.
>
> If we hit this path, it means we are in a deadlock scenario and will not make
> any forward progress.
>
> If anything, it should simply be:
>
> 	rt_mutex_print_deadlock(w);
> +	/* We're not going anywhere, release the wait_lock */
> +	raw_spin_unlock_irq(&lock->wait_lock);
> 	while (1) {
> 		set_current_state(TASK_INTERRUPTIBLE);
> 		schedule();
> 	}

Thanks for your reply :)

Okay, I will send a V2 patch.
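
For reference, a rough sketch of what such a V2 could look like, combining the
extra "lock" parameter from my patch with the unlock placement you suggested.
This is only an illustration, not the final patch; the call site update in
rt_mutex_slowlock() is my assumption of what the remaining hunk would contain:

	static void rt_mutex_handle_deadlock(int res, int detect_deadlock,
					     struct rt_mutex_waiter *w,
					     struct rt_mutex *lock)
	{
		/*
		 * If the result is not -EDEADLOCK or the caller requested
		 * deadlock detection, nothing to do here.
		 */
		if (res != -EDEADLOCK || detect_deadlock)
			return;

		/*
		 * Yell loudly and stop the task right here.
		 */
		rt_mutex_print_deadlock(w);
		/* We're not going anywhere, release the wait_lock */
		raw_spin_unlock_irq(&lock->wait_lock);
		while (1) {
			set_current_state(TASK_INTERRUPTIBLE);
			schedule();
		}
	}

	/* Sketch of the caller change in rt_mutex_slowlock() (assumed hunk): */
	rt_mutex_handle_deadlock(ret, chwalk, &waiter, lock);

Since the function never returns once it reaches the loop, dropping wait_lock a
single time before parking the task should be sufficient, and the caller's later
unlock is only reached on the non-deadlock path.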


Best wishes,
Jia-Ju Bai
