Date:	Wed, 19 Jun 2013 11:30:21 +0530
From:	Akhil Goyal <akhil.goyal@...escale.com>
To:	Arnd Bergmann <arnd@...db.de>
CC:	<gregkh@...uxfoundation.org>, Steven Rostedt <rostedt@...dmis.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	<linux-kernel@...r.kernel.org>, <pankaj.chauhan@...escale.com>
Subject: Re: [PATCH 1/5] drivers/misc: Support for RF interface device framework

On 6/18/2013 7:10 PM, Arnd Bergmann wrote:
> On Tuesday 18 June 2013, Akhil Goyal wrote:
>> On 6/18/2013 2:58 AM, Arnd Bergmann wrote:
>>>> +	/*
>>>> +	 * Spin_locks are changed to mutexes if PREEMPT_RT is enabled,
>>>> +	 * i.e. they can sleep. This is a problem for us because
>>>> +	 * add_wait_queue()/wake_up_all() take the wait queue spin lock.
>>>> +	 * Since spin locks can sleep with PREEMPT_RT, wake_up_all() cannot be
>>>> +	 * called from rf_notify_dl_tti (which is called in interrupt context).
>>>> +	 * As a workaround, wait_q_lock is used for protecting the wait_q and
>>>> +	 * add_wait_queue_locked()/ wake_up_locked() functions of wait queues
>>>> +	 * are used.
>>>> +	 */
>>>> +	raw_spin_lock_irqsave(&rf_dev->wait_q_lock, flags);
>>>> +	__add_wait_queue_tail_exclusive(&rf_dev->wait_q, &wait);
>>>> +	raw_spin_unlock_irqrestore(&rf_dev->wait_q_lock, flags);
>>>> +	set_current_state(TASK_INTERRUPTIBLE);
>>>> +	/* Now wait here, the TTI notification will wake us up */
>>>> +	schedule();
>>>> +	set_current_state(TASK_RUNNING);
>>>> +	raw_spin_lock_irqsave(&rf_dev->wait_q_lock, flags);
>>>> +	__remove_wait_queue(&rf_dev->wait_q, &wait);
>>>> +	raw_spin_unlock_irqrestore(&rf_dev->wait_q_lock, flags);
>>>
>>> This is not a proper method of waiting for an event. Why can't you
>>> use wait_event() here?
>> wait_event()/wake_up() internally take the wait queue spin lock via
>> spin_lock_irqsave(), and the wake-up side is called in hard IRQ context
>> with PREEMPT_RT enabled (IRQF_NODELAY set). So wait_event() cannot be
>> used.
>> This problem can be solved if we can get the following patch applied to
>> the tree:
>> https://patchwork.kernel.org/patch/2161261/
>
> I see. How about using wait_event here then and adding a comment about
> the RT kernel?
We can change it to wait_event(), but the problem is that the ISR in the
Antenna Controller driver will always run in hard IRQ context because of
its latency requirements. In that case we will always get the warning
"Trying to sleep in interrupt context".

Since we always require the PREEMPT_RT patch while working with the
Antenna Controller driver, and there is no use case for running it on a
non-RT kernel, maybe we can add a dependency on CONFIG_PREEMPT_RT in the
Kconfig of this framework/driver, along the lines of the sketch below.
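
Something like this (a rough sketch only; the config symbol
RF_DEV_FRAMEWORK is made up here, and the exact RT symbol may be spelled
PREEMPT_RT_FULL depending on the RT patch version):

config RF_DEV_FRAMEWORK
	tristate "RF interface device framework"
	depends on PREEMPT_RT
	help
	  Framework for RF interface devices such as the Antenna
	  Controller. Needs an RT kernel because the TTI interrupt
	  handler runs in hard IRQ context (IRQF_NODELAY).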

If the "Simple waitqueue implementation" patch from Steven Rostedt
<rostedt@...dmis.org> gets mainlined, then we can use simple wait queues
to clean this up, roughly as sketched below.
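
(Illustration only, assuming the simple waitqueue API roughly as it was
later merged into mainline as <linux/swait.h>; the rf_* names here are
made up, and the exact function names have shifted across versions,
e.g. swake_up() later became swake_up_one().)

#include <linux/swait.h>

static DECLARE_SWAIT_QUEUE_HEAD(rf_tti_wait_q);
static unsigned int rf_tti_count;

/* Reader side: sleep until the next TTI notification arrives. */
static int rf_wait_for_tti(unsigned int last_tti)
{
	/* swait's internal lock is a raw spinlock, so nothing sleeps here
	 * on the wake-up path, only the waiter itself. */
	return swait_event_interruptible(rf_tti_wait_q,
					 rf_tti_count != last_tti);
}

/* Called from rf_notify_dl_tti(), i.e. in hard IRQ context. */
static void rf_tti_notify(void)
{
	rf_tti_count++;
	swake_up(&rf_tti_wait_q);	/* takes only a raw lock internally */
}
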
>
>>> The explanation about the interrupt handler seems incorrect, since PREEMPT_RT
>>> also turns interrupt handlers into threads.
>> The interrupt handler has a real-time requirement and thus runs in
>> hard IRQ context with the IRQF_NODELAY flag. We get this interrupt
>> every millisecond.
>
> Ok. So there would be no problem without the RT patch set.
This driver always requires PREEMPT_RT to be enabled. As mentioned
above, I can add a dependency on CONFIG_PREEMPT_RT.
>
> IRQF_NODELAY is specific to the RT kernel, so you can change the wait_event
> function to something else in the same patch that adds this flag.
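
For completeness, this is roughly how the ISR registration is assumed to
look (a sketch; the handler and names are illustrative, and IRQF_NODELAY
exists only with the RT patch set, where it keeps the handler in hard
IRQ context instead of moving it to a threaded handler):

#include <linux/interrupt.h>

static irqreturn_t ac_tti_isr(int irq, void *dev_id)
{
	struct rf_dev *rf_dev = dev_id;

	/* Runs every millisecond in hard IRQ context; must not sleep. */
	rf_notify_dl_tti(rf_dev);

	return IRQ_HANDLED;
}

static int ac_request_tti_irq(struct rf_dev *rf_dev, unsigned int tti_irq)
{
	/* IRQF_NODELAY (RT patch only): do not thread this handler. */
	return request_irq(tti_irq, ac_tti_isr, IRQF_NODELAY,
			   "ac-tti", rf_dev);
}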


