Date:	Fri, 2 Oct 2015 17:40:12 -0500
From:	Grygorii Strashko <grygorii.strashko@...com>
To:	Linus Walleij <linus.walleij@...aro.org>
CC:	Alexandre Courbot <gnurou@...il.com>,
	Santosh Shilimkar <ssantosh@...nel.org>,
	Tony Lindgren <tony@...mide.com>,
	Linux-OMAP <linux-omap@...r.kernel.org>,
	Austin Schuh <austin@...oton-tech.com>,
	<philipp@...oton-tech.com>, <linux-rt-users@...r.kernel.org>,
	"linux-gpio@...r.kernel.org" <linux-gpio@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] gpio: omap: convert to use generic irq handler

On 10/02/2015 03:17 PM, Linus Walleij wrote:
> On Fri, Sep 25, 2015 at 12:28 PM, Grygorii Strashko
> <grygorii.strashko@...com> wrote:
>
>> This patch converts the TI OMAP GPIO driver to use a generic IRQ handler
>> instead of a chained IRQ handler. This way the OMAP GPIO driver will be
>> compatible with the RT kernel, where the handler will be forced threaded,
>> while on a non-RT kernel it will still execute in hard IRQ context.
>> As part of this change, the IRQ wakeup configuration is applied to the
>> GPIO bank IRQ, as it will now be under control of the IRQ PM core during
>> suspend.
>>
>> There are also additional benefits:
>>   - on an RT kernel there will be no more complaints about PM runtime usage
>>     in atomic context ("BUG: sleeping function called from invalid context");
>>   - GPIO bank IRQs will appear in /proc/interrupts and their usage
>>     statistics will be visible;
>>   - GPIO bank IRQs can be configured through the IRQ proc_fs interface and,
>>     as a result, can take part in IRQ balancing if needed;
>>   - GPIO bank IRQs will be under control of the IRQ PM core during
>>     suspend to RAM.
>>
>> Disadvantages:
>>   - additional runtime overhead, as the call chain down to
>>     omap_gpio_irq_handler() will be longer now;
>>   - the necessity to use wa_lock in omap_gpio_irq_handler() to work around
>>     the warning in handle_irq_event_percpu():
>>     WARNING: CPU: 1 PID: 35 at kernel/irq/handle.c:149 handle_irq_event_percpu+0x51c/0x638()
>>
>> This patch doesn't fully follow the recommendations provided by Sebastian
>> Andrzej Siewior [1], because it's required to go through and check all
>> GPIO IRQ pin states as fast as possible and pass control to handle_level_irq
>> or handle_edge_irq. handle_level_irq or handle_edge_irq will perform the
>> actions specific to the IRQ trigger type and wake up the corresponding
>> registered threaded IRQ handler (at least it's expected to be threaded).
>> IRQs can be lost if handle_nested_irq() is used, because the execution
>> time of some pin-specific GPIO IRQ handlers can be very significant, e.g.
>> when they require accessing external devices (I2C).
>>
>> The idea of such a rework was also discussed in [2].
>>
>> [1] http://www.spinics.net/lists/linux-omap/msg120665.html
>> [2] http://www.spinics.net/lists/linux-omap/msg119516.html
>>
>> Tested-by: Tony Lindgren <tony@...mide.com>
>> Tested-by: Austin Schuh <austin@...oton-tech.com>
>> Signed-off-by: Grygorii Strashko <grygorii.strashko@...com>
>
> Patch applied.
>

Thanks.

> I'm thinking that we need some recommendations on how to write
> IRQ handlers in order to be RT-compatible. Can you help me line
> up the requirements in Documentation/gpio/driver.txt?
>
> I will write an RFC patch and let you write some additional text
> to it in response, and then we can iterate on it a bit.

Sure. I'll try to help.


-- 
regards,
-grygorii
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/