Message-ID: <4D76119A.2030100@cam.ac.uk>
Date:	Tue, 08 Mar 2011 11:23:06 +0000
From:	Jonathan Cameron <jic23@....ac.uk>
To:	Thomas Gleixner <tglx@...utronix.de>
CC:	LKML <linux-kernel@...r.kernel.org>,
	"linux-iio@...r.kernel.org" <linux-iio@...r.kernel.org>
Subject: Re: Moving staging:iio over to threaded interrupts.

On 03/08/11 10:30, Thomas Gleixner wrote:
> On Thu, 3 Mar 2011, Jonathan Cameron wrote:
>> So to my mind two solutions exist.
>> 1) A single thread per trigger.  Everything prior to the workqueue
>> calls is moved into a handler that goes on the 'fast' list, which stays
>> in our top-half handler.  The workqueue bits are then called one after
>> another in the bottom half.
>>
>> 2) Allow each consumer to attach its own thread to the trigger
>> controller and basically implement our own variant of the core threaded
>> interrupt code that allows for a list of threads rather than a single one.
>>
>> I rather like the idea of 2.  It might even end up with different
>> devices being queried from different processor cores simultaneously
>> which is quite cute.  The question is whether the implementation can be
>> kept simple enough that the originators of the threaded interrupt code
>> would be happy with it (as it bypasses, or would need additions to, their
>> core code).
> 
> Don't implement another threading model. Look at the trigger irq as a
> demultiplexing interrupt. So if you have several consumers of a single
> trigger, then you can implement a pseudo irq_chip and register the sub
> devices as separate interrupts.
> 
> That means your main trigger interrupt would look like this:
> 
> irqreturn_t hardirq_handler(int irq, void *dev)
> {
>      iio_trigger_dev *idev = dev;
>      int i;
> 
>      store_state_as_necessary(idev);
> 
>      for (i = 0; i < idev->nr_subirqs; i++) {
>      	    if (idev->subirqs[i].enabled)
> 		generic_handle_irq(idev->subirq_base + i);
>      }
> 
>      return IRQ_HANDLED;
> }
> 
> And you'd have an irq_chip implementation which does:
> 
> static void subirq_mask(struct irq_data *d)
> {
>      iio_trigger_dev *idev = irq_data_get_irq_chip_data(d);
>      int idx = d->irq - idev->subirq_base;
> 
>      idev->subirqs[idx].enabled = false;
> }
> 
> static void subirq_unmask(struct irq_data *d)
> {
>      iio_trigger_dev *idev = irq_data_get_irq_chip_data(d);
>      int idx = d->irq - idev->subirq_base;
> 
>      idev->subirqs[idx].enabled = true;
> }
> 
> static struct irq_chip subirq_chip = {
>        .name = "iiochip",
>        .irq_mask = subirq_mask,
>        .irq_unmask = subirq_unmask,
> };
> 
> init()
> {
> 	for_each_subirq(i)
> 		irq_set_chip_and_handler(i, &subirq_chip, handle_simple_irq);
> }
> 
> So now you can request the interrupts for your subdevices with
> request_irq or request_threaded_irq.
> 
> You can also implement #1 this way, you just mark the sub device
> interrupts as IRQ_NESTED_THREAD, and then call the handlers from the
> main trigger irq thread.
Hi Thomas,
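
Just to check I've followed: the consumer side would then look roughly like
the sketch below?  Purely illustrative: iio_trigger_dev and subirq_base are
the names from your example, while my_iio_dev, iio_dev_poll_func() and the
attach helper are names I've made up, and all error handling is elided.

#include <linux/interrupt.h>

struct my_iio_dev {
	int irq;			/* hypothetical per-device state */
};

static irqreturn_t iio_dev_poll_func(int irq, void *private)
{
	/*
	 * private is the struct my_iio_dev passed to request_threaded_irq();
	 * read the device and push data to the buffer from thread context.
	 */
	return IRQ_HANDLED;
}

static int iio_dev_attach_to_trigger(struct my_iio_dev *st,
				     iio_trigger_dev *idev, int i)
{
	st->irq = idev->subirq_base + i;

	/*
	 * No primary handler needed: the demux already ran in the trigger's
	 * hard irq, so only the threaded part is requested (hence IRQF_ONESHOT).
	 */
	return request_threaded_irq(st->irq, NULL, iio_dev_poll_func,
				    IRQF_ONESHOT, "iio_dev", st);
}

Presumably, when sysfs points a device at a different trigger, it would
free_irq() its old sub-interrupt and request one from the new trigger's
range, which is where my question below comes in.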

One issue here that I'm not quite sure how to overcome is that the trigger-to-device
mapping tends to be dynamic.  That is, we quite often switch which device is driven
by which trigger at runtime, all done via text-label matching through sysfs.

I guess we could handle this with a spot of indirection and a pool of interrupts per
trigger (with compile-time control over how many); roughly what I have in mind is
sketched below.  Do any other approaches come to mind?
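
Something like this, perhaps (CONFIG_IIO_CONSUMERS_PER_TRIGGER is an invented
Kconfig option, idev is assumed to carry subirq_base, nr_subirqs and a
pool_used bitmap, subirq_chip and handle_simple_irq are from your sketch, and
error paths are elided):

#include <linux/irq.h>
#include <linux/bitmap.h>

#define IIO_SUBIRQS_PER_TRIGGER	CONFIG_IIO_CONSUMERS_PER_TRIGGER

/* Allocate a fixed pool of demux interrupts when a trigger is registered. */
static int iio_trigger_setup_pool(iio_trigger_dev *idev)
{
	int i, base;

	base = irq_alloc_descs(-1, 0, IIO_SUBIRQS_PER_TRIGGER, 0);
	if (base < 0)
		return base;

	idev->subirq_base = base;
	idev->nr_subirqs = IIO_SUBIRQS_PER_TRIGGER;

	for (i = 0; i < idev->nr_subirqs; i++) {
		irq_set_chip_data(idev->subirq_base + i, idev);
		irq_set_chip_and_handler(idev->subirq_base + i, &subirq_chip,
					 handle_simple_irq);
	}
	return 0;
}

/*
 * A device being pointed at this trigger via sysfs grabs a spare
 * sub-interrupt from the pool (and clears the bit again when it moves away).
 */
static int iio_trigger_get_subirq(iio_trigger_dev *idev)
{
	int i = find_first_zero_bit(idev->pool_used, idev->nr_subirqs);

	if (i >= idev->nr_subirqs)
		return -ENOSPC;
	set_bit(i, idev->pool_used);
	return idev->subirq_base + i;
}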

Thanks,

Jonathan