Message-ID: <3974817ea942f616b77450914aa23b181b062d87.camel@redhat.com>
Date: Thu, 24 Jun 2021 13:13:47 +0200
From: Nicolas Saenz Julienne <nsaenzju@...hat.com>
To: Andy Shevchenko <andy.shevchenko@...il.com>
Cc: Jonathan Cameron <jic23@...nel.org>,
Lars-Peter Clausen <lars@...afoo.de>,
linux-iio <linux-iio@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Matt Ranostay <matt.ranostay@...sulko.com>
Subject: Re: [PATCH] iio: chemical: atlas-sensor: Avoid using irq_work
Hi Andy, thanks for the review.
On Thu, 2021-06-24 at 13:39 +0300, Andy Shevchenko wrote:
> On Thu, Jun 24, 2021 at 1:01 PM Nicolas Saenz Julienne
> <nsaenzju@...hat.com> wrote:
> >
> > The atlas sensor driver currently registers a threaded IRQ handler whose
> > sole responsibility is to trigger an irq_work which will in turn run
> > iio_trigger_poll() in IRQ context.
> >
> > This seems overkill given the fact that there already was a opportunity
>
> an opportunity
Thanks, noted.
> > @@ -474,7 +465,7 @@ static irqreturn_t atlas_interrupt_handler(int irq, void *private)
> > struct iio_dev *indio_dev = private;
> > struct atlas_data *data = iio_priv(indio_dev);
> >
> > - irq_work_queue(&data->work);
> > + iio_trigger_poll(data->trig);
>
> Have you considered dropping atlas_interrupt_trigger_ops() altogether?
Not really, but it makes sense as a separate patch. I'll take care of it.
>
> > if (client->irq > 0) {
> > /* interrupt pin toggles on new conversion */
> > ret = devm_request_threaded_irq(&client->dev, client->irq,
>
> > - NULL, atlas_interrupt_handler,
> > + atlas_interrupt_handler, NULL,
>
> So, you move it from threaded IRQ to be a hard IRQ handler (we have a
> separate call for this).
Noted.
> Can you guarantee that handling of those events will be fast enough?
Do you mean the events triggered in iio_trigger_poll()? If so, the amount of
time spent in IRQ context is going to be the same regardless of whether it's
handled directly from atlas' IRQ or later from the irq_work IPI (or softirq
context on some weird platforms).
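
For reference, the net effect of the patch plus Andy's suggestion can be
sketched roughly as below. This is a simplified kernel-C sketch, not the
literal driver code: the IRQF_TRIGGER_RISING flag and the "atlas_irq" name
are assumptions for illustration, and error handling is elided.

```c
/* Before this patch, the threaded handler's only job was to queue an
 * irq_work that ran iio_trigger_poll() in hard-IRQ context anyway.
 * Calling iio_trigger_poll() directly from a hard-IRQ handler removes
 * that indirection without changing the context the work runs in. */
static irqreturn_t atlas_interrupt_handler(int irq, void *private)
{
	struct iio_dev *indio_dev = private;
	struct atlas_data *data = iio_priv(indio_dev);

	iio_trigger_poll(data->trig);

	return IRQ_HANDLED;
}

/* With no threaded half left, plain devm_request_irq() (the "separate
 * call" Andy refers to) fits better than devm_request_threaded_irq()
 * with a NULL thread function: */
ret = devm_request_irq(&client->dev, client->irq,
		       atlas_interrupt_handler,
		       IRQF_TRIGGER_RISING,	/* assumed trigger type */
		       "atlas_irq",		/* illustrative name */
		       indio_dev);
```
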
--
Nicolás Sáenz