Message-ID: <CAHp75Vf7+XY8rnrbMfMgNO25EHSemjZVUgvFFp+zvj4vvJ1B8g@mail.gmail.com>
Date: Thu, 5 Dec 2019 00:25:46 +0200
From: Andy Shevchenko <andy.shevchenko@...il.com>
To: Bartosz Golaszewski <brgl@...ev.pl>
Cc: Kent Gibson <warthog618@...il.com>,
Linus Walleij <linus.walleij@...aro.org>,
"open list:GPIO SUBSYSTEM" <linux-gpio@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Bartosz Golaszewski <bgolaszewski@...libre.com>
Subject: Re: [PATCH v2 07/11] gpiolib: rework the locking mechanism for
lineevent kfifo
On Wed, Dec 4, 2019 at 6:01 PM Bartosz Golaszewski <brgl@...ev.pl> wrote:
>
> From: Bartosz Golaszewski <bgolaszewski@...libre.com>
>
> The read_lock mutex is supposed to prevent collisions between reading
> and writing to the line event kfifo but it's actually only taken when
> the events are being read from it.
>
> Drop the mutex entirely and reuse the spinlock made available to us in
> the waitqueue struct. Take the lock whenever the fifo is modified or
> inspected. Drop the call to kfifo_to_user() and instead first extract
> the new element from kfifo when the lock is taken and only then pass
> it on to the user after the spinlock is released.
>
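For reference, my reading of the read path described above, as a sketch
(not the actual hunk; "buf" is the usual read() buffer argument, the rest
matches the quoted code):

	struct gpioevent_data event;
	ssize_t ret;

	/* Pop one event with the waitqueue spinlock held... */
	spin_lock(&le->wait.lock);
	if (kfifo_is_empty(&le->events)) {
		spin_unlock(&le->wait.lock);
		return -EAGAIN;	/* or sleep, depending on O_NONBLOCK */
	}
	ret = kfifo_get(&le->events, &event);
	spin_unlock(&le->wait.lock);
	if (!ret)
		return -EIO;

	/* ...and copy it to userspace only after the lock is dropped. */
	if (copy_to_user(buf, &event, sizeof(event)))
		return -EFAULT;

	return sizeof(event);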
My comments below.
> + spin_lock(&le->wait.lock);
> if (!kfifo_is_empty(&le->events))
> events = EPOLLIN | EPOLLRDNORM;
> + spin_unlock(&le->wait.lock);
Sounds like a candidate for a kfifo_is_empty_spinlocked() helper.
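Something along these lines, mirroring the existing kfifo_in_spinlocked() /
kfifo_out_spinlocked() macros (the name and exact form are only a
suggestion, not an existing API):

	#define kfifo_is_empty_spinlocked(fifo, lock)		\
	({							\
		unsigned long __flags;				\
		bool __ret;					\
		spin_lock_irqsave(lock, __flags);		\
		__ret = kfifo_is_empty(fifo);			\
		spin_unlock_irqrestore(lock, __flags);		\
		__ret;						\
	})

Then the hunk above collapses into a single
kfifo_is_empty_spinlocked(&le->events, &le->wait.lock) call (the irqsave
form is for generality; a _noirqsave variant would match the plain
spin_lock() used here).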
> struct lineevent_state *le = filep->private_data;
> - unsigned int copied;
> + struct gpioevent_data event;
> int ret;
> + if (count < sizeof(event))
> return -EINVAL;
This still has an issue with compat syscalls. See the patch I sent recently.
I'm not sure which way you consider better: a) apply mine and rebase your
series on top of it, or b) the other way around.
I can do b) if you think it shouldn't be backported.
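(For context, the mismatch comes from the layout of the v1 event struct in
include/uapi/linux/gpio.h; the sizes below are for x86 and are my summary
of the problem, not a quote from that patch:)

	struct gpioevent_data {
		__u64 timestamp;
		__u32 id;
	};

	/*
	 * On a 64-bit kernel the __u64 member forces 8-byte alignment and
	 * tail padding, so sizeof() is 16; i386 userspace sees 12. A strict
	 * "count < sizeof(event)" check therefore rejects reads from compat
	 * tasks even though their buffer is big enough for their ABI.
	 */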
Btw, either way there is a benefit for the following patch (I see you
drop kfifo_to_user() and add an event variable on the stack).
> + return sizeof(event);
Also see comments in my patch regarding the event handling.
--
With Best Regards,
Andy Shevchenko