lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:   Thu, 5 Dec 2019 10:31:20 +0100
From:   Bartosz Golaszewski <bgolaszewski@...libre.com>
To:     Andy Shevchenko <andy.shevchenko@...il.com>
Cc:     Bartosz Golaszewski <brgl@...ev.pl>,
        Kent Gibson <warthog618@...il.com>,
        Linus Walleij <linus.walleij@...aro.org>,
        "open list:GPIO SUBSYSTEM" <linux-gpio@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 07/11] gpiolib: rework the locking mechanism for
 lineevent kfifo

śr., 4 gru 2019 o 23:25 Andy Shevchenko <andy.shevchenko@...il.com> napisał(a):
>
> On Wed, Dec 4, 2019 at 6:01 PM Bartosz Golaszewski <brgl@...ev.pl> wrote:
> >
> > From: Bartosz Golaszewski <bgolaszewski@...libre.com>
> >
> > The read_lock mutex is supposed to prevent collisions between reading
> > and writing to the line event kfifo but it's actually only taken when
> > the events are being read from it.
> >
> > Drop the mutex entirely and reuse the spinlock made available to us in
> > the waitqueue struct. Take the lock whenever the fifo is modified or
> > inspected. Drop the call to kfifo_to_user() and instead first extract
> > the new element from kfifo when the lock is taken and only then pass
> > it on to the user after the spinlock is released.
> >
>
> My comments below.
>
> > +       spin_lock(&le->wait.lock);
> >         if (!kfifo_is_empty(&le->events))
> >                 events = EPOLLIN | EPOLLRDNORM;
> > +       spin_unlock(&le->wait.lock);
>
> Sound like a candidate to have kfifo_is_empty_spinlocked().

Yeah, I noticed but I thought I'd just add it later separately - it's
always easier to merge self-contained series.

>
>
> >         struct lineevent_state *le = filep->private_data;
> > -       unsigned int copied;
> > +       struct gpioevent_data event;
> >         int ret;
>
> > +       if (count < sizeof(event))
> >                 return -EINVAL;
>
> This still has an issue with compatible syscalls. See patch I have
> sent recently.
> I dunno how you see is the better way: a) apply mine and rebase your
> series, or b) otherwise.
> I can do b) if you think it shouldn't be backported.
>

Looking at your patch it seems to me it's best to rebase yours on top
of this one - where I simply do copy_to_user() we can add a special
case for 32-bit user-space. I can try to do this myself for v3 if you
agree.

Bart

> Btw, either way we have a benifits for the following one (I see you
> drop kfifo_to_user() and add event variable on stack).
>
> > +       return sizeof(event);
>
> Also see comments in my patch regarding the event handling.
>
> --
> With Best Regards,
> Andy Shevchenko

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ