Message-ID: <Y4Y5BjTwVCF5bAn5@smile.fi.intel.com>
Date: Tue, 29 Nov 2022 18:53:26 +0200
From: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
To: Bartosz Golaszewski <brgl@...ev.pl>
Cc: Kent Gibson <warthog618@...il.com>,
Linus Walleij <linus.walleij@...aro.org>,
linux-gpio@...r.kernel.org, linux-kernel@...r.kernel.org,
Bartosz Golaszewski <bartosz.golaszewski@...aro.org>
Subject: Re: [PATCH v3 2/2] gpiolib: protect the GPIO device against being
dropped while in use by user-space
On Tue, Nov 29, 2022 at 01:35:53PM +0100, Bartosz Golaszewski wrote:
> From: Bartosz Golaszewski <bartosz.golaszewski@...aro.org>
>
> While any of the GPIO cdev syscalls is in progress, the kernel can call
> gpiochip_remove() (for instance, when a USB GPIO expander is disconnected)
> which will set gdev->chip to NULL after which any subsequent access will
> cause a crash.
>
> To avoid that: use an RW-semaphore in which the syscalls take it for
> reading (so that we don't needlessly prohibit the user-space from calling
> syscalls simultaneously) while gpiochip_remove() takes it for writing so
> that it can only happen once all syscalls return.
...
I would do

	typedef __poll_t (*poll_fn)(struct file *, struct poll_table_struct *);

and so on, and use that type for the respective parameters.
BUT, since this is a fix, it's up to you which variant to choose.
> +static __poll_t call_poll_locked(struct file *file,
> + struct poll_table_struct *wait,
> + struct gpio_device *gdev,
> + __poll_t (*func)(struct file *,
> + struct poll_table_struct *))
> +{
> + __poll_t ret;
> +
> + down_read(&gdev->sem);
> + ret = func(file, wait);
> + up_read(&gdev->sem);
> +
> + return ret;
> +}
...
> + down_write(&gdev->sem);
+ Blank line?
> /* FIXME: should the legacy sysfs handling be moved to gpio_device? */
> gpiochip_sysfs_unregister(gdev);
> gpiochip_free_hogs(gc);
...
> gcdev_unregister(gdev);
+ Blank line?
> + up_write(&gdev->sem);
> put_device(&gdev->dev);
--
With Best Regards,
Andy Shevchenko