Message-ID: <CAMRc=Mf5cYtWxAVeMQmxwyoi9oxtVSidBQsdRV9H2E52H1TqKQ@mail.gmail.com>
Date: Mon, 30 Mar 2020 16:36:57 +0200
From: Bartosz Golaszewski <brgl@...ev.pl>
To: Linus Walleij <linus.walleij@...aro.org>
Cc: Srinivas Kandagatla <srinivas.kandagatla@...aro.org>,
Khouloud Touil <ktouil@...libre.com>,
Geert Uytterhoeven <geert@...ux-m68k.org>,
"open list:GPIO SUBSYSTEM" <linux-gpio@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Bartosz Golaszewski <bgolaszewski@...libre.com>
Subject: Re: [PATCH 2/3] gpiolib: use kref in gpio_desc
On Thu, 26 Mar 2020 at 21:50, Linus Walleij <linus.walleij@...aro.org> wrote:
>
> On Fri, Mar 13, 2020 at 3:47 PM Bartosz Golaszewski <brgl@...ev.pl> wrote:
> > On Thu, 12 Mar 2020 at 11:35, Linus Walleij <linus.walleij@...aro.org> wrote:
>
> > In this case I was thinking about a situation where we pass a
> > requested descriptor to some other framework (nvmem in this case)
> > which internally doesn't know anything about who manages this resource
> > externally. Now we can of course simply not do anything about it and
> > expect the user (who passed us the descriptor) to handle the resources
> > correctly. But what happens if the user releases the descriptor not on
> > driver detach but somewhere else, for whatever reason, without nvmem
> > knowing about it? nvmem may then try to use the descriptor, which
> > will now be invalid. Reference counting would help here IMHO.
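
To make the scenario above concrete, a minimal sketch of the pattern I
mean - this is hypothetical consumer code, not the actual nvmem
implementation, and the "wp" con_id is made up:

        struct nvmem_device *nvmem;
        struct gpio_desc *desc;
        struct nvmem_config config = { /* ... */ };

        /* Consumer requests the GPIO and hands it over to nvmem. */
        desc = gpiod_get(dev, "wp", GPIOD_OUT_HIGH);
        config.wp_gpio = desc;
        nvmem = nvmem_register(&config);

        /* Later the consumer releases the descriptor early... */
        gpiod_put(desc);

        /*
         * ...but nvmem still holds the stale pointer and nothing
         * pins the desc for nvmem's lifetime - its next access
         * (e.g. gpiod_set_value() on a write) hits an invalid desc.
         */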
>
> I'm so confused because I keep believing it is reference counted
> elsewhere.
>
> struct gpio_desc *d always comes from the corresponding
> struct gpio_device *descs array. This:
>
> struct gpio_device {
>         int id;
>         struct device dev;
>         (...)
>         struct gpio_desc *descs;
>         (...)
> };
>
> This array is allocated in gpiochip_add_data_with_key() like this:
>
> gdev->descs = kcalloc(chip->ngpio, sizeof(gdev->descs[0]), GFP_KERNEL);
>
> Then it gets freed in gpiodevice_release():
>
> static void gpiodevice_release(struct device *dev)
> {
>         struct gpio_device *gdev = dev_get_drvdata(dev);
>
>         (...)
>         kfree(gdev->descs);
>         kfree(gdev);
> }
>
> This is the .release function for gdev->dev, the device embedded in
> struct gpio_device, i.e. the same device that contains the descs in
> the first place. So the descs array just lives and dies with the
> struct gpio_device.
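
In other words the driver core only calls gpiodevice_release() from
put_device() once the refcount of gdev->dev drops to zero - roughly
this wiring (paraphrased from gpiochip_add_data_with_key() as I read
it, not copied verbatim):

        /* The release callback runs on the last put_device(). */
        device_initialize(&gdev->dev);
        gdev->dev.release = gpiodevice_release;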
>
> struct gpio_device does *NOT* die in the devm_* destructor that gets called
> when someone has e.g. added a gpiochip using devm_gpiochip_add_data().
>
> I think the above observation is crucial: the lifetimes of struct gpio_chip
> and struct gpio_device are decoupled. When the struct gpio_chip dies, that
> just "numbs" all GPIO descriptors, but they stay around, along with the
> struct gpio_device that contains them, until the last user is gone.
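
That "numbing" is what the VALIDATE_DESC*() macros in gpiolib.c
implement - roughly this check (paraphrased from memory):

        /*
         * Rough shape of the check in VALIDATE_DESC(): a desc whose
         * chip is gone is rejected, but dereferencing the desc is
         * still safe because the gpio_device holding it is refcounted.
         */
        if (!desc->gdev->chip) {
                dev_warn(&desc->gdev->dev,
                         "%s: backing chip is gone\n", __func__);
                return 0;
        }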
>
> The struct gpio_device is reference counted by the call to
> get_device(&gdev->dev) in gpiod_request(), which is on the path of
> gpiod_get_[index]().
>
> If a consumer gets a gpio_desc using any of the gpiod_get* APIs, this
> reference count gets increased, and it gets decreased by every
> gpiod_put() or by the managed resources.
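
So from the consumer's point of view the pairing is simply this (the
"reset" con_id is just an example):

        /*
         * gpiod_get() takes a reference on the backing gpio_device
         * (via get_device() in gpiod_request()); gpiod_put() drops it.
         */
        struct gpio_desc *desc;

        desc = gpiod_get(dev, "reset", GPIOD_OUT_LOW);
        if (IS_ERR(desc))
                return PTR_ERR(desc);

        /* ... use the descriptor ... */

        gpiod_put(desc);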
>
> So should you not rather exploit this fact and just add something
> like:
>
> void gpiod_reference(struct gpio_desc *desc)
> {
>         struct gpio_device *gdev;
>
>         /* the _VOID variant of VALIDATE_DESC() for void functions */
>         VALIDATE_DESC_VOID(desc);
>         gdev = desc->gdev;
>         get_device(&gdev->dev);
> }
>
> void gpiod_unreference(struct gpio_desc *desc)
> {
>         struct gpio_device *gdev;
>
>         VALIDATE_DESC_VOID(desc);
>         gdev = desc->gdev;
>         put_device(&gdev->dev);
> }
>
> This should make sure the desc and the backing gpio_device
> stay around until all references are gone.
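
If that works, nvmem could then pin the descriptor it is handed for
its own lifetime - a sketch using the helpers proposed above
(hypothetical nvmem code):

        /* In nvmem_register(): pin the desc passed in by the creator. */
        if (config->wp_gpio) {
                nvmem->wp_gpio = config->wp_gpio;
                gpiod_reference(nvmem->wp_gpio);
        }

        /* In the nvmem teardown path: drop our reference. */
        gpiod_unreference(nvmem->wp_gpio);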
>
> NB: We also issue try_module_get() on the module that drives the
> GPIO chip, which makes it impossible to unload that module while it
> has active GPIOs. I think maybe the whole logic inside gpiod_request()
> is needed to properly add an extra reference to a gpiod, or else
> someone can (theoretically) pull the module out from underneath it.
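
Right - for reference, gpiod_request() today pairs the two, roughly
(paraphrased, not copied verbatim):

        /*
         * Paraphrased from gpiod_request(): pin the module driving
         * the chip first, then the gpio_device itself; gpiod_free()
         * undoes both with module_put() and put_device().
         */
        if (!try_module_get(gdev->owner))
                return -ENODEV;
        get_device(&gdev->dev);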
>
Thanks a lot for the detailed explanation. I'll make some time
(hopefully soon) to actually test this path and let you know if it
works as expected.

Best regards,
Bartosz Golaszewski