Message-ID: <2562815.JQOcF2Dmj3@percival>
Date: Fri, 7 Dec 2012 16:02:02 +0900
From: Alex Courbot <acourbot@...dia.com>
To: Guenter Roeck <linux@...ck-us.net>
CC: Grant Likely <grant.likely@...retlab.ca>,
Linus Walleij <linus.walleij@...aro.org>,
Arnd Bergmann <arnd@...db.de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"gnurou@...il.com" <gnurou@...il.com>
Subject: Re: [RFC] gpiolib: introduce descriptor-based GPIO interface
Hi Guenter,
On Friday 07 December 2012 10:49:47 Guenter Roeck wrote:
> My own idea for a solution was to keep integer based handles, but replace
> gpio_desc[] with a hash table. But ultimately I don't really care how
> it is done.
>
> Do you have a solution for gpiochip_find_base() in mind, and how to handle
> reservations ? I had thought about using bit maps, but maybe there is
> a better solution.
My plan so far is to use a sorted linked list of gpio_chips. Each chip
records its base address and size, so free areas can be found in a single
pass over the list. The current gpiochip_find_base() starts from
ARCH_NR_GPIOS and searches backwards through the integer space for a free
range; the same behavior can be reproduced here if that is deemed better
(GPIO numbers might become high, but since we want to hide them anyway, this
should not matter).
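To make the idea concrete, here is a minimal userspace sketch of that single-pass search. All names and the layout are illustrative models of what I have in mind, not the actual gpiolib structures:

```c
#include <stddef.h>

/* Userspace model only: chips sit in a list sorted by descending base,
 * and one pass finds a free range, searching downward from
 * ARCH_NR_GPIOS the way the current gpiochip_find_base() does. */
#define ARCH_NR_GPIOS 256

struct chip {
	int base;
	int ngpio;
	struct chip *next;	/* next chip, with a lower base */
};

/* Return a base for a chip needing 'ngpio' lines, or -1 if none fits. */
static int find_base(const struct chip *head, int ngpio)
{
	int limit = ARCH_NR_GPIOS;	/* top of the not-yet-examined region */
	const struct chip *c;

	for (c = head; c; c = c->next) {
		if (limit - (c->base + c->ngpio) >= ngpio)
			return limit - ngpio;	/* gap above this chip fits */
		limit = c->base;		/* keep looking below it */
	}
	return limit >= ngpio ? limit - ngpio : -1;	/* gap below all chips */
}

/* Example layout: one chip at the top of the space, one at the bottom. */
static struct chip low  = { .base = 0,   .ngpio = 32, .next = NULL };
static struct chip high = { .base = 200, .ngpio = 56, .next = &low };
```

With that example layout, find_base(&high, 16) lands at the top of the gap between the two chips, so registration stays a single list walk.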
The drawback of the list is that fetching the descriptor corresponding to a
GPIO number becomes linear instead of constant-time, but (1) the number of
gpio_chips on a system should never grow very high, and (2) this is a good
incentive to use the descriptor-based API instead. :) Existing code could
easily be converted: once a GPIO is acquired, its number should be converted
immediately to a descriptor and the gpiod_* functions used from then on. We
can probably write a sed or Coccinelle rule to do that throughout the whole
kernel.
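For illustration, the lookup I have in mind would look something like the following userspace model (again, names are illustrative, not the real gpiolib API). Code that already holds a descriptor never pays the linear walk:

```c
#include <stddef.h>

/* Userspace model only: descriptors live in per-chip arrays, and
 * translating a legacy GPIO number walks the chip list, which is
 * linear in the number of chips. */
struct gpio_desc {
	unsigned long flags;
};

struct gpio_chip {
	int base;
	int ngpio;
	struct gpio_desc *desc;		/* array of ngpio descriptors */
	struct gpio_chip *next;
};

static struct gpio_desc *gpio_to_desc(struct gpio_chip *chips, int gpio)
{
	struct gpio_chip *c;

	for (c = chips; c; c = c->next)
		if (gpio >= c->base && gpio < c->base + c->ngpio)
			return &c->desc[gpio - c->base];
	return NULL;	/* number covered by no registered chip */
}

/* Example chip covering global numbers 100..107. */
static struct gpio_desc bank_desc[8];
static struct gpio_chip bank = { .base = 100, .ngpio = 8,
				 .desc = bank_desc, .next = NULL };
```

So legacy callers pay for the translation once at acquisition time, and everything after that operates on the descriptor directly.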
gpiochip_reserve() will require some more thought under this model, but
something like a dummy chip could probably be inserted into the list. It
would need to be statically allocated, however, since memory allocation
cannot be used at that point.
Actually, the only user of gpiochip_reserve() seems to be Tosa, so I wonder
whether we could simply get rid of it instead?
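If we do keep it, the dummy-chip idea could be sketched like this (userspace model, illustrative names only): the reservation is a statically allocated placeholder entry with no operations, linked into the list so that range searches treat its span as occupied.

```c
#include <stddef.h>

/* Userspace model only.  Static allocation is the point here:
 * gpiochip_reserve() can run before the allocators are available. */
struct chip_entry {
	int base, ngpio;
	struct chip_entry *next;
};

/* The reservation itself: no dynamic allocation involved. */
static struct chip_entry reservation = { .base = 64, .ngpio = 16 };
static struct chip_entry *chip_list = &reservation;

/* Reject any candidate range overlapping an entry, real or dummy. */
static int range_is_free(int base, int ngpio)
{
	struct chip_entry *c;

	for (c = chip_list; c; c = c->next)
		if (base < c->base + c->ngpio && c->base < base + ngpio)
			return 0;
	return 1;
}
```

A chip later registering over GPIOs 64..79 would then be refused, which is all gpiochip_reserve() needs to guarantee.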
> If/when you have some code to test, please let me know.
Sure!
> > + mutex_lock(&sysfs_lock);
> >
> > /* can't export until sysfs is available ... */
> > if (!gpio_class.p) {
> >
> > @@ -713,13 +743,7 @@ int gpio_export(unsigned gpio, bool direction_may_change)
> >
> > return -ENOENT;
>
> mutex is not released here.
Oops.
> > - chip = gpio_to_chip(gpio);
> > + chip = desc->chip;
> > + gpio = gpio_chip_offset(desc);
>
> Might be better to use a separate variable named 'offset' when dealing with
> the offset, to avoid confusion and accidential bugs. You are doing it
> below, so you might as well do it everywhere. This would also simplify the
> code in a couple of places where gpio is first converted into an offset
> only to use "chip->base + gpio" later on.
Makes sense. Fixed, thanks.
> > - __func__, gpio, err);
> > + __func__, offset + chip->base, err);
>
> I know I am nitpicking, but everywhere in the existing code it is
> "chip->base + offset/gpio".
Fixed.
> > return chip->to_irq ? chip->to_irq(chip, gpio - chip->base) :
> > -ENXIO;
>
> s/gpio - chip->base/gpio_chip_offset(desc)/
>
> then you don't need gpio at all.
Done, thanks.
> > - chip = gpio_to_chip(gpio);
> > + chip = desc->chip;
> > + gpio = desc_to_gpio(desc);
> >
> > trace_gpio_value(gpio, 0, value);
> > if (test_bit(FLAG_OPEN_DRAIN, &gpio_desc[gpio].flags))
>
> Use desc directly.
>
> > - _gpio_set_open_drain_value(gpio, chip, value);
> > + _gpio_set_open_drain_value(desc, value);
> >
> > else if (test_bit(FLAG_OPEN_SOURCE, &gpio_desc[gpio].flags))
>
> Use desc directly.
Right.
Thanks,
Alex.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/