Message-ID: <20250312121053.GA132624@rigel>
Date: Wed, 12 Mar 2025 20:10:53 +0800
From: Kent Gibson <warthog618@...il.com>
To: David Jander <david@...tonic.nl>
Cc: Bartosz Golaszewski <brgl@...ev.pl>,
Bartosz Golaszewski <bartosz.golaszewski@...aro.org>,
Linus Walleij <linus.walleij@...aro.org>,
linux-gpio@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: regression: gpiolib: switch the line state notifier to atomic
unexpected impact on performance
On Wed, Mar 12, 2025 at 11:24:01AM +0100, David Jander wrote:
> >
> > Ok, if the issue is purely the name -> (chip,offset) mapping it is pretty
> > safe to assume that line names are immutable - though not unique, so
> > caching the mapping should be fine.
>
> On our board that would be fine, but what about hot-pluggable GPIO
> chips/devices, or modules that are loaded at a later time? I was thinking
> about libgpiod in general...
>
Indeed. A general solution suitable for libgpiod is a hard problem.
It is probably something better left to a higher layer.
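Something along these lines (an untested sketch against the v2 Python
bindings) builds that mapping with a single pass over the chips:

import glob

import gpiod

def build_line_map():
    # Scan each gpiochip once and record name -> (chip path, offset).
    # Line names are not guaranteed unique, so this keeps the first
    # match - collect all matches instead if that suits you better.
    line_map = {}
    for path in sorted(glob.glob("/dev/gpiochip*")):
        if not gpiod.is_gpiochip_device(path):
            continue
        with gpiod.Chip(path) as chip:
            for offset in range(chip.get_info().num_lines):
                name = chip.get_line_info(offset).name
                if name and name not in line_map:
                    line_map[name] = (path, offset)
    return line_map

Resolving any number of names is then just dict lookups, and the scan
order is explicit (sorted chip paths here) rather than whatever the
library happens to do.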
> > The kernel can already tell userspace about a number of changes.
> > What changes are you concerned about - adding/removing chips?
>
> Yes, since the patches from Bartosz I understand that is indeed possible now
> ;-)
> No special concern, just thinking about general applicability of caching such
> information in libgpiod (especially considering the old version I am using
> presumably from long before the kernel could do this).
>
The change notifiers you are referring to have been there for some time
- they were first introduced late in uAPI v1.
I was also thinking of udev events for when devices are added or removed.
Though again, not something core libgpiod should be getting involved with.
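If you do cache the mapping, something like pyudev (an assumption on
my part - any udev binding will do) can tell you when to invalidate it:

import pyudev

def watch_gpio_chips(on_change):
    # Fire the callback whenever a gpiochip comes or goes, so the
    # cached name map can be rebuilt.
    context = pyudev.Context()
    monitor = pyudev.Monitor.from_netlink(context)
    monitor.filter_by(subsystem="gpio")
    for device in iter(monitor.poll, None):
        if device.action in ("add", "remove"):
            on_change(device)

That keeps the cache coherent across hotplug without any polling.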
> > > If this had been this slow always (even before 6.13), I would probably have
> > > done things a bit differently and cached the config requests to then "find"
> > > the lines in batches directly working on the character devices instead of
> > > using gpiod, so I could open/close each one just once for finding many
> > > different lines each time.
> >
> > The libgpiod v2 tools do just that - they scan the chips once rather
> > than once per line. But that functionality is not exposed in the
> > libgpiod v2 API as the C interface is hideous and it is difficult to
> > provide well-defined behaviour (e.g. in what order are the chips scanned?).
> > So it is left to the application to determine how they want to do it.
> > There isn't even a find_line() equivalent in v2, IIRC.
>
> I think I should move to v2 as soon as I find the time to do it. ;-)
>
You certainly should - the v2 Python bindings are much more pythonic and
easier to use.
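Requesting and driving a line is about this much (untested; the chip
path and offset are placeholders):

import gpiod
from gpiod.line import Direction, Value

with gpiod.request_lines(
    "/dev/gpiochip0",  # placeholder - use your chip
    consumer="example",
    config={5: gpiod.LineSettings(direction=Direction.OUTPUT)},
) as request:
    request.set_value(5, Value.ACTIVE)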
> In any case, I also could reproduce the issue with the gpiodetect tool from
> v2. You can visually see each found chip being printed individually on the
> terminal with kernel v6.13, while with 6.12 all chip names would appear
> "instantly". Hard to describe with words, but you could in fact tell which
> kernel was running just by looking at the terminal output of "gpiodetect"
> while it was being executed... my board has 16 gpio chips, so you can really
> see it "scrolling" up as it prints them with kernel 6.13.
>
For sure - the close() issue is real.
> > > > Generally an application should request the lines it requires once and hold
> > > > them for the duration. Similarly functions such as find_line() should be
> > > > performed once per line.
> > >
> > > Of course it does that ;-)
> > > This board has a large amount of GPIO lines, and like I said, it is during the
> > > initial configuration phase of the application that I am seeing this problem.
> > >
> >
> > Good to hear - from your earlier description I was concerned that
> > you might be doing it continuously.
>
> Thanks. Tbh, I am quite proud of the efficiency and speed of the application
> itself. Being written in pure Python and running on a rather slow Cortex-A7,
> it is surprisingly fast, controlling 16 stepper motors, reacting to 26 sensor
> inputs changing at a blazingly high pace and continuously sampling several IIO
> ADCs at a 16 kHz sample rate, all with rather low CPU usage (in Python!). It makes
> heavy use of asyncio and thus of course the epoll() event mechanisms the kernel
> provides for GPIOs, IIO, LMC and other interfaces.
You can do some surprising things with Python.
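For the record, the v2 edge-event handling maps nicely onto asyncio -
roughly like this (untested; path, offset and consumer are placeholders):

import gpiod
from gpiod.line import Edge

def watch_line(loop, path, offset, callback):
    # Request edge events and feed them into the asyncio loop via the
    # request's file descriptor - epoll() under the hood.
    req = gpiod.request_lines(
        path,
        consumer="async-watcher",
        config={offset: gpiod.LineSettings(edge_detection=Edge.BOTH)},
    )

    def read_events():
        for event in req.read_edge_events():
            callback(event)

    loop.add_reader(req.fd, read_events)
    return req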
> So you may understand that I was a bit triggered by your suggestion of
> inefficiency initially. Sorry ;-)
>
No problems - I've seen people do some silly things and just wanted
to be sure you weren't one of them ;-).
Glad everything is working for you.
Cheers,
Kent.