Message-ID: <CAAVeFuK35oXsrSsbdhGckjhfC=B6=aizqZYv2+0CQxAYQoYVGw@mail.gmail.com>
Date: Tue, 12 May 2015 15:37:47 +0900
From: Alexandre Courbot <gnurou@...il.com>
To: Johan Hovold <johan@...nel.org>
Cc: Linus Walleij <linus.walleij@...aro.org>,
"linux-gpio@...r.kernel.org" <linux-gpio@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Jonathan Corbet <corbet@....net>,
Harry Wei <harryxiyou@...il.com>,
Arnd Bergmann <arnd@...db.de>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
linux-kernel@...kernel.org, linux-arch <linux-arch@...r.kernel.org>
Subject: Re: [PATCH v2 00/23] gpio: sysfs: fixes and clean ups
On Tue, May 5, 2015 at 12:10 AM, Johan Hovold <johan@...nel.org> wrote:
> These patches fix a number of issues with the gpio sysfs interface,
> including
>
> - fix memory leaks and crashes on device hotplug
> - straighten out the convoluted locking
> - reduce sysfs-interface latencies through more fine-grained locking
> - more clearly separate the sysfs-interface implementation from gpiolib
> core
>
> The first patch is marked for stable and could go into 4.1. [ It may
> already have been applied but not yet pushed by Linus; it is included
> in v2 for completeness. ]
>
> Unfortunately we can't just kill the gpio sysfs interface, but these
> patches will make it more manageable and should allow us to implement a
> new user-space interface while maintaining the old one (for a while at
> least) without losing our sanity.
>
> Note that there is still a race between chip remove and gpiod_request (and
> therefore sysfs export), which needs to be fixed separately (for instance as
> part of a generic solution to chip hotplugging).
Reiterating my
Reviewed-by: Alexandre Courbot <acourbot@...dia.com>
on this very nice series.
--