Message-ID: <CAA9_cmfGQ0pXJ0vQt0uRUvz7EZYKv4goHcpf3Ewbn3rM0N204w@mail.gmail.com>
Date: Wed, 15 Jul 2015 10:00:54 -0700
From: Dan Williams <dan.j.williams@...il.com>
To: Laurent Pinchart <laurent.pinchart@...asonboard.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Tejun Heo <tj@...nel.org>
Subject: Re: Is devm_* broken ?
[ adding Tejun ]
On Tue, Jul 14, 2015 at 3:34 PM, Laurent Pinchart
<laurent.pinchart@...asonboard.com> wrote:
> Hello,
>
> I came to realize not too long ago that the following sequence of events will
> lead to a crash with any platform driver that uses devm_* and creates device
> nodes.
>
> 1. Get a platform device bound to its driver
> 2. Open the corresponding device node in userspace and keep it open
> 3. Unbind the platform device from its driver through sysfs
>
> echo <device-name> > /sys/bus/platform/drivers/<driver-name>/unbind
>
> (or for hotpluggable devices just unplug the device)
>
> 4. Close the device node
> 5. Enjoy the fireworks
>
> While having a device node open prevents modules from being unloaded, it
> doesn't prevent devices from being unbound from drivers. If the driver uses
> devm_* helpers to allocate memory, the memory will be freed when the device is
> unbound from the driver, but that memory will still be used by any operation
> touching an open device node.
>
> Is devm_* inherently broken? It's so widely used, tell me I'm missing
> something obvious.
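If I'm reading this right, the failing pattern is roughly the
following (a minimal sketch, all names made up, not from any real
driver):

#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/platform_device.h>

struct foo_priv {
	struct miscdevice misc;
	u32 status;			/* some per-device state */
};

static int foo_open(struct inode *inode, struct file *file)
{
	/* misc_open() has stored the miscdevice in file->private_data */
	return 0;
}

static ssize_t foo_read(struct file *file, char __user *buf,
			size_t count, loff_t *ppos)
{
	struct foo_priv *priv = container_of(file->private_data,
					     struct foo_priv, misc);

	/* If the device was unbound after the open(), devres has already
	 * freed priv and this dereference is a use-after-free. */
	return simple_read_from_buffer(buf, count, ppos,
				       &priv->status, sizeof(priv->status));
}

static const struct file_operations foo_fops = {
	.owner = THIS_MODULE,	/* pins the module, not the driver binding */
	.open = foo_open,
	.read = foo_read,
};

static int foo_probe(struct platform_device *pdev)
{
	struct foo_priv *priv;

	/* Freed automatically as soon as the device is unbound from the
	 * driver, whether or not a device node is still open... */
	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	priv->misc.minor = MISC_DYNAMIC_MINOR;
	priv->misc.name = "foo";
	priv->misc.fops = &foo_fops;
	platform_set_drvdata(pdev, priv);
	return misc_register(&priv->misc);
}

static int foo_remove(struct platform_device *pdev)
{
	struct foo_priv *priv = platform_get_drvdata(pdev);

	/* ...misc_deregister() blocks new opens, but fds opened before
	 * the unbind keep calling foo_fops against the stale priv. */
	misc_deregister(&priv->misc);
	return 0;
}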
Sounds like a real problem. The drivers I've used devm with have an
upper layer that prevents this crash, but that's not much consolation.
I think adding a lifetime to devm allocations would be useful; that way
->probe() and open() can do a devres_get() while ->remove() and
close() can do a devres_put() (rough sketch at the end of this mail).
Perhaps I'm also missing something obvious though...
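Something like the sketch below is the lifetime I have in mind.
devres has no such refcounting today (the existing devres_get() just
looks up or adds a resource), so this open-codes it with a plain kref
and kzalloc(); read it as the semantics a devres_get()/devres_put()
pair could provide, not as existing API:

#include <linux/fs.h>
#include <linux/kref.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

struct foo_priv {
	struct kref ref;
	struct miscdevice misc;
};

static void foo_priv_release(struct kref *ref)
{
	kfree(container_of(ref, struct foo_priv, ref));
}

static int foo_open(struct inode *inode, struct file *file)
{
	struct foo_priv *priv = container_of(file->private_data,
					     struct foo_priv, misc);

	kref_get(&priv->ref);			/* open() takes a reference */
	return 0;
}

static int foo_release(struct inode *inode, struct file *file)
{
	struct foo_priv *priv = container_of(file->private_data,
					     struct foo_priv, misc);

	kref_put(&priv->ref, foo_priv_release);	/* close() drops it */
	return 0;
}

static const struct file_operations foo_fops = {
	.owner = THIS_MODULE,
	.open = foo_open,
	.release = foo_release,
};

static int foo_probe(struct platform_device *pdev)
{
	struct foo_priv *priv = kzalloc(sizeof(*priv), GFP_KERNEL);

	if (!priv)
		return -ENOMEM;
	kref_init(&priv->ref);			/* probe() holds the initial ref */
	priv->misc.minor = MISC_DYNAMIC_MINOR;
	priv->misc.name = "foo";
	priv->misc.fops = &foo_fops;
	platform_set_drvdata(pdev, priv);
	return misc_register(&priv->misc);
}

static int foo_remove(struct platform_device *pdev)
{
	struct foo_priv *priv = platform_get_drvdata(pdev);

	misc_deregister(&priv->misc);
	/* The allocation now survives until the last fd is closed. */
	kref_put(&priv->ref, foo_priv_release);
	return 0;
}

With that, the unbind in Laurent's step 3 no longer pulls the memory
out from under the still-open fd.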