Message-ID: <4E4E395B.7070106@redhat.com>
Date: Fri, 19 Aug 2011 12:22:19 +0200
From: Milan Broz <mbroz@...hat.com>
To: "Eric W. Biederman" <ebiederm@...ssion.com>
CC: device-mapper development <dm-devel@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Kay Sievers <kay.sievers@...y.org>,
"David S. Miller" <davem@...emloft.net>, containers@...ts.osdl.org
Subject: Re: [dm-devel] clone() with CLONE_NEWNET breaks kobject_uevent_env()
On 08/19/2011 11:13 AM, Eric W. Biederman wrote:
> Milan Broz <mbroz@...hat.com> writes:
>
> I think the proper fix is to remove the error return from
> kobject_uevent_env and kobject_uevent, and make it harder to get calling
> of this function wrong. Possibly in conjunction with that tag all of
> the memory allocations of kobject_uevent_env with GFP_NOFAIL or
> something so the memory allocator knows that this path is totally
> not able to deal with failure.
>
> Is kobject_uevent_env anything except an asynchronous best effort
> notification to user-space that a device has come or gone?
Unfortunately, for device-mapper it is. libdevmapper
depends on knowing whether the uevent was sent, because the udev rules
use a semaphore to signal that some action was taken.
So if dm-ioctl returns a flag saying the uevent was not sent, libdevmapper
falls back to a different error path (otherwise it would wait for completion forever).
(TBH I am more and more convinced this was not a very clever concept.)
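To illustrate the pattern (a minimal userspace sketch, not the actual libdevmapper code; the type and function names here are made up, and the semaphore is reduced to a flag instead of a real SysV semop() wait): if the kernel reports that the uevent was generated, the caller waits for the udev rule to post; if not, it must take the error path immediately or it would block forever.

```c
#include <assert.h>
#include <stdbool.h>

enum dm_sync_result { DM_SYNCED, DM_FALLBACK, DM_TIMEOUT };

/* Hypothetical stand-in for the "uevent was generated" flag that
 * dm-ioctl reports back to userspace. */
struct dm_ioctl_result {
    bool uevent_generated;
};

/* Simulated semaphore state: in the real flow a udev rule posts a SysV
 * semaphore once it has processed the uevent for this device. */
static bool udev_rule_ran;

static int wait_for_udev(void)
{
    /* Real code would block in semop(); here we just poll the flag. */
    return udev_rule_ran ? 0 : -1;
}

/* Mirror the fallback described above: if no uevent was sent, waiting
 * on the semaphore would block forever, so take the error path. */
static enum dm_sync_result sync_after_ioctl(struct dm_ioctl_result r)
{
    if (!r.uevent_generated)
        return DM_FALLBACK;
    return wait_for_udev() == 0 ? DM_SYNCED : DM_TIMEOUT;
}
```

This is exactly why a silent failure inside kobject_uevent_env would be a problem for dm: the decision to wait or not hinges on that returned flag.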
But the whole concept of "send the event to a list of namespaces, maybe someone
is listening" also seems not very clever to me :-)
How time-consuming is that? If you create thousands of cloned namespaces,
how will uevent notification perform?
(IOW the first event is sent through netlink and the 999+ others report failure... strange.)
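As a toy model of that scenario (an assumption-laden sketch, not the kernel's netlink code; struct and function names are invented): broadcast one event to N per-namespace "sockets" where only the initial namespace has a listener, and count how many sends fail.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy per-namespace state: only a namespace with a listener (e.g. a
 * udevd bound to the uevent netlink socket) can receive the event. */
struct netns {
    bool has_listener;
};

/* Broadcast one uevent to every namespace; return how many "sends"
 * fail because nobody is listening there. */
static size_t broadcast_uevent(const struct netns *ns, size_t count)
{
    size_t failures = 0;
    for (size_t i = 0; i < count; i++)
        if (!ns[i].has_listener)
            failures++;
    return failures;
}
```

With 1000 namespaces and a listener only in the first one, the loop reports 999 failures for a single logical event, which is the oddity being pointed out above.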
Milan
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/