Message-ID: <20150120105712.GA6260@sig21.net>
Date: Tue, 20 Jan 2015 11:57:12 +0100
From: Johannes Stezenbach <js@...21.net>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: arnd@...db.de, ebiederm@...ssion.com, gnomes@...rguk.ukuu.org.uk,
teg@...m.no, jkosina@...e.cz, luto@...capital.net,
linux-api@...r.kernel.org, linux-kernel@...r.kernel.org,
daniel@...que.org, dh.herrmann@...il.com, tixxdz@...ndz.org
Subject: Re: [PATCH v3 00/13] Add kdbus implementation
On Tue, Jan 20, 2015 at 09:13:59AM +0800, Greg Kroah-Hartman wrote:
> On Tue, Jan 20, 2015 at 12:38:12AM +0100, Johannes Stezenbach wrote:
> > Those automotive applications you
> > were talking about, what was the OS they were ported from
> > and what was the messaging API they used?
>
> They were ported from QNX and I don't know the exact API; it is wrapped
> up in a library layer for them to use. And typically, they run about
> 40 thousand messages in the first few seconds of startup time. Or was
> it 400 thousand? Something huge and crazy to be doing on tiny ARM
> chips, but that's the IVI industry for you :(
So I did some googling and found that in QNX, servers create a
channel to receive messages and clients connect to that channel;
multiple clients can connect to the same channel.
But it is not a bus -- there is no multicast/broadcast, and no
name service or policy rules like D-Bus has. To me it looks
similar in functionality to UNIX domain sockets.
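For reference, the basic send/receive/reply pattern looks roughly
like this (a sketch based on the QNX docs, untested; the function
names are real but all error handling is omitted):

  #include <sys/neutrino.h>
  #include <sys/types.h>

  /* server: create a channel and answer one request */
  void server(void)
  {
      char buf[64], reply[] = "pong";
      int chid = ChannelCreate(0);   /* clients attach to this */
      int rcvid = MsgReceive(chid, buf, sizeof(buf), NULL);
      MsgReply(rcvid, 0, reply, sizeof(reply)); /* unblocks client */
  }

  /* client: attach to the server's channel, one blocking round trip */
  void client(pid_t server_pid, int chid)
  {
      char req[] = "ping", ans[64];
      int coid = ConnectAttach(0, server_pid, chid,
                               _NTO_SIDE_CHANNEL, 0);
      MsgSend(coid, req, sizeof(req), ans, sizeof(ans));
      ConnectDetach(coid);
  }

Note there is no broker in between: MsgSend() blocks until the
server replies, plain point-to-point request/reply.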
My guess is that the people porting from QNX were just confused
and their use of D-Bus was in error. Maybe they should've used
plain sockets, capnproto, ZeroMQ or whatever.
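The equivalent client-side round trip over a plain UNIX domain
socket is about the same amount of code (again just a sketch,
error handling omitted; the socket path is made up):

  #include <string.h>
  #include <sys/socket.h>
  #include <sys/un.h>
  #include <unistd.h>

  void client(void)
  {
      struct sockaddr_un sa = { .sun_family = AF_UNIX };
      char ans[64];
      int fd = socket(AF_UNIX, SOCK_STREAM, 0);

      strcpy(sa.sun_path, "/tmp/demo.sock"); /* made-up path */
      connect(fd, (struct sockaddr *)&sa, sizeof(sa));
      write(fd, "ping", 5);                  /* request */
      read(fd, ans, sizeof(ans));            /* reply */
      close(fd);
  }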
> > As I said before, I'm seeing about a dozen D-Bus messages per minute,
> > nothing that would justify adding kdbus to the kernel for
> > performance reasons. Wrt security I'm also not aware of any
> > open issues with D-Bus. Thus I doubt normal users of D-Bus
> > would see any benefit from kdbus. I also think none of the
> > applications I can install from my distribution has any performance
> > issue with D-Bus.
>
> That's because, in the past, people have not done anything over D-Bus
> on the desktop that really needed performance, due to how slow the
> current implementation is. Now that this is being resolved, that can
> change, and there are demos out there of even streaming audio over
> kdbus with no problems.
>
> But performance is not the only reason we want this in the kernel;
> I listed a whole range of them. Sure, it's great to now be faster,
> cutting down the number of context switches and copies by a huge amount,
> but the other things are equally important for future development
> (namespaces, containers, security, early-boot, etc.)
>
> > And this is the point where I ask myself if I missed something.
>
> Don't focus purely on performance for your existing desktop system;
> that's not the only use case here. There are lots of others, as I
> documented, that can benefit from and want this.
>
> One "fun" thing I've been talking to someone about is the ability to
> even port binder to be on top of kdbus. But that's just a research
> project, and requires some API changes on the userspace binder side, but
> it shows real promise, and would then mean that we could deprecate the
> old binder code and a few hundred million devices could then use kdbus
> instead. But that's long-term goals, not really all that relevant here,
> but it shows that having a solid bus IPC mechanism is a powerful thing
> that we have been missing in the past from Linux.
Well, IMHO you got it backwards. Before adding a complex new IPC
API to the kernel, you should do your homework and gather some
evidence that there will be enough users to justify the addition.
But maybe I misunderstood the purpose of this thread and you're
just advertising it to find possible users rather than already
proposing it for merging? If someone has a convincing story to
share about why kdbus would solve their IPC needs, I'm all ears.
(I'm sorry, but this implies your responses so far were not
convincing: no verifiable facts, no numbers, no testimonials, etc.)
FWIW, my gut feeling was that the earlier attempts to add a new
IPC primitive like multicast UNIX domain sockets
http://thread.gmane.org/gmane.linux.kernel/1255575/focus=1257999
were a much saner approach. But now I think the comments from
that old thread have not been addressed; instead, the new approach
just made the thing more complex and was put in ipc/ instead of
net/ to bypass the guards.
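For those who don't remember that thread: the idea, as I
understood it, was to give AF_UNIX sockets group semantics
similar to what IP multicast already has, so that one send
reaches all members of a group. Roughly this kind of thing,
but for local sockets (plain UDP/IP shown for illustration
only; the group address and port are made up):

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>

  /* join a multicast group; every member then receives the
   * datagrams sent to the group address */
  int join_group(void)
  {
      struct sockaddr_in sa = { .sin_family = AF_INET,
                                .sin_port = htons(5000) };
      struct ip_mreq mreq;
      int fd = socket(AF_INET, SOCK_DGRAM, 0);

      bind(fd, (struct sockaddr *)&sa, sizeof(sa));
      mreq.imr_multiaddr.s_addr = inet_addr("239.0.0.1");
      mreq.imr_interface.s_addr = htonl(INADDR_ANY);
      setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                 &mreq, sizeof(mreq));
      return fd;  /* recvfrom() now sees traffic to the group */
  }

That is, a small extension of an existing, well-reviewed
mechanism rather than a whole new one.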
Thanks,
Johannes