Message-Id: <1245923790.12994.9.camel@localhost.localdomain>
Date: Thu, 25 Jun 2009 11:56:30 +0200
From: Marcel Holtmann <marcel@...tmann.org>
To: Alan Cox <alan@...rguk.ukuu.org.uk>
Cc: Daniel Walker <dwalker@...o99.com>,
Linus Walleij <linus.ml.walleij@...il.com>,
Brian Swetland <swetland@...gle.com>,
Arve Hjønnevåg <arve@...roid.com>,
Jeremy Fitzhardinge <jeremy@...p.org>,
Greg Kroah-Hartman <greg@...ah.com>,
linux-kernel@...r.kernel.org, hackbod@...roid.com
Subject: Re: [PATCH 1/6] staging: android: binder: Remove some funny && usage
Hi Alan,
> > > What I really want to know, is how this relates to the vmsplice() and
> > > other zero-copy buffer passing schemes already in the kernel. I was
> > > sort of dreaming that D-Bus and other IPC could be accelerated on
> > > top of that.
> >
> > Marcel had mentioned earlier in this thread that D-Bus could be
> > accelerated with shared memory or by moving the dbus-daemon into the
> > kernel. splice() and vmsplice() seem like fairly robust system calls. I
> > would think they could be used also ..
>
> Except for very large amounts of data, what makes you think zero-copy
> buffer passing will be fast? TLB shootdowns are expensive, and they scale
> horribly with threaded apps on multiprocessor systems.
There is always the problem of badly written apps that just copy
megabytes of data from one app to another; those just need fixing.
Lennart posted patches to integrate file descriptor passing into the
D-Bus protocol, which will make it more versatile. For most cases,
just moving file descriptors around will be good enough. Having the
possibility to pass bigger data blobs without too many penalties would
then be an extra bonus.
Regards
Marcel