Message-ID: <20150427171804.GA26277@kroah.com>
Date: Mon, 27 Apr 2015 19:18:04 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: Lukasz Skalski <l.skalski@...sung.com>
Cc: Havoc Pennington <hp@...ox.com>,
Andy Lutomirski <luto@...capital.net>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Arnd Bergmann <arnd@...db.de>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>,
Tom Gundersen <teg@...m.no>, Jiri Kosina <jkosina@...e.cz>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Daniel Mack <daniel@...que.org>,
David Herrmann <dh.herrmann@...il.com>,
Djalal Harouni <tixxdz@...ndz.org>
Subject: Re: [GIT PULL] kdbus for 4.1-rc1
On Mon, Apr 27, 2015 at 10:57:45AM +0200, Lukasz Skalski wrote:
> On 04/24/2015 09:25 PM, Greg Kroah-Hartman wrote:
> > On Fri, Apr 24, 2015 at 04:34:34PM +0200, Lukasz Skalski wrote:
> >> On 04/24/2015 04:19 PM, Havoc Pennington wrote:
> >>> On Fri, Apr 24, 2015 at 9:50 AM, Lukasz Skalski <l.skalski@...sung.com> wrote:
> >>>> - client: http://fpaste.org/215156/
> >>>>
> >>>
> >>> Cool - it might also be interesting to try this without blocking round
> >>> trips, i.e. send requests as quickly as you can, and collect replies
> >>> asynchronously. That's how people ideally use dbus. It should
> >>> certainly reduce the total benchmark time, but just wondering if this
> >>> usage increases or decreases the delta between userspace daemon and
> >>> kdbus.
> >>
> >> No problem - I'll also prepare an asynchronous version.
> >
> > That would be great to see as well. Many thanks for doing this work.
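For context, the non-blocking pattern Havoc is describing looks roughly
like this with GDBus.  This is an untested sketch, not the fpaste client;
the bus name, object path, interface and method below are made up:

#include <gio/gio.h>

#define N_CALLS 128

static GMainLoop *loop;
static guint replies_left = N_CALLS;

static void
on_reply (GObject *source, GAsyncResult *res, gpointer user_data)
{
  GVariant *ret;

  /* Collect the reply; errors are ignored for brevity. */
  ret = g_dbus_connection_call_finish (G_DBUS_CONNECTION (source), res, NULL);
  if (ret)
    g_variant_unref (ret);

  if (--replies_left == 0)
    g_main_loop_quit (loop);
}

int
main (void)
{
  GDBusConnection *conn = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, NULL);
  guint i;

  if (!conn)
    return 1;

  loop = g_main_loop_new (NULL, FALSE);

  /* Queue all requests up front; replies arrive via on_reply() from the
   * main loop.  A blocking client would instead do one
   * g_dbus_connection_call_sync() round trip per iteration. */
  for (i = 0; i < N_CALLS; i++)
    g_dbus_connection_call (conn,
                            "org.example.Server",
                            "/org/example/Server",
                            "org.example.Echo",
                            "Ping",
                            g_variant_new ("(s)", "payload"),
                            NULL, G_DBUS_CALL_FLAGS_NONE,
                            -1, NULL, on_reply, NULL);

  g_main_loop_run (loop);
  g_object_unref (conn);
  return 0;
}

Whether that pattern grows or shrinks the daemon/kdbus delta is exactly
the interesting part.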
>
> As proposed by Havoc and Greg, I've created a simple benchmark for
> asynchronous calls:
>
> - server: http://fpaste.org/215157/ (the same as in the previous test)
> - client: http://fpaste.org/215724/ (asynchronous version)
>
> For the asynchronous version of the client I had to decrease the number
> of calls to 128 (for the synchronous version it was 20000 calls),
> otherwise we could exceed the maximum number of pending replies per
> connection.
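One way to keep the full 20000-call workload while staying under that
limit would be to cap the number of calls in flight and issue the next
request from the reply callback.  Another untested sketch, again with
placeholder names and an assumed window of 64:

#include <gio/gio.h>

#define TOTAL_CALLS   20000
#define MAX_IN_FLIGHT 64   /* assumed to be below the pending-reply limit */

static GMainLoop *loop;
static GDBusConnection *conn;
static guint sent, completed;

static void dispatch_one (void);

static void
on_reply (GObject *source, GAsyncResult *res, gpointer user_data)
{
  GVariant *ret;

  ret = g_dbus_connection_call_finish (G_DBUS_CONNECTION (source), res, NULL);
  if (ret)
    g_variant_unref (ret);

  if (++completed == TOTAL_CALLS)
    g_main_loop_quit (loop);
  else if (sent < TOTAL_CALLS)
    dispatch_one ();          /* refill the window */
}

static void
dispatch_one (void)
{
  sent++;
  g_dbus_connection_call (conn, "org.example.Server", "/org/example/Server",
                          "org.example.Echo", "Ping",
                          g_variant_new ("(s)", "payload"),
                          NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL,
                          on_reply, NULL);
}

int
main (void)
{
  guint i;

  conn = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, NULL);
  if (!conn)
    return 1;

  loop = g_main_loop_new (NULL, FALSE);

  /* Prime the window; on_reply() keeps it full until all calls are done. */
  for (i = 0; i < MAX_IN_FLIGHT && sent < TOTAL_CALLS; i++)
    dispatch_one ();

  g_main_loop_run (loop);
  g_object_unref (conn);
  return 0;
}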
>
> The test results are following:
>
> +--------------+--------------------+--------------------+
> |              |    Elapsed time    |    Elapsed time    |
> | Message size |  GLIB WITH NATIVE  | GLIB + DBUS-DAEMON |
> |   [bytes]    |   KDBUS SUPPORT*   |                    |
> +--------------+--------------------+--------------------+
> |              | 1) 0.018639 s      | 1) 0.029947 s      |
> |     1000     | 2) 0.017045 s      | 2) 0.032812 s      |
> |              | 3) 0.017490 s      | 3) 0.029971 s      |
> |              | 4) 0.018001 s      | 4) 0.026485 s      |
> +--------------+--------------------+--------------------+
> |              | 1) 0.019898 s      | 1) 0.040914 s      |
> |    10000     | 2) 0.022187 s      | 2) 0.033604 s      |
> |              | 3) 0.020854 s      | 3) 0.037616 s      |
> |              | 4) 0.020020 s      | 4) 0.033772 s      |
> +--------------+--------------------+--------------------+
> *all tests were performed without using the memfd mechanism.
>
> And as I wrote in my previous mail, the kdbus transport for GLib is not
> finished yet and there is still room for improvement, so please do not
> treat these test results as final.
Very nice, thanks. Any chance you can bump those message sizes up to
over 512k? I think that will show a huge difference. Even just under
512k should be faster, as you have shown, but I have been told that for
messages larger than 512k, the D-Bus daemon has "issues", which has kept
people from wanting to use messages that large before now.
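Something along these lines should exercise the large-message path (an
untested sketch with placeholder names; I assume your client can simply
parameterize the payload size):

#include <gio/gio.h>
#include <string.h>

int
main (void)
{
  gsize size = 600 * 1024;                     /* just over 512 KiB */
  gchar *buf = g_malloc (size);
  GDBusConnection *conn;
  GVariant *payload, *ret;

  memset (buf, 'x', size);

  conn = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, NULL);
  if (!conn)
    return 1;

  /* Wrap the buffer as a byte array; g_free() is called when the
   * variant is destroyed. */
  payload = g_variant_new_from_data (G_VARIANT_TYPE ("ay"),
                                     buf, size, TRUE,
                                     g_free, buf);

  ret = g_dbus_connection_call_sync (conn,
                                     "org.example.Server",
                                     "/org/example/Server",
                                     "org.example.Echo", "PingBytes",
                                     g_variant_new ("(@ay)", payload),
                                     NULL, G_DBUS_CALL_FLAGS_NONE,
                                     -1, NULL, NULL);
  if (ret)
    g_variant_unref (ret);
  g_object_unref (conn);
  return 0;
}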
thanks again,
greg k-h