Message-ID: <553A1AE6.9060902@zonque.org>
Date:	Fri, 24 Apr 2015 12:28:54 +0200
From:	Daniel Mack <daniel@...que.org>
To:	Borislav Petkov <bp@...en8.de>,
	Steven Noonan <steven@...inklabs.net>
CC:	David Herrmann <dh.herrmann@...il.com>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Arnd Bergmann <arnd@...db.de>,
	One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>,
	Tom Gundersen <teg@...m.no>, Jiri Kosina <jkosina@...e.cz>,
	Andy Lutomirski <luto@...capital.net>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Djalal Harouni <tixxdz@...ndz.org>
Subject: Re: [GIT PULL] kdbus for 4.1-rc1

Hi,

On 04/24/2015 11:04 AM, Borislav Petkov wrote:
> On Thu, Apr 23, 2015 at 10:02:52PM -0700, Steven Noonan wrote:
>> On Thu, Apr 23, 2015 at 2:41 PM, Borislav Petkov <bp@...en8.de> wrote:
>>> On Thu, Apr 23, 2015 at 11:22:39PM +0200, David Herrmann wrote:
>>>> No it's not. O(256) equals O(1).
>>>
>>> Ok, you're right. Maybe O() was not the right thing to use when trying
>>> to point out that iterating over 256 hash buckets and then following the
>>> chain in each bucket per packet broadcast looks like a lot.
>>>
>>
>> Heh. I guess you could call it an "expensive O(1)". While big-O
>> notation is useful for describing algorithm scalability with respect
>> to input size, it falls flat on its face when trying to articulate
>> impact in measurable units.
> 
> Right, so in thinking about this more today, on a fresh head, it still
> is O(n) because we do broadcast the packet to n recipients - the
> hash_for_each() thing iterates over 256 hash buckets and also follows
> the linked list chain in each bucket. The chain's length depends on how
> many connections are in the bucket, i.e. recipients, and I'd guess that
> number changes dynamically, so the cost is probably linear.

Sure, for broadcasts, we have to walk the list of peers connected to the
bus and see which of them are interested in a particular message. We do
that by looking at each peer's match rules, which are based on
well-known names, IDs, notification types or bloom filters. The policy
logic limits this further, as receivers of a broadcast must have TALK
access to the sender.
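
To make that concrete, here's a rough userspace sketch of the broadcast
walk. None of this is the actual kdbus code; all the types and function
names below are made up for illustration:

	#include <stdbool.h>
	#include <stddef.h>

	#define NUM_BUCKETS 256

	struct peer {
		unsigned long id;
		struct peer *next;	/* chain within one hash bucket */
		/* match rules, policy, metadata flags, ... */
	};

	struct bus {
		struct peer *buckets[NUM_BUCKETS];
	};

	/* Stand-in for the match rules plus the TALK policy check. */
	static bool peer_wants_message(const struct peer *p, const void *msg)
	{
		(void)p; (void)msg;
		return true;	/* real code checks names/IDs/bloom filters */
	}

	/* Stand-in for queueing the message on the peer's pool. */
	static void deliver(struct peer *p, const void *msg)
	{
		(void)p; (void)msg;
	}

	static void broadcast(struct bus *bus, const void *msg)
	{
		/* walk all 256 buckets and every chained peer in each */
		for (size_t i = 0; i < NUM_BUCKETS; i++)
			for (struct peer *p = bus->buckets[i]; p; p = p->next)
				if (peer_wants_message(p, msg))
					deliver(p, msg);
	}

That's the walk over 256 buckets plus n chained peers that Borislav is
describing above.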

If these rules let a message pass, all the metadata that the receiving
peer asked for (by setting a flag at connect time) is collected, unless
it has already been collected for some other peer for the same message.
In other words, in the worst case, we collect each metadata item exactly
once per message.

If none of the connections whose match and policy rules let a message
pass is interested in any metadata items, nothing is collected at all.
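
In sketch form (same caveat: made-up names, not the real code), the
collect-once logic boils down to a per-message bitmask:

	struct metadata {
		unsigned int collected;	/* items already snapshotted */
		/* pid, uid, comm, cgroup path, capabilities, ... */
	};

	/* Stand-in for snapshotting the missing items from the sender. */
	static void collect_from_sender(struct metadata *meta,
					unsigned int items)
	{
		(void)meta; (void)items;
	}

	static void ensure_metadata(struct metadata *meta,
				    unsigned int wanted)
	{
		unsigned int missing = wanted & ~meta->collected;

		if (!missing)
			return;	/* already done for an earlier receiver */

		collect_from_sender(meta, missing);
		meta->collected |= missing;
	}

If no receiver's 'wanted' mask has any bits set, collect_from_sender()
is never called for that message.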

The reason why the peers are organized in a hash table is that we have
to look them up by ID for unicast messages.
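
Continuing the sketch from above, the unicast path is the one that
actually needs the hash table:

	static struct peer *find_peer(struct bus *bus, unsigned long id)
	{
		struct peer *p;

		for (p = bus->buckets[id % NUM_BUCKETS]; p; p = p->next)
			if (p->id == id)
				return p;
		return NULL;
	}

That's the usual expected-O(1) lookup, while broadcasts degenerate to
the full walk shown earlier.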

> And then there's the collection of, let's call it metadata of
> questionable use, *per* packet which is pretty expensive in my book.
> It becomes even more expensive if it is completely useless as in, the
> receiving side doesn't need it all.

If the receiving side doesn't need it, it shouldn't opt in to that
piece of information.

The metadata logic is really only there so receiving peers are directly
supplied with information that they would otherwise look up themselves
from /proc or something. Also, we intentionally collect metadata at send
time and for every message, so that it reflects the state of the sender
at the time of sending. This way, the information is not subject to the
races inherent in asynchronous lookups.
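
Again as a sketch with made-up names (and simplified to collect the
union of all receivers' flags up front, where the real logic collects
lazily per receiver as described above), the point is that the snapshot
happens synchronously in the sender's context:

	struct msg {
		struct metadata meta;	/* filled in the sender's context */
		/* payload, destination, ... */
	};

	static void send_msg(struct bus *bus, struct msg *m,
			     unsigned int wanted)
	{
		/*
		 * We still run in the sender's context here, so whatever
		 * we snapshot describes the sender at send time. A
		 * receiver doing the same lookup via /proc later would
		 * race against the sender exec()ing or exiting.
		 */
		ensure_metadata(&m->meta, wanted);
		broadcast(bus, m);
	}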

> Now, one might argue that you have to do O(n) work when broadcasting
> to n recipients anyway and you can't get that cheaper but maybe the
> design is not optimal. Maybe it could be made to not broadcast at all,
> or broadcast to a subset of recipients, only those which are actually
> interested in the broadcast.

That's exactly what happens :) There are some more details on this in
kdbus.match(7).


Thanks,
Daniel
