Message-ID: <CAGXxSxU7jMRC6u5nnFovV2TUpNGC1qp7Kwrp1uwjG1JBg+Pfzg@mail.gmail.com>
Date: Fri, 7 Aug 2015 23:37:35 +0800
From: cee1 <fykcee1@...il.com>
To: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Cc: Daniel Mack <daniel@...que.org>,
David Herrmann <dh.herrmann@...il.com>,
Tom Gundersen <teg@...m.no>,
"Kalle A. Sandstrom" <ksandstr@....fi>,
Greg KH <gregkh@...uxfoundation.org>,
Borislav Petkov <bp@...en8.de>,
One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>,
Havoc Pennington <havoc.pennington@...il.com>,
Djalal Harouni <tixxdz@...ndz.org>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andy Lutomirski <luto@...capital.net>
Subject: Re: kdbus: to merge or not to merge?
2015-08-07 2:43 GMT+08:00 Andy Lutomirski <luto@...capital.net>:
> On Thu, Aug 6, 2015 at 11:14 AM, Daniel Mack <daniel@...que.org> wrote:
>> On 08/06/2015 05:21 PM, Andy Lutomirski wrote:
>>> Maybe gdbus really does use kdbus already, but on
>>> very brief inspection it looked like it didn't, at least on my test VM.
>>
>> No, it's not in any released version yet. The patches for that are being
>> worked on though and look promising.
>>
>>> If the client buffers on !EPOLLOUT and has a monster buffer, then
>>> that's the client's problem.
>>>
>>> If every single program has a monster buffer, then it's everyone's
>>> problem, and the size of the problem gets multiplied by the number of
>>> programs.
>>
>> The size of the memory pool of a bus client is chosen individually by
>> each client during the HELLO call. It's pretty much the same as if the
>> client allocated the buffer itself, except that the kernel does it on
>> its behalf.
>>
>> Also note that kdbus features peer-to-peer quota accounting, so a
>> single bus connection cannot DoS another one by filling its buffer.
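
[ For readers not following the patch set: the pool size is what the
client passes in the HELLO ioctl, roughly as below. The struct, ioctl
and path names are from the kdbus patches as I remember them -- treat
everything here as illustrative, not authoritative. ]

/* Illustrative only: connect to a bus with a caller-chosen pool size.
 * The kernel backs the pool with tmpfs pages on the client's behalf;
 * the client mmaps it read-only to consume messages.
 */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kdbus.h>	/* from the patch set, not mainline */

int main(void)
{
	struct kdbus_cmd_hello hello = {
		.size      = sizeof(hello),
		.pool_size = 16 * 1024 * 1024,	/* 16 MiB receive pool */
	};
	int fd = open("/sys/fs/kdbus/0-system/bus", O_RDWR | O_CLOEXEC);

	ioctl(fd, KDBUS_CMD_HELLO, &hello);

	/* Map the receive pool the kernel allocated on our behalf. */
	void *pool = mmap(NULL, hello.pool_size, PROT_READ, MAP_SHARED,
			  fd, 0);
	(void)pool;
	return 0;
}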
>
> I haven't looked at the quota code at all.
>
> Nonetheless, it looks like the slice logic (aside: it looks *way* more
> complicated than necessary -- what's wrong with circular buffers?)
> will, under most (but not all!) workloads, concentrate access to a
> smallish fraction of the pool. This is IMO bad, since it means that
> most of the time most of the pool will remain uncommitted. If, at
> some point, something causes the access pattern to change and hit all
> the pages (even just once), suddenly all of the pools get committed,
> and your memory usage blows up.
>
> Again, please stop blaming the clients. In practice, kdbus is a
> system involving the kernel, systemd, sd-bus, and other stuff, mostly
> written by the same people. If kdbus gets merged and it survives but
> half the clients blow up and people's systems fall over, that's not
> okay.
Any comments on the questions Andy raised?

In kdbus, a sender writes into pages of the receiver's tmpfs pool.
Which memcg is charged for those pages? Depending on the answer, this
either lets the receiver's buffer escape its own memcg limit, or lets
a sender push the receiver up against its limit.
Also, I'm curious how similar problems play out in these cases (rough
sketches for each follow below):
1. A UNIX domain server (SOCK_STREAM or SOCK_DGRAM) replies to its
clients, but some clients consume the messages __too slowly__. Will
the server block, or can it keep serving other clients instead?
2. Processes listen on NETLINK_KOBJECT_UEVENT netlink sockets, but
some of them consume uevents __too slowly__ while uevents keep being
triggered. Will the system block, or do those processes eventually
lose some uevents?
3. Processes watch a directory via inotify, but some of them consume
events __too slowly__ while file operations keep being performed on
the directory. Will the system block, or do those processes eventually
lose some events?
--
Regards,
- cee1