Date: Tue, 07 Jul 2015 10:17:32 -0600
From: David Ahern <dsahern@...il.com>
To: Andy Lutomirski <luto@...capital.net>,
Andrew Vagin <avagin@...n.com>
CC: Andrey Vagin <avagin@...nvz.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Linux API <linux-api@...r.kernel.org>,
Oleg Nesterov <oleg@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Cyrill Gorcunov <gorcunov@...nvz.org>,
Pavel Emelyanov <xemul@...allels.com>,
Roger Luethi <rl@...lgate.ch>, Arnd Bergmann <arnd@...db.de>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Pavel Odintsov <pavel.odintsov@...il.com>
Subject: Re: [PATCH 0/24] kernel: add a netlink interface to get information
about processes (v2)
On 7/7/15 9:56 AM, Andy Lutomirski wrote:
> Netlink is fine for these use cases (if they were related to the
> netns, not the pid ns or user ns), and it works. It's still tedious
> -- I bet that if you used a syscall, the user code would be
> considerably shorter, though. :)
>
> How would this be a problem if you used plain syscalls? The user
> would make a request, and the syscall would tell the user that their
> result buffer was too small if it was, in fact, too small.
It will be impossible to tell a user what sized buffer is needed. The
size is largely a function of the number of tasks and the number of
maps per thread group, and both of those will be changing. With the
growing size of systems (I have seen sparc systems with 1024 cpus) the
workload can be tens of thousands of tasks, each with a lot of maps
(e.g., java workloads). That amounts to a non-trivial amount of data
that has to be pushed to userspace.
One of the benefits of the netlink approach is that it breaks the data
across multiple messages and lets the consumer pick up where it left
off. That infrastructure is already in place.
David