Message-ID: <20210211023511.GE286763@balbir-desktop>
Date: Thu, 11 Feb 2021 13:35:11 +1100
From: Balbir Singh <bsingharora@...il.com>
To: Weiping Zhang <zwp10758@...il.com>
Cc: sblbir@...zon.com, davem@...emloft.net,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v2] taskstats: add /proc/taskstats to fetch pid/tgid
status
> > Still not convinced about it, I played around with it. The reason we did not
> > use ioctl in the first place is to get the benefits of TLA with netlink, which
> For monitoring long-running processes, ioctl meets our requirements and is
> simpler than netlink for getting at the real user data (struct taskstats).
> The netlink mode has to construct/parse extra structures such as struct
> msgtemplate, struct nlmsghdr, and struct genlmsghdr; the ioctl mode only has
> one structure (struct taskstats).
> For complicated use cases the netlink mode is more suitable; for this simple
> use case the ioctl mode is more suitable. From the test results we can see
> that ioctl saves CPU, which is useful for building light-weight monitoring
> tools.
I think you're missing the value of TLAs and the advantages of async
send vs. recv.
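
To make the asymmetry concrete, here is a rough sketch of the netlink round
trip, modeled on the kernel's samples/getdelays.c (the msgtemplate layout and
helper macros follow that sample; error handling is trimmed for brevity):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/genetlink.h>
#include <linux/taskstats.h>

#define GENLMSG_DATA(glh) ((void *)((char *)NLMSG_DATA(glh) + GENL_HDRLEN))
#define NLA_DATA(na)      ((void *)((char *)(na) + NLA_HDRLEN))
#define NLA_NEXT(na)      ((struct nlattr *)((char *)(na) + NLA_ALIGN((na)->nla_len)))

struct msgtemplate {
	struct nlmsghdr n;
	struct genlmsghdr g;
	char buf[1024];
};

/* Build a genl request carrying a single attribute and send it. */
static int send_cmd(int fd, __u16 nlmsg_type, __u8 genl_cmd,
		    __u16 nla_type, const void *data, __u16 len)
{
	struct sockaddr_nl nladdr = { .nl_family = AF_NETLINK };
	struct msgtemplate msg = {
		.n.nlmsg_len   = NLMSG_LENGTH(GENL_HDRLEN),
		.n.nlmsg_type  = nlmsg_type,
		.n.nlmsg_flags = NLM_F_REQUEST,
		.g.cmd         = genl_cmd,
		.g.version     = TASKSTATS_GENL_VERSION,
	};
	struct nlattr *na = GENLMSG_DATA(&msg);

	na->nla_type = nla_type;
	na->nla_len = NLA_HDRLEN + len;
	memcpy(NLA_DATA(na), data, len);
	msg.n.nlmsg_len += NLMSG_ALIGN(na->nla_len);

	return sendto(fd, &msg, msg.n.nlmsg_len, 0,
		      (struct sockaddr *)&nladdr, sizeof(nladdr));
}

int main(void)
{
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_GENERIC);
	struct msgtemplate ans;
	struct nlattr *na;
	__u32 pid = getpid();
	__u16 family;

	/* Step 1: resolve the TASKSTATS family id via the genl controller. */
	send_cmd(fd, GENL_ID_CTRL, CTRL_CMD_GETFAMILY, CTRL_ATTR_FAMILY_NAME,
		 TASKSTATS_GENL_NAME, strlen(TASKSTATS_GENL_NAME) + 1);
	recv(fd, &ans, sizeof(ans), 0);
	na = GENLMSG_DATA(&ans);      /* CTRL_ATTR_FAMILY_NAME */
	na = NLA_NEXT(na);            /* CTRL_ATTR_FAMILY_ID */
	family = *(__u16 *)NLA_DATA(na);

	/* Step 2: request stats for one pid and walk the nested reply. */
	send_cmd(fd, family, TASKSTATS_CMD_GET, TASKSTATS_CMD_ATTR_PID,
		 &pid, sizeof(pid));
	recv(fd, &ans, sizeof(ans), 0);
	na = GENLMSG_DATA(&ans);      /* TASKSTATS_TYPE_AGGR_PID (nested) */
	na = NLA_DATA(na);            /* TASKSTATS_TYPE_PID */
	na = NLA_NEXT(na);            /* TASKSTATS_TYPE_STATS */
	printf("read_bytes: %llu\n", (unsigned long long)
	       ((struct taskstats *)NLA_DATA(na))->read_bytes);
	close(fd);
	return 0;
}
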
> > ioctls miss. IMHO, the overhead is not very significant even for
> > 10,000 processes in your experiment. I am open to considering enhancing the
> > interface to take a set of pids.
> Collecting data in batch mode is a good approach; I think we can support it
> in both netlink and ioctl modes.
>
> Adding an ioctl gives userspace a choice and makes user code simpler; it
> seems to do no harm to the taskstats framework, so I'd like to support it.
>
> Thanks very much
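
For contrast, the ioctl shape being argued for here might look like the sketch
below. The command name, the argument struct, and the ioctl number are
illustrative assumptions, not the RFC's actual ABI; the point is that the
caller only ever touches a pid and struct taskstats (note also how easily the
'T' magic chosen here could collide with an existing driver, which feeds the
fragility argument below):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/taskstats.h>

/* Hypothetical request layout: pid in, stats out. */
struct taskstats_ioc_arg {
	__u32 pid;
	struct taskstats stats;
};

/* Hypothetical ioctl number; the real patch defines its own. */
#define TASKSTATS_IOC_GET_PID _IOWR('T', 0x01, struct taskstats_ioc_arg)

int main(void)
{
	struct taskstats_ioc_arg arg = { .pid = getpid() };
	int fd = open("/proc/taskstats", O_RDONLY);

	if (fd < 0 || ioctl(fd, TASKSTATS_IOC_GET_PID, &arg) < 0) {
		perror("taskstats ioctl");
		return 1;
	}
	printf("read_bytes: %llu\n", (unsigned long long)arg.stats.read_bytes);
	close(fd);
	return 0;
}
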
In general the ioctl interface is quite fragile: conflicts in ioctl numbers
and the inability to type-check the parameters passed in and out make it a
poor fit. Not to mention versioning issues: with the genl interface we have
the flexibility to version requests. I would really hate to have two ways to
do the same thing.
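
For the record, the version is carried in the genl header itself, so a handler
can branch on it server-side. A minimal kernel-side sketch, assuming a
hypothetical future v2 of the request format (this is not current taskstats
code):

#include <net/genetlink.h>
#include <linux/taskstats.h>

static int taskstats_user_cmd(struct sk_buff *skb, struct genl_info *info)
{
	/* Reject requests newer than we understand; a real v2 handler
	 * would parse the additional attributes instead. */
	if (info->genlhdr->version > TASKSTATS_GENL_VERSION)
		return -EOPNOTSUPP;

	/* ... existing v1 request handling ... */
	return 0;
}
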
The overhead is there, but do you consider 20ms per 10,000 calls significant?
Does it materially affect your use case?
Balbir Singh