Message-Id: <20210618132636.ceef49ba0fd01bd26508f672@kernel.org>
Date: Fri, 18 Jun 2021 13:26:36 +0900
From: Masami Hiramatsu <mhiramat@...nel.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: "Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Anton Blanchard <anton@...abs.org>,
linux-kernel@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH 2/2] trace/kprobe: Remove limit on kretprobe maxactive
On Thu, 17 Jun 2021 13:07:13 -0400
Steven Rostedt <rostedt@...dmis.org> wrote:
> On Thu, 17 Jun 2021 22:04:34 +0530
> "Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com> wrote:
>
> > > 2. Move the kretprobe instance pool from kretprobe to struct task.
> > > This pool will allocate one page per task, and be shared among all
> > > kretprobes. The pool will be allocated when the 1st kretprobe
> > > is registered. maxactive will be kept for anyone who wants to
> > > use per-instance data. But since dynamic events don't use it,
> > > it will be removed from tracefs and perf.
> >
> > Won't this result in _more_ memory usage compared to what we have now?
>
> Maybe or maybe not. At least with this approach (or the function graph
> one), you will allocate enough for the environment involved. If there are
> thousands of tasks, then yes, it will allocate more memory. But if you are
> running thousands of tasks, you should have a lot of memory in the machine.
>
> If you are only running a few tasks, it will be less than the current
> approach.
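(To make (2) above a bit more concrete, here is a very rough sketch of
the per-task pool. None of these names exist in the kernel; the struct,
the helper and the task_struct field below are placeholders for this
discussion only, not the actual design.)

#include <linux/gfp.h>
#include <linux/sched.h>

/* Hypothetical: one page of shared kretprobe instance slots per task */
struct kretprobe_task_pool {
	unsigned long	inuse;		/* slot allocation bitmap */
	char		slots[];	/* instance slots carved from the page */
};

/* Hypothetical hook, called when the 1st kretprobe is registered */
static int kretprobe_alloc_task_pool(struct task_struct *tsk)
{
	struct kretprobe_task_pool *pool;

	pool = (void *)get_zeroed_page(GFP_KERNEL);	/* one page per task */
	if (!pool)
		return -ENOMEM;

	tsk->kretprobe_pool = pool;	/* hypothetical new task_struct field */
	return 0;
}

With this, the footprint scales with the number of tasks (one page each)
instead of with maxactive per registered kretprobe.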
Right, as Steven says, this depends on how many tasks you are running on
your machine.
Anyway, since you may not be sure how much maxactive is enough, you will
set maxactive high, and then the current approach can consume more memory
than the per-task pool would. Of course you can tune it by trial and
error, but that does not cover all cases, because the number of tasks can
increase while tracing. You might then need to re-configure it by checking
the nmissed count again.
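By the way, with the current code, a quick way to see whether maxactive
was too small is to check the kretprobe's nmissed counter after tracing.
A minimal sketch in the style of samples/kprobes/kretprobe_example.c
(the probed symbol and maxactive value are just examples):

#include <linux/kprobes.h>

static int my_ret_handler(struct kretprobe_instance *ri,
			  struct pt_regs *regs)
{
	return 0;	/* nothing to do here, we only care about nmissed */
}

static struct kretprobe my_kretprobe = {
	.handler	= my_ret_handler,
	.kp.symbol_name	= "kernel_clone",	/* example target */
	.maxactive	= 20,			/* guessed value */
};

/* ... register_kretprobe(&my_kretprobe), run the workload ... */

static void check_missed(void)
{
	unregister_kretprobe(&my_kretprobe);
	if (my_kretprobe.nmissed)
		pr_warn("missed %d instances, maxactive=%d is too small\n",
			my_kretprobe.nmissed, my_kretprobe.maxactive);
}

For dynamic events, the hit/miss counters are shown in the tracefs
kprobe_profile file instead.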
Thank you,
--
Masami Hiramatsu <mhiramat@...nel.org>