Message-ID: <c62985530811211139l78529b8y1a7881e8693c2728@mail.gmail.com>
Date: Fri, 21 Nov 2008 20:39:58 +0100
From: "Frédéric Weisbecker" <fweisbec@...il.com>
To: "Ingo Molnar" <mingo@...e.hu>
Cc: "Steven Rostedt" <rostedt@...dmis.org>,
"Linux Kernel" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 3/3] tracing/function-return-tracer: add the overrun field
2008/11/18 Ingo Molnar <mingo@...e.hu>:
> hey, look at the bright side of it: whichever variant you pick, if it
> goes wrong, you'll always have someone else to blame for that
> unbelievably stupid suggestion ;-)
Actually, since the current implementation of the function-return
tracer works well with static return-stack arrays, I guess moving this
array into struct task_struct would not be a real problem. I think I
should start directly with the dynamic arrays.
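
Concretely, something like this is what I have in mind (the struct
layout and field names below are only placeholders, not a final
design):

    /* Placeholder sketch: a per-task, dynamically allocated return stack.
     * The names (ftrace_ret_stack, ret_stack, FTRACE_RETFUNC_DEPTH) are
     * only illustrative.
     */
    struct ftrace_ret_stack {
            unsigned long ret;              /* saved return address */
            unsigned long func;             /* traced function */
            unsigned long long calltime;    /* entry timestamp */
    };

    /* Added to struct task_struct: */
    struct ftrace_ret_stack *ret_stack;     /* NULL until allocated */
    int curr_ret_stack;                     /* top of stack, -1 when empty */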
But I'm facing a problem with the allocation of these arrays.
When the tracer is launched, I will hold tasklist_lock while
allocating/inserting the dynamic arrays.
So in that atomic context I will not be able to call kmalloc with
GFP_KERNEL, and I fear that using GFP_ATOMIC for possibly hundreds of
tasks would clearly be unacceptable.
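
To make it concrete, the problematic path would look roughly like this
(just a sketch; the depth constant is a placeholder):

    /* Sketch: walking all tasks under tasklist_lock to install the
     * arrays. Needs <linux/sched.h> and <linux/slab.h>.
     */
    struct task_struct *g, *t;

    read_lock(&tasklist_lock);
    do_each_thread(g, t) {
            /* We can't sleep here, so GFP_KERNEL is not allowed... */
            t->ret_stack = kmalloc(FTRACE_RETFUNC_DEPTH *
                                   sizeof(struct ftrace_ret_stack),
                                   GFP_ATOMIC);
            if (!t->ret_stack)
                    continue;       /* ...and atomic allocations may fail */
            t->curr_ret_stack = -1;
    } while_each_thread(g, t);
    read_unlock(&tasklist_lock);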
What do you think of this approach:
_ the tracer activates
_ a function enters the tracer entry hook. If the array is already
allocated for the current task, fine. If not, I launch a kernel thread
that will later allocate an array for that task (I will pass the pid
as a parameter), so the current task will soon be traced (see the
sketch after this list).
_ when a process forks, I can allocate a dynamic array for the new
task without any problem (I hope).
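
Roughly, the entry-hook side of this could look like the sketch below.
The function and constant names are placeholders, and I'm assuming the
hook runs in a context where spawning a kernel thread is allowed
(otherwise it would have to go through a work item):

    /* Placeholder sketch; needs <linux/kthread.h>, <linux/sched.h>,
     * <linux/slab.h>.
     */
    static int ftrace_alloc_retstack_thread(void *data)
    {
            pid_t pid = (pid_t)(long)data;
            struct ftrace_ret_stack *stack;
            struct task_struct *t;

            /* Process context here, so GFP_KERNEL is fine */
            stack = kmalloc(FTRACE_RETFUNC_DEPTH * sizeof(*stack),
                            GFP_KERNEL);
            if (!stack)
                    return -ENOMEM;

            rcu_read_lock();
            t = find_task_by_vpid(pid);
            if (t && !t->ret_stack) {
                    t->curr_ret_stack = -1;
                    /* publish the array only once it is initialized */
                    smp_wmb();
                    t->ret_stack = stack;
                    stack = NULL;
            }
            rcu_read_unlock();

            kfree(stack);   /* task gone or already equipped */
            return 0;
    }

    /* In the tracer entry hook: */
    if (unlikely(!current->ret_stack)) {
            /* A real version would have to avoid spawning this more
             * than once per task.
             */
            kthread_run(ftrace_alloc_retstack_thread,
                        (void *)(long)current->pid,
                        "retstack/%d", current->pid);
            return;         /* not traced yet, will be soon */
    }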
So some tasks will not be traced at the very beginning of tracing, but
they will all soon be traced...
There is perhaps a problem with tasks that sleep for a long time...
There will be some losses once they are woken up...
What do you think?