Message-ID: <20240711150054.GA3285@noisy.programming.kicks-ass.net>
Date: Thu, 11 Jul 2024 17:00:54 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Oleg Nesterov <oleg@...hat.com>
Cc: mingo@...nel.org, andrii@...nel.org, linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org, rostedt@...dmis.org,
mhiramat@...nel.org, jolsa@...nel.org, clm@...a.com,
paulmck@...nel.org
Subject: Re: [PATCH v2 11/11] perf/uprobe: Add uretprobe timer
On Thu, Jul 11, 2024 at 03:19:19PM +0200, Oleg Nesterov wrote:
> Not sure I read this patch correctly, but at first glance it looks
> suspicious..
>
> On 07/11, Peter Zijlstra wrote:
> >
> > +static void return_instance_timer(struct timer_list *timer)
> > +{
> > +	struct uprobe_task *utask = container_of(timer, struct uprobe_task, ri_timer);
> > +	task_work_add(utask->task, &utask->ri_task_work, TWA_SIGNAL);
> > +}
>
> What if utask->task sleeps in TASK_STOPPED/TASK_TRACED state before
> return from the ret-probed function?
>
> In this case it won't react to TWA_SIGNAL until debugger or SIGCONT
> wakes it up.
Or FROZEN, etc. Yeah.
> ---------------------------------------------------------------------------
> And it seems that even task_work_add() itself is not safe...
>
> Suppose we have 2 ret-probed functions
>
> void f2() { ... }
> void f1() { ...; f2(); }
>
> A task T calls f1(), hits the bp, and calls prepare_uretprobe() which does
>
> mod_timer(&utask->ri_timer, jiffies + HZ);
>
> Then later it calls f2() and the pending timer expires after it enters the
> kernel, but before the next prepare_uretprobe() -> mod_timer().
>
> In this case ri_task_work is already queued and the timer is pending again.
You're saying we can hit a double enqueue, right? Yeah, that's a
problem. But that can be fairly easily rectified.
> Now. Even if T goes to the exit_to_user_mode_loop() path immediately, in
> theory nothing can guarantee that it will call get_signal/task_work_run
> in less than 1 second, it can be preempted.
>
> But T can sleep in xol_take_insn_slot() before return from handle_swbp(),
> and this is not so theoretical.
So the assumption is that kernel code makes forward progress. If we get
preempted, we'll get scheduled again. If the machine is so overloaded
this takes more than a second, stretching the SRCU period is the least
of your problems.
The same goes for sleeps: it'll get a wakeup.
The only thing that is out of our control is userspace. And yes, I had
not considered STOPPED/TRACED/FROZEN.
So the reason I did that task_work is that the return_instance list
is strictly per-task (current), so a random timer cannot safely poke
at it. And barring those pesky states, it does as desired.
Let me ponder that a little, I *can* make it work, but all 'solutions'
I've come up with so far are really rather vile.