Message-ID: <2144178.MA8iAIlUE0@blindfold>
Date: Mon, 16 Oct 2017 23:10:37 +0200
From: Richard Weinberger <richard@....at>
To: Daniel Borkmann <daniel@...earbox.net>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
ast@...nel.org
Subject: Re: [PATCH 3/3] bpf: Make sure that ->comm does not change under us.
On Monday, 16 October 2017, 23:02:06 CEST, Daniel Borkmann wrote:
> On 10/16/2017 10:55 PM, Richard Weinberger wrote:
> > On Monday, 16 October 2017, 22:50:43 CEST, Daniel Borkmann wrote:
> >>>  	struct task_struct *task = current;
> >>>
> >>> +	task_lock(task);
> >>>  	strncpy(buf, task->comm, size);
> >>> +	task_unlock(task);
> >>
> >> Wouldn't this potentially lead to a deadlock? E.g. you attach a BPF prog
> >> to task_lock() / spin_lock() / etc., and the prog then triggers
> >> bpf_get_current_comm(), taking the lock again ...
> >
> > Yes, but doesn't the same apply to the use case when I attach to strncpy()
> > and run bpf_get_current_comm()?
>
> You mean due to recursion? In that case trace_call_bpf() would bail out
> due to the bpf_prog_active counter.
Ah, that's true.
So, when someone wants to use bpf_get_current_comm() while tracing task_lock(),
we have a problem. I agree.
On the other hand, without the lock the function may return a partially
updated ->comm.
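
For context: set_task_comm() already updates ->comm under task_lock(), which
is the writer side this pairs with. With the change applied, the helper body
would read roughly like this (a sketch; the surrounding lines are assumed to
match the current bpf_get_current_comm() in kernel/bpf/helpers.c):

BPF_CALL_2(bpf_get_current_comm, char *, buf, u32, size)
{
	struct task_struct *task = current;

	if (unlikely(!task))
		goto err_clear;

	/* ->comm is written under task_lock() (see set_task_comm()),
	 * so take it here as well to avoid copying a half-updated name.
	 */
	task_lock(task);
	strncpy(buf, task->comm, size);
	task_unlock(task);

	/* Guarantee NUL termination if task->comm fills the buffer. */
	buf[size - 1] = 0;
	return 0;
err_clear:
	memset(buf, 0, size);
	return -EINVAL;
}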
Thanks,
//richard