Message-ID: <CAADnVQ+0kTYr2azBW6mDSU6JwWtjWRZVaJQ=ZFmSs3JxCFrrRg@mail.gmail.com>
Date: Mon, 16 Oct 2017 15:10:06 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Richard Weinberger <richard@....at>
Cc: Daniel Borkmann <daniel@...earbox.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>
Subject: Re: [PATCH 3/3] bpf: Make sure that ->comm does not change under us.
On Mon, Oct 16, 2017 at 2:10 PM, Richard Weinberger <richard@....at> wrote:
> Am Montag, 16. Oktober 2017, 23:02:06 CEST schrieb Daniel Borkmann:
>> On 10/16/2017 10:55 PM, Richard Weinberger wrote:
>> > Am Montag, 16. Oktober 2017, 22:50:43 CEST schrieb Daniel Borkmann:
>> >>> struct task_struct *task = current;
>> >>>
>> >>> + task_lock(task);
>> >>>
>> >>> strncpy(buf, task->comm, size);
>> >>>
>> >>> + task_unlock(task);
>> >>
>> >> Wouldn't this potentially lead to a deadlock? E.g. you attach yourself
>> >> to task_lock() / spin_lock() / etc, and then the BPF prog triggers the
>> >> bpf_get_current_comm() taking the lock again ...
>> >
>> > Yes, but doesn't the same apply to the use case when I attach to strncpy()
>> > and run bpf_get_current_comm()?
>>
>> You mean due to recursion? In that case trace_call_bpf() would bail out
>> due to the bpf_prog_active counter.
>
> Ah, that's true.
> So, when someone wants to use bpf_get_current_comm() while tracing task_lock,
> we have a problem. I agree.
> On the other hand, without locking the function may return wrong results.
It will surely race with somebody else setting the task comm, and that's fine.
All of bpf tracing is read-only, so locks are only allowed inside bpf core
bits like maps. Taking core locks like task_lock() is quite scary.
bpf scripts rely on bpf_probe_read() of all sorts of kernel fields,
so reading comm here without a lock is fine.