Message-ID: <db1be25a-f921-0e81-3f09-95ca223233b6@iogearbox.net>
Date: Thu, 17 Jan 2019 00:16:44 +0100
From: Daniel Borkmann <daniel@...earbox.net>
To: Alexei Starovoitov <ast@...nel.org>, davem@...emloft.net
Cc: jakub.kicinski@...ronome.com, netdev@...r.kernel.org,
kernel-team@...com
Subject: Re: [PATCH bpf-next 1/9] bpf: introduce bpf_spin_lock
On 01/16/2019 11:48 PM, Daniel Borkmann wrote:
> On 01/16/2019 06:08 AM, Alexei Starovoitov wrote:
[...]
>> @@ -6096,6 +6226,11 @@ static int do_check(struct bpf_verifier_env *env)
>> return -EINVAL;
>> }
>>
>> + if (env->cur_state->active_spin_lock) {
>> + verbose(env, "bpf_spin_unlock is missing\n");
>> + return -EINVAL;
>> + }
>> +
>> if (state->curframe) {
>> /* exit from nested function */
>> env->prev_insn_idx = env->insn_idx;
>
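[For reference, roughly the kind of program this hunk should now reject -- an untested sketch; map definition and key setup elided, helper and struct names as in the patch:]

```c
/* Sketch: value type carries a bpf_spin_lock in BTF. */
struct hmap_elem {
	struct bpf_spin_lock lock;
	int cnt;
};

/* ... hmap defined with struct hmap_elem values, lock offset in BTF ... */

SEC("tc")
int bad_prog(struct __sk_buff *skb)
{
	int key = 0;
	struct hmap_elem *val = bpf_map_lookup_elem(&hmap, &key);

	if (!val)
		return 0;
	bpf_spin_lock(&val->lock);
	val->cnt++;
	return 0;	/* BPF_EXIT with active_spin_lock set -> -EINVAL */
}
```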
> I think, if I'm not mistaken, there should still be a possibility of causing a
> deadlock, namely if in the middle of the critical section I'm using an LD_ABS
> or LD_IND instruction with an oob index such that I cause an implicit return 0
> while the lock is held. At least I don't see this being caught; probably for
> such a case a test_verifier snippet would be good as well.
>
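[Untested test_verifier-style sketch of the pattern meant above; surrounding map lookup and register setup elided, helper ids as introduced by the patch:]

```c
	/* ... lookup elem, R1 = &val->lock ... */
	BPF_EMIT_CALL(BPF_FUNC_spin_lock),
	/* oob offset: LD_ABS fails and causes an implicit return 0
	 * while active_spin_lock is still set
	 */
	BPF_LD_ABS(BPF_B, 0x7fffffff),
	/* never reached */
	BPF_EMIT_CALL(BPF_FUNC_spin_unlock),
	BPF_EXIT_INSN(),
```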
> Wouldn't we also need to mark the queued spinlock functions as notrace, so that
> e.g. one cannot attach to them from a kprobe and cause a deadlock?
I think there may be another problem (haven't verified, but from reading the
code it looks possible): suppose I have two programs sharing a common
array/hash map with a spin_lock provided in BTF. Program A is properly using
the spin_lock as in one of your examples. Program B is using map in map, with
the inner map being that same map containing the spin_lock. When we return that
fake inner_map_meta as reg->map_ptr, then we can bypass any read/write
restrictions into the spin_lock area which the verifier normally prevents.
Meaning, map in map needs to be made aware of the spin_lock case as well.
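[Untested sketch of what program B could look like; outer_map definition and BTF setup elided, names hypothetical:]

```c
struct hmap_elem {
	struct bpf_spin_lock lock;
	int data;
};

SEC("tc")
int prog_b(struct __sk_buff *skb)
{
	int key = 0;
	struct hmap_elem *val;
	/* lookup through map in map; if inner_map_meta doesn't carry the
	 * spin_lock BTF info, the access checks below may not apply
	 */
	void *inner = bpf_map_lookup_elem(&outer_map, &key);

	if (!inner)
		return 0;
	val = bpf_map_lookup_elem(inner, &key);
	if (!val)
		return 0;
	/* direct write into the lock area -- normally rejected */
	*(int *)&val->lock = 42;
	return 0;
}
```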
Thanks,
Daniel