Date:   Thu, 17 Jan 2019 01:21:32 +0100
From:   Daniel Borkmann <daniel@...earbox.net>
To:     Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc:     Alexei Starovoitov <ast@...nel.org>, davem@...emloft.net,
        jakub.kicinski@...ronome.com, netdev@...r.kernel.org,
        kernel-team@...com
Subject: Re: [PATCH bpf-next 1/9] bpf: introduce bpf_spin_lock

On 01/17/2019 12:30 AM, Alexei Starovoitov wrote:
> On Thu, Jan 17, 2019 at 12:16:44AM +0100, Daniel Borkmann wrote:
>> On 01/16/2019 11:48 PM, Daniel Borkmann wrote:
>>> On 01/16/2019 06:08 AM, Alexei Starovoitov wrote:
>> [...]
>>>> @@ -6096,6 +6226,11 @@ static int do_check(struct bpf_verifier_env *env)
>>>>  					return -EINVAL;
>>>>  				}
>>>>  
>>>> +				if (env->cur_state->active_spin_lock) {
>>>> +					verbose(env, "bpf_spin_unlock is missing\n");
>>>> +					return -EINVAL;
>>>> +				}
>>>> +
>>>>  				if (state->curframe) {
>>>>  					/* exit from nested function */
>>>>  					env->prev_insn_idx = env->insn_idx;
>>>
>>> I think, if I'm not mistaken, there should still be a possibility of causing
>>> a deadlock: if in the middle of the critical section I use an LD_ABS or
>>> LD_IND instruction with an oob index, I trigger an implicit return 0 while
>>> the lock is still held. At least I don't see this being caught; a
>>> test_verifier snippet for such a case would also be good.
>>>
>>> Wouldn't we also need to mark the queued spinlock functions as notrace, so
>>> that one cannot e.g. attach a kprobe to them and cause a deadlock that way?
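
For the LD_ABS / LD_IND case, something along the lines of the following
test_verifier entry could exercise it. This is only an untested sketch: the
.fixup_map_spin_lock name and the eventual error string are assumptions on my
side, the map value is assumed to have the struct bpf_spin_lock at offset 0,
and REJECT is the behaviour we'd want once the verifier catches it:

	{
		"spin_lock: LD_ABS oob inside critical section",
		.insns = {
		/* R6 = ctx, LD_ABS implicitly needs the skb there */
		BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
		/* key = 0 on stack, look up the map value */
		BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
		BPF_LD_MAP_FD(BPF_REG_1, 0),
		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
		/* R7 = map value, lock assumed at offset 0 */
		BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
		BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
		/* oob packet access -> implicit return 0 with lock held */
		BPF_LD_ABS(BPF_B, 0x7fffffff),
		BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
		BPF_MOV64_IMM(BPF_REG_0, 0),
		BPF_EXIT_INSN(),
		},
		.fixup_map_spin_lock = { 4 },
		.result = REJECT,
		/* .errstr = whatever message the verifier ends up emitting */
		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	},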
>>
>> I think there may be another problem: I haven't verified it, but from reading
>> the code it might be possible to have two programs which share a common
>> array/hash with a spin_lock provided in BTF. Program A properly uses the
>> spin_lock as in one of your examples. Program B uses map-in-map with the
>> inner map being that same map containing the spin_lock. When we return the
>> fake inner_map_meta as reg->map_ptr, we can bypass the read/write
>> restrictions on the spin_lock area which the verifier normally enforces.
>> Meaning, map-in-map needs to be made aware of the spin_lock case as well.
> 
> 2nd great catch. thanks!
> Indeed inner_map_meta doesn't preserve all the fields from struct bpf_map.
> It seems long term we'll be able to support spin_lock in inner map too,
> but for now I'll disable it.
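
As an untested sketch of what disabling it could look like, e.g. in
bpf_map_meta_alloc() in kernel/bpf/map_in_map.c, assuming a helper along the
lines of map_value_has_spin_lock() from this series:

	inner_map = __bpf_map_get(f);
	if (IS_ERR(inner_map))
		return inner_map;

	/* inner_map_meta copies only a subset of struct bpf_map, so the
	 * verifier has no spin_lock_off to enforce for the inner map;
	 * reject bpf_spin_lock in inner maps until it does.
	 */
	if (map_value_has_spin_lock(inner_map)) {
		fdput(f);
		return ERR_PTR(-ENOTSUPP);
	}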

There's also one more potential issue in pruning, I _think_. In regsafe() we
make the basic assumption that for PTR_TO_MAP_VALUE the id has been zeroed,
which is true up to this point, and as such we prune state without taking the
id into account. The only other case we have is PTR_TO_SOCKET{,_OR_NULL},
which only allows exact matches. Potentially there could be a case where you
have two map value pointers from different branches, but with the same basic
map properties, reading/writing map data, and where the first run through the
path without spin_lock was considered safe, such that we would then get a
match in regsafe() as well and could potentially prune the access? Definitely
worth adding such a test case to test_verifier to make sure.
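
For reference, the PTR_TO_MAP_VALUE case in regsafe() currently reads roughly
as below (comments mine): everything up to 'id' (type, map_ptr, off) is
memcmp()'d and the bounds are range-checked, but the id itself is never
compared, so nothing lock-related distinguishes the two states:

	case PTR_TO_MAP_VALUE:
		/* If the new min/max/var_off satisfy the old ones and
		 * everything else up to 'id' matches, the states are
		 * considered equivalent and the path can be pruned.
		 */
		return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
		       range_within(rold, rcur) &&
		       tnum_in(rold->var_off, rcur->var_off);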
