Message-ID: <20190116233029.xdaddrfehfjk4hrx@ast-mbp>
Date:   Wed, 16 Jan 2019 15:30:31 -0800
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Daniel Borkmann <daniel@...earbox.net>
Cc:     Alexei Starovoitov <ast@...nel.org>, davem@...emloft.net,
        jakub.kicinski@...ronome.com, netdev@...r.kernel.org,
        kernel-team@...com
Subject: Re: [PATCH bpf-next 1/9] bpf: introduce bpf_spin_lock

On Thu, Jan 17, 2019 at 12:16:44AM +0100, Daniel Borkmann wrote:
> On 01/16/2019 11:48 PM, Daniel Borkmann wrote:
> > On 01/16/2019 06:08 AM, Alexei Starovoitov wrote:
> [...]
> >> @@ -6096,6 +6226,11 @@ static int do_check(struct bpf_verifier_env *env)
> >>  					return -EINVAL;
> >>  				}
> >>  
> >> +				if (env->cur_state->active_spin_lock) {
> >> +					verbose(env, "bpf_spin_unlock is missing\n");
> >> +					return -EINVAL;
> >> +				}
> >> +
> >>  				if (state->curframe) {
> >>  					/* exit from nested function */
> >>  					env->prev_insn_idx = env->insn_idx;
> > 
> > I think, if I'm not mistaken, there should still be a possibility of causing a
> > deadlock: namely, if in the middle of the critical section I use an LD_ABS or
> > LD_IND instruction with an out-of-bounds index, I cause an implicit return 0
> > while the lock is still held. At least I don't see this being caught; a
> > test_verifier snippet for such a case would also be good.
> > 
> > Wouldn't we also need to mark the queued spinlock functions as notrace, so
> > that one cannot attach e.g. a kprobe to them and cause a deadlock?
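
A minimal test_verifier-style sketch of that LD_ABS case might look as follows.
This is only a hypothetical snippet: the instruction macros are the ones used in
tools/testing/selftests/bpf, the .fixup_map_spin_lock hook and the lock sitting
at offset 0 of the map value are assumptions, and the exact errstr depends on how
the verifier ends up rejecting it:

{
	"spin_lock: LD_ABS inside critical section (hypothetical)",
	.insns = {
	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),	/* LD_ABS needs skb ctx in r6 */
	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),	/* key = 0 on the stack */
	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
	BPF_LD_MAP_FD(BPF_REG_1, 0),		/* map fd patched in by the fixup */
	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),	/* r7 = map value */
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),	/* assume bpf_spin_lock at offset 0 */
	BPF_EMIT_CALL(BPF_FUNC_spin_lock),
	BPF_LD_ABS(BPF_B, 1000000),		/* oob offset: implicit return 0 at
						 * runtime while the lock is held
						 */
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
	BPF_EMIT_CALL(BPF_FUNC_spin_unlock),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.fixup_map_spin_lock = { 4 },		/* hypothetical spin_lock map fixup */
	.result = REJECT,			/* expected once the verifier catches this */
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
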
> 
> I think there may be another problem; I haven't verified it, but from reading
> the code it looks possible to have two programs which share a common array/hash
> with a spin_lock described in BTF. Program A is properly using spin_lock as in
> one of your examples. Program B uses map-in-map with the inner map being that
> same spin_lock map. When we return that fake inner_map_meta as reg->map_ptr, we
> can bypass the read/write restrictions on the spin_lock area which the verifier
> normally enforces. Meaning, map-in-map needs to be made aware of the spin_lock
> case as well.

Second great catch, thanks!
Indeed, inner_map_meta doesn't preserve all the fields from struct bpf_map.
It seems that long term we'll be able to support spin_lock in inner maps too,
but for now I'll disable it.
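
For reference, a minimal sketch of what that disable might look like, assuming it
is done in bpf_map_meta_alloc() in kernel/bpf/map_in_map.c and that a helper such
as map_value_has_spin_lock() (checking the BTF-derived spin_lock_off) is available:

	inner_map = __bpf_map_get(f);
	if (IS_ERR(inner_map))
		return inner_map;

	/* inner_map_meta would not carry the spin_lock offset, so the
	 * verifier could not protect the bpf_spin_lock area of the inner
	 * map's values; reject such inner maps for now.
	 */
	if (map_value_has_spin_lock(inner_map)) {
		fdput(f);
		return ERR_PTR(-ENOTSUPP);
	}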
