Message-ID: <650b34ca2b41c_4e8122080@john.notmuch>
Date: Wed, 20 Sep 2023 11:07:06 -0700
From: John Fastabend <john.fastabend@...il.com>
To: Kui-Feng Lee <sinquersw@...il.com>, 
 Ma Ke <make_ruc2021@....com>, 
 john.fastabend@...il.com, 
 jakub@...udflare.com, 
 davem@...emloft.net, 
 edumazet@...gle.com, 
 kuba@...nel.org, 
 pabeni@...hat.com
Cc: netdev@...r.kernel.org, 
 bpf@...r.kernel.org
Subject: Re: [PATCH] bpf, sockmap: fix deadlocks in the sockhash and sockmap

Kui-Feng Lee wrote:
> 
> 
> On 9/18/23 02:36, Ma Ke wrote:
> > It seems that elements in sockhash are rarely actively
> > deleted by users or eBPF programs. Therefore, we do not

We never delete them in our usage. I think soon we will have
support to run BPF programs without a map at all, removing these
concerns for many use cases.

> > pay much attention to their deletion. Compared with hash
> > maps, sockhash only provides spin_lock_bh protection.
> > This causes it to appear to have self-locking behavior
> > in the interrupt context, as CVE-2023-0160 points out.

The CVE is a bit exaggerated in my opinion. I'm not sure why
anyone would delete an element from interrupt context. But,
OK, if someone wrote such a thing we shouldn't lock up.

> > 
> > Signed-off-by: Ma Ke <make_ruc2021@....com>
> > ---
> >   net/core/sock_map.c | 5 +++--
> >   1 file changed, 3 insertions(+), 2 deletions(-)
> > 
> > diff --git a/net/core/sock_map.c b/net/core/sock_map.c
> > index cb11750b1df5..1302d484e769 100644
> > --- a/net/core/sock_map.c
> > +++ b/net/core/sock_map.c
> > @@ -928,11 +928,12 @@ static long sock_hash_delete_elem(struct bpf_map *map, void *key)
> >   	struct bpf_shtab_bucket *bucket;
> >   	struct bpf_shtab_elem *elem;
> >   	int ret = -ENOENT;
> > +	unsigned long flags;
> 
> Keep reverse xmas tree ordering?
> 
> >   
> >   	hash = sock_hash_bucket_hash(key, key_size);
> >   	bucket = sock_hash_select_bucket(htab, hash);
> >   
> > -	spin_lock_bh(&bucket->lock);
> > +	spin_lock_irqsave(&bucket->lock, flags);

The hashtab code htab_lock_bucket() also does a preempt_disable()
followed by raw_spin_lock_irqsave(). Do we need this as well
to handle the CONFIG_PREEMPT cases?

I'll also take a look, but figured I would post the question given
I won't likely get time to check until tonight/tomorrow.

Also, an earlier conversion to irqsave ran into a syzbot crash; won't
this do the same?

> >   	elem = sock_hash_lookup_elem_raw(&bucket->head, hash, key, key_size);
> >   	if (elem) {
> >   		hlist_del_rcu(&elem->node);
> > @@ -940,7 +941,7 @@ static long sock_hash_delete_elem(struct bpf_map *map, void *key)
> >   		sock_hash_free_elem(htab, elem);
> >   		ret = 0;
> >   	}
> > -	spin_unlock_bh(&bucket->lock);
> > +	spin_unlock_irqrestore(&bucket->lock, flags);
> >   	return ret;
> >   }
> >   
