Message-ID: <650bcc10d2735_7d31e208e7@john.notmuch>
Date: Wed, 20 Sep 2023 21:52:32 -0700
From: John Fastabend <john.fastabend@...il.com>
To: Martin KaFai Lau <martin.lau@...ux.dev>,
John Fastabend <john.fastabend@...il.com>
Cc: netdev@...r.kernel.org,
bpf@...r.kernel.org,
Kui-Feng Lee <sinquersw@...il.com>,
Ma Ke <make_ruc2021@....com>,
jakub@...udflare.com,
davem@...emloft.net,
edumazet@...gle.com,
kuba@...nel.org,
pabeni@...hat.com
Subject: Re: [PATCH] bpf, sockmap: fix deadlocks in the sockhash and sockmap
Martin KaFai Lau wrote:
> On 9/20/23 11:07 AM, John Fastabend wrote:
> >>> pay much attention to their deletion. Compared with hash
> >>> maps, sockhash only provides spin_lock_bh protection.
> >>> This causes it to appear to have self-locking behavior
> >>> in the interrupt context, as CVE-2023-0160 points out.
> >
> > The CVE is a bit exaggerated in my opinion. I'm not sure why
> > anyone would delete an element from interrupt context. But,
> > OK, if someone wrote such a thing we shouldn't lock up.
>
> This should only happen in a tracing program?
> Not sure if it would be too drastic to disallow tracing programs from
> using bpf_map_delete_elem at load time now.
I don't think we have any users from tracing programs, but
there might be something out there?
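
If we did go that route, the check would presumably live in the
verifier. A rough sketch (untested; the placement in
check_map_func_compatibility() and the exact program-type test are
my guesses, and it would likely need to cover kprobe/tracepoint
program types too):

	case BPF_MAP_TYPE_SOCKMAP:
	case BPF_MAP_TYPE_SOCKHASH:
		/* untested sketch: reject map_delete_elem from tracing
		 * progs at load time; the condition is a guess and
		 * probably needs BPF_PROG_TYPE_KPROBE etc. as well
		 */
		if (func_id == BPF_FUNC_map_delete_elem &&
		    resolve_prog_type(env->prog) == BPF_PROG_TYPE_TRACING)
			goto error;
		break;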
>
> A followup question: if sockmap can be accessed from a tracing program, does
> it need an in_nmi() check?
I think we could just do 'if (in_nmi()) return -EOPNOTSUPP;'
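
Roughly like this (untested sketch; the early-return placement in
sock_hash_delete_elem() is my assumption):

	static long sock_hash_delete_elem(struct bpf_map *map, void *key)
	{
		/* untested sketch: refuse to take the bucket lock from
		 * NMI context rather than risk deadlocking on it
		 */
		if (unlikely(in_nmi()))
			return -EOPNOTSUPP;
		...
	}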
>
> >>> hash = sock_hash_bucket_hash(key, key_size);
> >>> bucket = sock_hash_select_bucket(htab, hash);
> >>>
> >>> - spin_lock_bh(&bucket->lock);
> >>> + spin_lock_irqsave(&bucket->lock, flags);
> >
> > The hashtab code's htab_lock_bucket() also does a preempt_disable()
> > followed by raw_spin_lock_irqsave(). Do we need this as well
> > to handle the CONFIG_PREEMPT cases?
>
> iirc, the preempt_disable in htab is for CONFIG_PREEMPT, but it is there
> for the __this_cpu_inc_return, to avoid unnecessary lock failures due to
> preemption, so it is probably not needed here. See commit 2775da216287
> ("bpf: Disable preemption when increasing per-cpu map_locked").
>
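
(For context, the htab pattern being referenced looks roughly like
this; paraphrased from memory from htab_lock_bucket() in
kernel/bpf/hashtab.c, so details may be off:)

	preempt_disable();
	local_irq_save(flags);
	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
		/* another path on this CPU already holds a bucket lock,
		 * so fail with -EBUSY instead of deadlocking
		 */
		__this_cpu_dec(*(htab->map_locked[hash]));
		local_irq_restore(flags);
		preempt_enable();
		return -EBUSY;
	}
	raw_spin_lock(&b->raw_lock);
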
> If map_delete can be called from any tracing context, the raw_spin_lock_xxx
> version is probably needed though. Otherwise, a splat (e.g. from
> PROVE_RAW_LOCK_NESTING) could be triggered.
Yep, I'll look at it, I guess. We should probably either block
access from tracing programs or add some tests.
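
If we do go the raw spinlock route, I'd expect the conversion to look
roughly like this (untested sketch; struct and function names as in
net/core/sock_map.c):

	struct bpf_shtab_bucket {
		struct hlist_head head;
		raw_spinlock_t lock;	/* was: spinlock_t lock */
	};

	static long sock_hash_delete_elem(struct bpf_map *map, void *key)
	{
		...
		raw_spin_lock_irqsave(&bucket->lock, flags);
		/* unlink and free the element, as today */
		raw_spin_unlock_irqrestore(&bucket->lock, flags);
		...
	}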