Message-ID: <CADa=RywTEdbTdHHi=Qh6MRrHRUDUKZfPGU---Ea+-RF7+t+o+A@mail.gmail.com>
Date: Sat, 1 Apr 2023 11:57:27 -0700
From: Joe Stringer <joe@...valent.com>
To: Martin KaFai Lau <martin.lau@...ux.dev>
Cc: linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
ast@...nel.org, corbet@....net, bagasdotme@...il.com,
maxtram95@...il.com, bpf@...r.kernel.org
Subject: Re: [PATCH bpf-next v3] docs/bpf: Add LRU internals description and graph

On Thu, Mar 16, 2023 at 11:05 PM Martin KaFai Lau <martin.lau@...ux.dev> wrote:
>
> On 3/15/23 6:54 PM, Joe Stringer wrote:
> > On Tue, Mar 14, 2023 at 12:31 PM Martin KaFai Lau <martin.lau@...ux.dev> wrote:
> >>
> >> Maybe a note somewhere to mention why it will still fail to
> >> shrink the list because the htab_lock_bucket() have detected potential
> >> deadlock/recursion which is a very unlikely case.
> >
> > I missed the "shrink the list" link here since it seems like this
> > could happen for any combination of update or delete elems for the
> > same bucket. But yeah given that also needs to happen on the same CPU,
> > it does seem very unlikely...
>
> shrink should try to shrink a couple of stale elems which are likely hashed to
> different buckets. If shrink hits htab_lock_bucket() EBUSY on all stale elems,
> shrink could fail but unlikely.
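
Right. As a rough user-space model of that (a sketch only, the
model_* helpers and the counts are made up, this isn't the hashtab
code), the shrink pass only fails if every candidate's bucket attempt
reports -EBUSY:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CANDIDATES 3         /* stale elems one shrink pass considers */
#define NR_BUCKETS    8

static bool bucket_busy[NR_BUCKETS];    /* stands in for the per-cpu map_locked check */

/* stand-in for htab_lock_bucket(): a single attempt, no retry */
static int model_lock_bucket(int bucket)
{
        return bucket_busy[bucket] ? -EBUSY : 0;
}

/* stand-in for the shrink pass: it succeeds if any candidate's bucket is free */
static int model_shrink(const int candidate_bucket[NR_CANDIDATES])
{
        for (int i = 0; i < NR_CANDIDATES; i++) {
                if (!model_lock_bucket(candidate_bucket[i]))
                        return 0;       /* freed one stale elem */
        }
        return -EBUSY;                  /* every candidate busy: the unlikely case */
}

int main(void)
{
        int candidates[NR_CANDIDATES] = { 1, 4, 6 };

        bucket_busy[1] = true;          /* only one of the buckets is contended */
        printf("shrink: %d\n", model_shrink(candidates));
        return 0;
}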

The failure case I had in mind is one where shrinking succeeds and we
find an LRU node during the htab_map_update_elem() call through
prealloc_lru_pop(), but then immediately afterwards the update makes a
direct htab_lock_bucket() call, which has just one chance to succeed
depending on whether this CPU races against some other user of the
bucket lock. Still seems somewhat rare, but feasible to hit.
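
Roughly, as a user-space sketch of that path (simplified, the model_*
names are made up, this is not the actual update code):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

static bool cpu_already_in_bucket;      /* stand-in for this cpu's map_locked counter */
static bool free_node_available = true;

/* stand-in for prealloc_lru_pop(): may shrink, then hands back a free node */
static bool model_lru_pop(void)
{
        return free_node_available;
}

/* stand-in for htab_lock_bucket(): a single attempt, no retry */
static int model_lock_bucket(void)
{
        return cpu_already_in_bucket ? -EBUSY : 0;
}

static int model_lru_update(void)
{
        if (!model_lru_pop())
                return -ENOMEM;         /* no node even after shrinking */

        /*
         * A node was found, but the bucket lock is tried exactly once.
         * If this cpu is already inside the same bucket, the whole
         * update fails with -EBUSY despite the successful pop.
         */
        return model_lock_bucket();
}

int main(void)
{
        cpu_already_in_bucket = true;   /* simulate the unlucky case */
        printf("update: %d\n", model_lru_update());
        return 0;
}
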
> > Could there be a case something like "userspace process is touching that bucket,
> > gets interrupted, then the same CPU runs an eBPF program that attempts to
> > update/delete elements in the same bucket"?
>
> raw_spin_lock_irqsave() is done after the percpu counter, so there is a gap but
> not sure if it matters though unless the production use case can really hit it.

Yeah, unfortunately I'm going off an incident from last year, and I
don't have that level of visibility into the failure scenario in a
prod-like environment today.
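
For reference, a toy model of the gap in question (user-space C,
simplified, names made up): an interrupt landing between the per-cpu
counter bump and the irq-disabling lock sees the counter already
raised and gets -EBUSY:

#include <errno.h>
#include <stdio.h>

static int map_locked;  /* stand-in for this cpu's map_locked counter */

/* what a program run from an interrupt on this cpu sees for the same bucket */
static int nested_lock_attempt(void)
{
        return map_locked ? -EBUSY : 0;
}

int main(void)
{
        /*
         * Syscall-side htab_lock_bucket(), split into its two steps.
         * Step 1: bump the per-cpu counter (interrupts still enabled).
         */
        map_locked++;

        /*
         * The gap: before step 2 (the irq-disabling spinlock) runs, an
         * interrupt can fire on this cpu, and a program in it touching
         * the same bucket sees the raised counter and gets -EBUSY:
         */
        printf("nested attempt in the gap: %d\n", nested_lock_attempt());

        /* step 2 would be the raw_spin_lock_irqsave() equivalent */
        map_locked--;
        return 0;
}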