Open Source and information security mailing list archives
 
Message-ID: <CAADnVQL=BXwNUSjjj8t0B6yenC32-Me_B7BLsLv9pfOeg5mkfg@mail.gmail.com>
Date:   Sun, 4 Jul 2021 07:23:19 -0700
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Martin KaFai Lau <kafai@...com>
Cc:     "David S. Miller" <davem@...emloft.net>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Network Development <netdev@...r.kernel.org>,
        bpf <bpf@...r.kernel.org>, Kernel Team <kernel-team@...com>
Subject: Re: [PATCH v4 bpf-next 2/9] bpf: Add map side support for bpf timers.

On Thu, Jul 1, 2021 at 11:23 PM Martin KaFai Lau <kafai@...com> wrote:
>
> On Thu, Jul 01, 2021 at 12:20:37PM -0700, Alexei Starovoitov wrote:
> [ ... ]
>
> > +static void htab_free_prealloced_timers(struct bpf_htab *htab)
> > +{
> > +     u32 num_entries = htab->map.max_entries;
> > +     int i;
> > +
> > +     if (likely(!map_value_has_timer(&htab->map)))
> > +             return;
> > +     if (htab_has_extra_elems(htab))
> > +             num_entries += num_possible_cpus();
> > +
> > +     for (i = 0; i < num_entries; i++) {
> > +             struct htab_elem *elem;
> > +
> > +             elem = get_htab_elem(htab, i);
> > +             bpf_timer_cancel_and_free(elem->key +
> > +                                       round_up(htab->map.key_size, 8) +
> > +                                       htab->map.timer_off);
> > +             cond_resched();
> > +     }
> > +}
> > +
> [ ... ]
>
> > +static void htab_free_malloced_timers(struct bpf_htab *htab)
> > +{
> > +     int i;
> > +
> > +     for (i = 0; i < htab->n_buckets; i++) {
> > +             struct hlist_nulls_head *head = select_bucket(htab, i);
> > +             struct hlist_nulls_node *n;
> > +             struct htab_elem *l;
> > +
> > +             hlist_nulls_for_each_entry(l, n, head, hash_node)
> It is called from map_release_uref() which is not under rcu.
> Either a bucket lock or rcu_read_lock is needed here.

Yeah, rcu_read_lock should do it.
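A sketch of what that could look like, wrapping the bucket walk from the quoted patch in an RCU read-side critical section (hedged: names follow the quoted patch, and this is not the final upstream code):

```c
/* Sketch only: map_release_uref() is not called under RCU, so take
 * rcu_read_lock() around the hlist_nulls walk. Helper names
 * (select_bucket, bpf_timer_cancel_and_free) are from the quoted patch. */
static void htab_free_malloced_timers(struct bpf_htab *htab)
{
	int i;

	rcu_read_lock();
	for (i = 0; i < htab->n_buckets; i++) {
		struct hlist_nulls_head *head = select_bucket(htab, i);
		struct hlist_nulls_node *n;
		struct htab_elem *l;

		hlist_nulls_for_each_entry(l, n, head, hash_node)
			bpf_timer_cancel_and_free(l->key +
						  round_up(htab->map.key_size, 8) +
						  htab->map.timer_off);
	}
	rcu_read_unlock();
}
```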

> Another question: can the prealloc map do the same thing
> as here (i.e. walk the buckets) during map_release_uref()?

You mean instead of the for (i = 0; i < num_entries; i++) loop?
It can, but walking the buckets is slower than the for loop, and there
is already a precedent: a similar loop is used to free the per-cpu bits.
