Message-ID: <20161115013436.GA8080@ast-mbp.thefacebook.com>
Date: Mon, 14 Nov 2016 17:34:38 -0800
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Martin KaFai Lau <kafai@...com>
Cc: netdev@...r.kernel.org, David Miller <davem@...emloft.net>,
Alexei Starovoitov <ast@...com>,
Daniel Borkmann <daniel@...earbox.net>,
Kernel Team <kernel-team@...com>
Subject: Re: [PATCH v2 net-next 5/6] bpf: Add BPF_MAP_TYPE_LRU_PERCPU_HASH
On Fri, Nov 11, 2016 at 10:55:10AM -0800, Martin KaFai Lau wrote:
> Provide an LRU version of the existing BPF_MAP_TYPE_PERCPU_HASH
>
> Signed-off-by: Martin KaFai Lau <kafai@...com>
...
> + /* For LRU, we need to alloc before taking bucket's
> + * spinlock because LRU's elem alloc may need
> + * to remove older elem from htab and this removal
> + * operation will need a bucket lock.
> + */
> + if (map_flags != BPF_EXIST) {
> + l_new = prealloc_lru_pop(htab, key, hash);
> + if (!l_new)
> + return -ENOMEM;
> + }
> +
> + /* bpf_map_update_elem() can be called in_irq() */
> + raw_spin_lock_irqsave(&b->lock, flags);
> +
> + l_old = lookup_elem_raw(head, hash, key, key_size);
> +
> + ret = check_flags(htab, l_old, map_flags);
> + if (ret)
> + goto err;
> +
> + if (l_old) {
> + bpf_lru_node_set_ref(&l_old->lru_node);
> +
> + /* per-cpu hash map can update value in-place */
> + pcpu_copy_value(htab, htab_elem_get_ptr(l_old, key_size),
> + value, onallcpus);
> + } else {
> + pcpu_copy_value(htab, htab_elem_get_ptr(l_new, key_size),
> + value, onallcpus);
> + hlist_add_head_rcu(&l_new->hash_node, head);
> + l_new = NULL;
> + }
> + ret = 0;
> +err:
> + raw_spin_unlock_irqrestore(&b->lock, flags);
> + if (l_new)
> + bpf_lru_push_free(&htab->lru, &l_new->lru_node);
> + return ret;
> +}
Definitely tricky code, but it all looks correct. The subtle part is
pre-allocating the LRU element before taking the bucket spinlock:
eviction inside prealloc_lru_pop() may itself need to grab a bucket
lock, so allocating while already holding b->lock could deadlock.
Acked-by: Alexei Starovoitov <ast@...nel.org>
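
For anyone following along, usage from a BPF program side looks just
like the existing percpu hash, only with the new map type. A rough,
untested sketch (the map name, key choice, sizes and the kprobe hook
are made up for illustration, following samples/bpf conventions; not
part of this patch):

    #include <uapi/linux/bpf.h>
    #include "bpf_helpers.h"

    /* hypothetical map: per-cpu counters, with the least recently
     * used entry evicted once max_entries is reached
     */
    struct bpf_map_def SEC("maps") lru_pcpu_counts = {
            .type = BPF_MAP_TYPE_LRU_PERCPU_HASH,
            .key_size = sizeof(u32),
            .value_size = sizeof(u64),
            .max_entries = 1024,
    };

    SEC("kprobe/sys_write")
    int count_writes(struct pt_regs *ctx)
    {
            u32 key = bpf_get_current_pid_tgid() >> 32; /* tgid, example key only */
            u64 init = 1, *val;

            /* lookup returns this cpu's copy of the value and sets
             * the element's LRU ref bit, so hot keys stay resident
             */
            val = bpf_map_lookup_elem(&lru_pcpu_counts, &key);
            if (val)
                    (*val)++;
            else
                    /* may evict the least recently used element */
                    bpf_map_update_elem(&lru_pcpu_counts, &key, &init,
                                        BPF_ANY);
            return 0;
    }

    char _license[] SEC("license") = "GPL";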