Message-ID: <CAEf4BzatLFGpht-CiSmOfSjBY_nATZsgnWhLnUFuDgvMi4yXLw@mail.gmail.com>
Date: Thu, 29 Oct 2020 16:12:20 -0700
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: KP Singh <kpsingh@...omium.org>
Cc: open list <linux-kernel@...r.kernel.org>,
bpf <bpf@...r.kernel.org>, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Martin KaFai Lau <kafai@...com>,
Paul Turner <pjt@...gle.com>, Jann Horn <jannh@...gle.com>,
Hao Luo <haoluo@...gle.com>
Subject: Re: [PATCH bpf-next 1/5] bpf: Implement task local storage
On Wed, Oct 28, 2020 at 9:17 AM KP Singh <kpsingh@...omium.org> wrote:
>
> From: KP Singh <kpsingh@...gle.com>
>
> Similar to bpf_local_storage for sockets and inodes add local storage
> for task_struct.
>
> The life-cycle of storage is managed with the life-cycle of the
> task_struct. i.e. the storage is destroyed along with the owning task
> with a callback to the bpf_task_storage_free from the task_free LSM
> hook.
>
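(For context: I read this as being wired up in security/bpf/hooks.c next to
the inode_free_security hook, i.e. roughly

  LSM_HOOK_INIT(task_free, bpf_task_storage_free),

added to the bpf_lsm_hooks[] array.)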
> The BPF LSM allocates an __rcu pointer to the bpf_local_storage in
> the security blob which are now stackable and can co-exist with other
> LSMs.
>
> The userspace map operations can be done by using a pid fd as a key
> passed to the lookup, update and delete operations.
>
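Just to spell out the pidfd-keyed userspace flow for anyone following along
(the value type and helper below are made up for illustration):

  #include <sys/syscall.h>
  #include <unistd.h>
  #include <bpf/bpf.h>

  struct task_value { unsigned long long cnt; }; /* made-up value layout */

  static int lookup_for_pid(int map_fd, pid_t pid, struct task_value *val)
  {
          int err, pidfd;

          pidfd = syscall(SYS_pidfd_open, pid, 0);
          if (pidfd < 0)
                  return -1;

          /* the pidfd itself is the map key */
          err = bpf_map_lookup_elem(map_fd, &pidfd, val);
          close(pidfd);
          return err;
  }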
> Signed-off-by: KP Singh <kpsingh@...gle.com>
> ---
Please also double-check all three uses of get_pid_task(); you need to
put_task_struct() in all cases.
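I.e., each of those paths should pair the two, something like (untested,
just to illustrate the pattern):

  task = get_pid_task(pid, PIDTYPE_PID);
  if (!task)
          return -ENOENT;

  /* ... lookup/update/delete against task ... */

  /* needed on every path out, including the error ones */
  put_task_struct(task);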
> include/linux/bpf_lsm.h | 23 ++
> include/linux/bpf_types.h | 1 +
> include/uapi/linux/bpf.h | 39 +++
> kernel/bpf/Makefile | 1 +
> kernel/bpf/bpf_lsm.c | 4 +
> kernel/bpf/bpf_task_storage.c | 327 ++++++++++++++++++
> kernel/bpf/syscall.c | 3 +-
> kernel/bpf/verifier.c | 10 +
> security/bpf/hooks.c | 2 +
> .../bpf/bpftool/Documentation/bpftool-map.rst | 3 +-
> tools/bpf/bpftool/bash-completion/bpftool | 2 +-
> tools/bpf/bpftool/map.c | 4 +-
> tools/include/uapi/linux/bpf.h | 39 +++
> tools/lib/bpf/libbpf_probes.c | 2 +
> 14 files changed, 456 insertions(+), 4 deletions(-)
> create mode 100644 kernel/bpf/bpf_task_storage.c
Please split out bpftool, bpftool documentation, and libbpf changes
into their respective patches.
[...]
> + *
> + * int bpf_task_storage_delete(struct bpf_map *map, void *task)
Please use long for the return type, as all other helpers do (except
bpf_inode_storage_delete, which would be nice to fix as well).
> + * Description
> + * Delete a bpf_local_storage from a *task*.
> + * Return
> + * 0 on success.
> + *
> + * **-ENOENT** if the bpf_local_storage cannot be found.
> */
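FWIW, on the BPF program side I'd expect usage to look roughly like the
sketch below (map type, flag and hook names are my guesses based on the
sk/inode storage variants):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  struct {
          __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
          __uint(map_flags, BPF_F_NO_PREALLOC);
          __type(key, int);
          __type(value, __u64);
  } exec_cnt SEC(".maps");

  SEC("lsm/bprm_committed_creds")
  int BPF_PROG(count_exec, struct linux_binprm *bprm)
  {
          struct task_struct *task = bpf_get_current_task_btf();
          __u64 *cnt;

          /* create the per-task storage on first use */
          cnt = bpf_task_storage_get(&exec_cnt, task, 0,
                                     BPF_LOCAL_STORAGE_GET_F_CREATE);
          if (cnt)
                  __sync_fetch_and_add(cnt, 1);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";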
[...]
> +
> +void bpf_task_storage_free(struct task_struct *task)
> +{
> + struct bpf_local_storage_elem *selem;
> + struct bpf_local_storage *local_storage;
> + bool free_task_storage = false;
> + struct bpf_storage_blob *bsb;
> + struct hlist_node *n;
> +
> + bsb = bpf_task(task);
> + if (!bsb)
> + return;
> +
> + rcu_read_lock();
> +
> + local_storage = rcu_dereference(bsb->storage);
> + if (!local_storage) {
> + rcu_read_unlock();
> + return;
> + }
> +
> + /* Netiher the bpf_prog nor the bpf-map's syscall
typo: Neither
> + * could be modifying the local_storage->list now.
> + * Thus, no elem can be added-to or deleted-from the
> + * local_storage->list by the bpf_prog or by the bpf-map's syscall.
> + *
> + * It is racing with bpf_local_storage_map_free() alone
> + * when unlinking elem from the local_storage->list and
> + * the map's bucket->list.
> + */
> + raw_spin_lock_bh(&local_storage->lock);
> + hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
> + /* Always unlink from map before unlinking from
> + * local_storage.
> + */
> + bpf_selem_unlink_map(selem);
> + free_task_storage = bpf_selem_unlink_storage_nolock(
> + local_storage, selem, false);
This will overwrite the previous value of free_task_storage. Did you
intend to do || here?
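Note that a bare || would also short-circuit and skip the unlink call once
free_task_storage is true, so if accumulating is the intent it would have to
be something like:

  if (bpf_selem_unlink_storage_nolock(local_storage, selem, false))
          free_task_storage = true;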
> + }
> + raw_spin_unlock_bh(&local_storage->lock);
> + rcu_read_unlock();
> +
> + /* free_task_storage should always be true as long as
> + * local_storage->list was non-empty.
> + */
> + if (free_task_storage)
> + kfree_rcu(local_storage, rcu);
> +}
> +
[...]