Message-ID: <20191115044518.sqh3y3bwtjfp5zex@ast-mbp.dhcp.thefacebook.com>
Date:   Thu, 14 Nov 2019 20:45:19 -0800
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Andrii Nakryiko <andriin@...com>
Cc:     bpf@...r.kernel.org, netdev@...r.kernel.org, ast@...com,
        daniel@...earbox.net, andrii.nakryiko@...il.com,
        kernel-team@...com, Johannes Weiner <hannes@...xchg.org>,
        Rik van Riel <riel@...riel.com>
Subject: Re: [PATCH v4 bpf-next 2/4] bpf: add mmap() support for
 BPF_MAP_TYPE_ARRAY

On Thu, Nov 14, 2019 at 08:02:23PM -0800, Andrii Nakryiko wrote:
> Add the ability to memory-map the contents of a BPF array map. This is
> extremely useful for working with BPF global data from userspace programs.
> It allows avoiding the typical bpf_map_{lookup,update}_elem operations,
> improving both performance and usability.
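Not part of the patch, but as a rough illustration of the kind of userspace
access this enables (function name, map_fd, len and idx are made up for the
example; error handling is minimal on purpose):

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

/* Hypothetical sketch: mmap() a BPF_F_MMAPABLE array map fd and read a
 * value directly instead of calling bpf_map_lookup_elem().
 */
static int dump_value(int map_fd, size_t len, uint32_t idx)
{
	uint64_t *vals = mmap(NULL, len, PROT_READ | PROT_WRITE,
			      MAP_SHARED, map_fd, 0);
	if (vals == MAP_FAILED)
		return -1;

	printf("value[%u] = %llu\n", idx, (unsigned long long)vals[idx]);
	munmap(vals, len);
	return 0;
}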
> 
> Special consideration had to be given to map freezing, to avoid having a
> writable memory view into a frozen map. To solve this, map freezing and
> mmap()-ing now happen under a mutex (sketched after this list):
>   - if the map is already frozen, no writable mapping is allowed;
>   - if the map has active writable memory mappings (accounted in
>     map->writecnt), map freezing keeps failing with -EBUSY;
>   - once the number of writable memory mappings drops to zero, map
>     freezing can be performed again.
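A minimal sketch of that interplay (simplified; the _sketch names are
placeholders, not the exact patch code):

/* Freeze refuses while writable mappings exist; writable mmap refuses
 * once the map is frozen. Both sides serialize on freeze_mutex. */
static int bpf_map_freeze_sketch(struct bpf_map *map)
{
	int err = 0;

	mutex_lock(&map->freeze_mutex);
	if (map->writecnt)
		err = -EBUSY;		/* writable mappings still active */
	else
		map->frozen = true;
	mutex_unlock(&map->freeze_mutex);
	return err;
}

static int bpf_map_mmap_sketch(struct bpf_map *map, struct vm_area_struct *vma)
{
	int err = 0;

	mutex_lock(&map->freeze_mutex);
	if (vma->vm_flags & VM_WRITE) {
		if (map->frozen)
			err = -EPERM;	/* no writable view into a frozen map */
		else
			map->writecnt++;
	}
	mutex_unlock(&map->freeze_mutex);
	return err;
}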
> 
> Only non-per-CPU plain arrays are supported right now. Maps with spinlocks
> can't be memory mapped either.
> 
> For a BPF_F_MMAPABLE array, memory allocation has to be done through
> vmalloc() to be mmap()'able. We also need to make sure that array data
> memory is page-sized and page-aligned, so we over-allocate memory in such
> a way that struct bpf_array sits at the end of a single page, with
> array->value aligned to the start of the second page. On deallocation we
> need to accommodate this memory arrangement to free the vmalloc()'ed
> memory correctly.
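Roughly, the layout math looks like this (illustrative sketch; the exact
allocation helpers in the patch may differ):

/* Place struct bpf_array at the tail of the first page so that the
 * value[] flexible array member starts exactly at the second page,
 * which is what gets mapped into userspace. */
u32 elem_size = round_up(value_size, 8);
u64 data_size = PAGE_ALIGN((u64)elem_size * max_entries);
void *base = vmalloc_user(data_size + PAGE_SIZE);	/* page-aligned, zeroed */
struct bpf_array *array;

array = base + PAGE_SIZE - offsetof(struct bpf_array, value);

/* On free, recover the original vmalloc() pointer before vfree(): */
void *to_free = (void *)round_down((unsigned long)array, PAGE_SIZE);
vfree(to_free);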
> 
> One important consideration concerns how the memory-mapping subsystem
> works. It provides a few optional callbacks, among them open() and
> close(). close() is called for each memory region that is unmapped, so
> that users can decrease their reference counters and free up resources,
> if necessary. open() is *almost* symmetrical: it's called for each memory
> region being mapped, **except** the very first one. So bpf_map_mmap()
> does the initial refcnt bump, while open() does any extra ones after
> that. Thus the number of close() calls equals the number of open() calls
> plus one.
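The vm_ops wiring this implies looks roughly like the following (simplified
sketch; the _sketch names are placeholders, and the writecnt bookkeeping
from above would also live in these callbacks):

static void bpf_map_mmap_open_sketch(struct vm_area_struct *vma)
{
	/* Not called for the initial mmap(); bpf_map_mmap() takes the
	 * first reference itself. */
	bpf_map_inc(vma->vm_file->private_data);
}

static void bpf_map_mmap_close_sketch(struct vm_area_struct *vma)
{
	/* Called once per unmapped region, so it pairs with every open()
	 * plus the initial reference taken in bpf_map_mmap(). */
	bpf_map_put(vma->vm_file->private_data);
}

static const struct vm_operations_struct bpf_map_vmops_sketch = {
	.open	= bpf_map_mmap_open_sketch,
	.close	= bpf_map_mmap_close_sketch,
};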
> 
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: Rik van Riel <riel@...riel.com>
> Acked-by: Song Liu <songliubraving@...com>
> Acked-by: John Fastabend <john.fastabend@...il.com>
> Signed-off-by: Andrii Nakryiko <andriin@...com>
> ---
>  include/linux/bpf.h            | 11 ++--
>  include/linux/vmalloc.h        |  1 +
>  include/uapi/linux/bpf.h       |  3 ++
>  kernel/bpf/arraymap.c          | 59 +++++++++++++++++---
>  kernel/bpf/syscall.c           | 99 ++++++++++++++++++++++++++++++++--
>  mm/vmalloc.c                   | 20 +++++++
>  tools/include/uapi/linux/bpf.h |  3 ++
>  7 files changed, 184 insertions(+), 12 deletions(-)
> 
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 6fbe599fb977..8021fce98868 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -12,6 +12,7 @@
>  #include <linux/err.h>
>  #include <linux/rbtree_latch.h>
>  #include <linux/numa.h>
> +#include <linux/mm_types.h>
>  #include <linux/wait.h>
>  #include <linux/u64_stats_sync.h>
>  
> @@ -66,6 +67,7 @@ struct bpf_map_ops {
>  				     u64 *imm, u32 off);
>  	int (*map_direct_value_meta)(const struct bpf_map *map,
>  				     u64 imm, u32 *off);
> +	int (*map_mmap)(struct bpf_map *map, struct vm_area_struct *vma);
>  };
>  
>  struct bpf_map_memory {
> @@ -94,9 +96,10 @@ struct bpf_map {
>  	u32 btf_value_type_id;
>  	struct btf *btf;
>  	struct bpf_map_memory memory;
> +	char name[BPF_OBJ_NAME_LEN];
>  	bool unpriv_array;
> -	bool frozen; /* write-once */
> -	/* 48 bytes hole */
> +	bool frozen; /* write-once; write-protected by freeze_mutex */
> +	/* 22 bytes hole */
>  
>  	/* The 3rd and 4th cacheline with misc members to avoid false sharing
>  	 * particularly with refcounting.
> @@ -104,7 +107,8 @@ struct bpf_map {
>  	atomic64_t refcnt ____cacheline_aligned;
>  	atomic64_t usercnt;
>  	struct work_struct work;
> -	char name[BPF_OBJ_NAME_LEN];
> +	struct mutex freeze_mutex;
> +	u64 writecnt; /* writable mmap cnt; protected by freeze_mutex */
>  };

Can the mutex be moved into bpf_array instead of being in bpf_map, which is
shared across all map types?
If so, can you reuse the mutex that Daniel is adding in patch 6/8 of his
tail_call series? Not sure what the right name for such a mutex would be.
It will be used for your map_freeze logic and for Daniel's text_poke.
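For reference, the suggestion would look roughly like this (abridged and
hypothetical; the field name aux_mutex is a placeholder, and Daniel's
pending patch may already add the equivalent):

/* Sketch: keep struct bpf_map unchanged and hang the serialization off
 * the array-specific struct instead. */
struct bpf_array {
	struct bpf_map map;
	u32 elem_size;
	u32 index_mask;
	struct mutex aux_mutex;	/* map_freeze vs. writable mmap, text_poke */
	char value[] __aligned(8);
};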
