Open Source and information security mailing list archives
Date:   Wed, 13 Nov 2019 21:38:33 +0100
From:   Daniel Borkmann <daniel@...earbox.net>
To:     Andrii Nakryiko <andriin@...com>, bpf@...r.kernel.org,
        netdev@...r.kernel.org, ast@...com
Cc:     andrii.nakryiko@...il.com, kernel-team@...com,
        Rik van Riel <riel@...riel.com>,
        Johannes Weiner <hannes@...xchg.org>
Subject: Re: [PATCH v3 bpf-next 1/3] bpf: add mmap() support for
 BPF_MAP_TYPE_ARRAY

On 11/13/19 4:15 AM, Andrii Nakryiko wrote:
> Add ability to memory-map the contents of a BPF array map. This is extremely
> useful for working with BPF global data from userspace programs. It avoids the
> typical bpf_map_{lookup,update}_elem operations, improving both performance
> and usability.
> 
> Special considerations were needed for map freezing, to avoid having a
> writable memory view into a frozen map. To solve this issue, map freezing and
> mmap-ing now happen under a mutex:
>    - if map is already frozen, no writable mapping is allowed;
>    - if map has writable memory mappings active (accounted in map->writecnt),
>      map freezing will keep failing with -EBUSY;
>    - once number of writable memory mappings drops to zero, map freezing can be
>      performed again.
> 
> Only non-per-CPU plain arrays are supported right now. Maps with spinlocks
> can't be memory mapped either.
> 
> For BPF_F_MMAPABLE array, memory allocation has to be done through vmalloc()
> to be mmap()'able. We also need to make sure that array data memory is
> page-sized and page-aligned, so we over-allocate memory in such a way that
> struct bpf_array is at the end of a single page of memory with array->value
> being aligned with the start of the second page. On deallocation we need to
> accommodate this memory arrangement to free vmalloc()'ed memory correctly.
> 
> Cc: Rik van Riel <riel@...riel.com>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Acked-by: Song Liu <songliubraving@...com>
> Signed-off-by: Andrii Nakryiko <andriin@...com>

Overall set looks good to me! One comment below:

[...]
> @@ -117,7 +131,20 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
>   		return ERR_PTR(ret);
>   
>   	/* allocate all map elements and zero-initialize them */
> -	array = bpf_map_area_alloc(array_size, numa_node);
> +	if (attr->map_flags & BPF_F_MMAPABLE) {
> +		void *data;
> +
> +		/* kmalloc'ed memory can't be mmap'ed, use explicit vmalloc */
> +		data = vzalloc_node(array_size, numa_node);
> +		if (!data) {
> +			bpf_map_charge_finish(&mem);
> +			return ERR_PTR(-ENOMEM);
> +		}
> +		array = data + round_up(sizeof(struct bpf_array), PAGE_SIZE)
> +			- offsetof(struct bpf_array, value);
> +	} else {
> +		array = bpf_map_area_alloc(array_size, numa_node);
> +	}

Can't we place/extend all this logic inside bpf_map_area_alloc() and
bpf_map_area_free() API instead of hard-coding it here?

Given this is a generic feature of which global data is just one consumer,
my concern is that open-coding the allocation here reintroduces the very
issues that API was added to solve in the first place, i.e. we want to fail
early instead of trying hard and triggering the OOM killer when the array
is large.

Consolidating this into bpf_map_area_alloc()/bpf_map_area_free() would
ensure that all the other map types keep the same allocation semantics.

>   	if (!array) {
>   		bpf_map_charge_finish(&mem);
>   		return ERR_PTR(-ENOMEM);
> @@ -365,7 +392,10 @@ static void array_map_free(struct bpf_map *map)
>   	if (array->map.map_type == BPF_MAP_TYPE_PERCPU_ARRAY)
>   		bpf_array_free_percpu(array);
>   
> -	bpf_map_area_free(array);
> +	if (array->map.map_flags & BPF_F_MMAPABLE)
> +		bpf_map_area_free((void *)round_down((long)array, PAGE_SIZE));
> +	else
> +		bpf_map_area_free(array);
>   }
>   
>   static void array_map_seq_show_elem(struct bpf_map *map, void *key,
[...]
