Message-ID: <94BD3FAC-CA98-4448-B467-3FC7307174F9@fb.com>
Date: Fri, 8 Nov 2019 06:39:44 +0000
From: Song Liu <songliubraving@...com>
To: Andrii Nakryiko <andriin@...com>
CC: bpf <bpf@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Alexei Starovoitov <ast@...com>,
"daniel@...earbox.net" <daniel@...earbox.net>,
"andrii.nakryiko@...il.com" <andrii.nakryiko@...il.com>,
Kernel Team <Kernel-team@...com>,
Rik van Riel <riel@...riel.com>,
Johannes Weiner <hannes@...xchg.org>
Subject: Re: [PATCH bpf-next 1/3] bpf: add mmap() support for
BPF_MAP_TYPE_ARRAY
> On Nov 7, 2019, at 8:20 PM, Andrii Nakryiko <andriin@...com> wrote:
>
> Add ability to memory-map contents of BPF array map. This is extremely useful
> for working with BPF global data from userspace programs. It allows avoiding
> typical bpf_map_{lookup,update}_elem operations, improving both performance
> and usability.
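
(Side note for context: the userspace flow then looks roughly like the
sketch below. This is untested and illustrative, assuming libbpf's
bpf_create_map() helper and the BPF_F_MMAPABLE flag this series
introduces, with error handling trimmed.)

#include <unistd.h>
#include <sys/mman.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>

int main(void)
{
	const int max_entries = 256;
	long page_sz = sysconf(_SC_PAGE_SIZE);
	/* the kernel rounds the value area up to a full page */
	size_t len = ((max_entries * sizeof(__u64) + page_sz - 1) /
		      page_sz) * page_sz;
	__u64 *vals;
	int fd;

	fd = bpf_create_map(BPF_MAP_TYPE_ARRAY, sizeof(__u32),
			    sizeof(__u64), max_entries, BPF_F_MMAPABLE);
	if (fd < 0)
		return 1;

	vals = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (vals == MAP_FAILED)
		return 1;

	vals[0] = 42; /* direct store, no bpf_map_update_elem() syscall */

	munmap(vals, len);
	close(fd);
	return 0;
}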
>
> There had to be special considerations for map freezing, to avoid having a
> writable memory view into a frozen map. To solve this, map freezing and
> mmap-ing now happen under a mutex:
> - if the map is already frozen, no writable mapping is allowed;
> - if the map has writable memory mappings active (accounted in map->writecnt),
> map freezing will keep failing with -EBUSY;
> - once the number of writable memory mappings drops to zero, map freezing can
> be performed again.
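
(To make the exclusion concrete, the logic amounts to something like the
rough sketch below. This is illustration, not the patch itself: a
file-scope mutex stands in for whatever per-map locking the patch uses,
and the -EPERM for the frozen case is a guess; map->frozen and
map->writecnt are the fields described above.)

static DEFINE_MUTEX(freeze_mutex);	/* per-map in the real patch */

static int map_freeze_locked(struct bpf_map *map)
{
	int err = 0;

	mutex_lock(&freeze_mutex);
	if (map->writecnt > 0)		/* writable mmap()s still active */
		err = -EBUSY;
	else
		map->frozen = true;
	mutex_unlock(&freeze_mutex);
	return err;
}

static int map_mmap_writable(struct bpf_map *map)
{
	int err = 0;

	mutex_lock(&freeze_mutex);
	if (map->frozen)		/* no writable view into frozen map */
		err = -EPERM;		/* illustrative error code */
	else
		map->writecnt++;	/* decremented when the vma closes */
	mutex_unlock(&freeze_mutex);
	return err;
}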
>
> Only non-per-CPU arrays are supported right now. Maps with spinlocks can't be
> memory mapped either.
>
> Cc: Rik van Riel <riel@...riel.com>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Signed-off-by: Andrii Nakryiko <andriin@...com>
Acked-by: Song Liu <songliubraving@...com>
With one nit below.
[...]
> - if (percpu)
> + data_size = 0;
> + if (percpu) {
> array_size += (u64) max_entries * sizeof(void *);
> - else
> - array_size += (u64) max_entries * elem_size;
> + } else {
> + if (attr->map_flags & BPF_F_MMAPABLE) {
> + data_size = (u64) max_entries * elem_size;
> + data_size = round_up(data_size, PAGE_SIZE);
> + } else {
> + array_size += (u64) max_entries * elem_size;
> + }
> + }
>
> /* make sure there is no u32 overflow later in round_up() */
> - cost = array_size;
> + cost = array_size + data_size;
This is a little confusing. Maybe we can do:

	data_size = (u64) max_entries * (percpu ? sizeof(void *) : elem_size);
	if (attr->map_flags & BPF_F_MMAPABLE)
		data_size = round_up(data_size, PAGE_SIZE);
	cost = array_size + data_size;

so that we use data_size in all cases. Maybe also rename array_size.
> if (percpu)
> cost += (u64)attr->max_entries * elem_size * num_possible_cpus();
And maybe we can also include this in data_size.
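Something like this (untested, reusing percpu/elem_size/
num_possible_cpus() from the hunk above):

	data_size = (u64) max_entries * (percpu ? sizeof(void *) : elem_size);
	if (attr->map_flags & BPF_F_MMAPABLE)
		data_size = round_up(data_size, PAGE_SIZE);
	if (percpu)
		/* per-CPU value storage: one copy per possible CPU */
		data_size += (u64) max_entries * elem_size *
			     num_possible_cpus();
	cost = array_size + data_size;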
[...]