Message-ID: <9ef6e3b5-c129-e5fb-b685-f73fdee68b53@fb.com>
Date: Tue, 14 Jan 2020 21:33:50 +0000
From: Yonghong Song <yhs@...com>
To: Andrii Nakryiko <andrii.nakryiko@...il.com>,
Brian Vazquez <brianvv@...gle.com>
CC: Brian Vazquez <brianvv.kernel@...il.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
"David S . Miller" <davem@...emloft.net>,
Stanislav Fomichev <sdf@...gle.com>,
Petar Penkov <ppenkov@...gle.com>,
Willem de Bruijn <willemb@...gle.com>,
open list <linux-kernel@...r.kernel.org>,
Networking <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>
Subject: Re: [PATCH v4 bpf-next 7/9] libbpf: add libbpf support to batch ops
On 1/14/20 11:13 AM, Andrii Nakryiko wrote:
> On Tue, Jan 14, 2020 at 10:54 AM Brian Vazquez <brianvv@...gle.com> wrote:
>>
>> On Tue, Jan 14, 2020 at 10:36 AM Andrii Nakryiko
>> <andrii.nakryiko@...il.com> wrote:
>>>
>>> On Tue, Jan 14, 2020 at 8:46 AM Brian Vazquez <brianvv@...gle.com> wrote:
>>>>
>>>> From: Yonghong Song <yhs@...com>
>>>>
>>>> Added four libbpf API functions to support map batch operations:
>>>> . int bpf_map_delete_batch( ... )
>>>> . int bpf_map_lookup_batch( ... )
>>>> . int bpf_map_lookup_and_delete_batch( ... )
>>>> . int bpf_map_update_batch( ... )
>>>>
>>>> Signed-off-by: Yonghong Song <yhs@...com>
>>>> ---
>>>> tools/lib/bpf/bpf.c | 60 ++++++++++++++++++++++++++++++++++++++++
>>>> tools/lib/bpf/bpf.h | 22 +++++++++++++++
>>>> tools/lib/bpf/libbpf.map | 4 +++
>>>> 3 files changed, 86 insertions(+)
>>>>
>>>> diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
>>>> index 500afe478e94a..12ce8d275f7dc 100644
>>>> --- a/tools/lib/bpf/bpf.c
>>>> +++ b/tools/lib/bpf/bpf.c
>>>> @@ -452,6 +452,66 @@ int bpf_map_freeze(int fd)
>>>> return sys_bpf(BPF_MAP_FREEZE, &attr, sizeof(attr));
>>>> }
>>>>
>>>> +static int bpf_map_batch_common(int cmd, int fd, void *in_batch,
>>>> + void *out_batch, void *keys, void *values,
>>>> + __u32 *count,
>>>> + const struct bpf_map_batch_opts *opts)
>>>> +{
>>>> + union bpf_attr attr = {};
>>>> + int ret;
>>>> +
>>>> + if (!OPTS_VALID(opts, bpf_map_batch_opts))
>>>> + return -EINVAL;
>>>> +
>>>> + memset(&attr, 0, sizeof(attr));
>>>> + attr.batch.map_fd = fd;
>>>> + attr.batch.in_batch = ptr_to_u64(in_batch);
>>>> + attr.batch.out_batch = ptr_to_u64(out_batch);
>>>> + attr.batch.keys = ptr_to_u64(keys);
>>>> + attr.batch.values = ptr_to_u64(values);
>>>> + if (count)
>>>> + attr.batch.count = *count;
>>>> + attr.batch.elem_flags = OPTS_GET(opts, elem_flags, 0);
>>>> + attr.batch.flags = OPTS_GET(opts, flags, 0);
>>>> +
>>>> + ret = sys_bpf(cmd, &attr, sizeof(attr));
>>>> + if (count)
>>>> + *count = attr.batch.count;
>>>
>>> what if syscall failed, do you still want to assign *count then?
>>
>> Hi Andrii, thanks for taking a look.
>>
>> attr.batch.count should report the number of entries correctly
>> processed before hitting an error. For example, if you provide a
>> buffer for 3 entries and the map only has 1, ret is going to be
>> -ENOENT, meaning that you traversed the whole map, and you still
>> want to assign *count.
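
Right. To spell out that semantics for readers of the thread, here is a
rough caller-side sketch. The exact signature, opts handling and batch
token type below are my reading of this series, so treat them as
illustrative rather than final:

#include <errno.h>
#include <bpf/bpf.h>

/* Dump up to 'max_entries' u32->u32 pairs from 'map_fd'. */
static int dump_map(int map_fd, __u32 *keys, __u32 *values, __u32 max_entries)
{
        struct bpf_map_batch_opts opts = { .sz = sizeof(opts) };
        __u32 batch;                    /* opaque batch token */
        __u32 count = max_entries;
        int err;

        err = bpf_map_lookup_batch(map_fd, NULL /* start */, &batch,
                                   keys, values, &count, &opts);
        /* Reaching the end of the map is reported as an error
         * (-ENOENT, or -1 with errno == ENOENT depending on the
         * error convention), but 'count' still tells how many
         * entries were actually filled in, e.g. 1 when the buffer
         * had room for 3.
         */
        if (err && errno != ENOENT)
                return err;
        return count;
}
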
>
> ah, ok, tricky semantics :) if syscall failed before kernel got to
> updating count, I'm guessing it is guaranteed to preserve old value?
>
>>
>> That being said, the condition 'if (count)' is wrong and I think it
>> should be removed.
>
> So count is mandatory, right? In that case both `if (count)` checks are wrong.
A little bit of history here. Some of the early iterations had
operations like:
  . delete the whole hash table
  . delete the hash table starting from a given key to the end
  etc.
In such cases, the user may not pass 'count' to the kernel.
We no longer support such scenarios, and all the cases supported in
this patch set require 'count', so yes, we can make 'count'
mandatory.
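
Concretely, something like the below on top of this patch (untested
sketch, just dropping the NULL checks and treating 'count' as
required):

static int bpf_map_batch_common(int cmd, int fd, void *in_batch,
                                void *out_batch, void *keys, void *values,
                                __u32 *count,
                                const struct bpf_map_batch_opts *opts)
{
        union bpf_attr attr;
        int ret;

        if (!OPTS_VALID(opts, bpf_map_batch_opts))
                return -EINVAL;

        memset(&attr, 0, sizeof(attr));
        attr.batch.map_fd = fd;
        attr.batch.in_batch = ptr_to_u64(in_batch);
        attr.batch.out_batch = ptr_to_u64(out_batch);
        attr.batch.keys = ptr_to_u64(keys);
        attr.batch.values = ptr_to_u64(values);
        attr.batch.count = *count;              /* 'count' is mandatory now */
        attr.batch.elem_flags = OPTS_GET(opts, elem_flags, 0);
        attr.batch.flags = OPTS_GET(opts, flags, 0);

        ret = sys_bpf(cmd, &attr, sizeof(attr));
        /* Report back how many entries were processed, even when the
         * syscall returned an error such as -ENOENT.
         */
        *count = attr.batch.count;

        return ret;
}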
>
>>
>>>
>>>> +
>>>> + return ret;
>>>> +}
>>>> +
>>>
>>> [...]
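
For completeness, a rough usage sketch of the update/delete side of the
new API as well (again, the exact signatures here are my assumption
based on this series):

#include <errno.h>
#include <bpf/bpf.h>

/* Insert 'n' u32->u32 pairs in one syscall, then delete them again. */
static int update_then_delete(int map_fd, __u32 *keys, __u32 *values, __u32 n)
{
        struct bpf_map_batch_opts opts = { .sz = sizeof(opts) };
        __u32 count = n;
        int err;

        err = bpf_map_update_batch(map_fd, keys, values, &count, &opts);
        if (err)
                return err;     /* 'count' says how many updates were applied */

        count = n;
        return bpf_map_delete_batch(map_fd, keys, &count, &opts);
}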