Message-ID: <53008195-66f2-10b2-be8d-f5752ce19f93@polito.it>
Date:   Wed, 8 Aug 2018 21:55:56 -0500
From:   Mauricio Vasquez <mauricio.vasquez@...ito.it>
To:     Daniel Borkmann <daniel@...earbox.net>,
        Alexei Starovoitov <ast@...nel.org>
Cc:     netdev@...r.kernel.org
Subject: Re: [PATCH bpf-next 1/3] bpf: add bpf queue map



On 08/07/2018 08:52 AM, Daniel Borkmann wrote:
> On 08/06/2018 03:58 PM, Mauricio Vasquez B wrote:
>> Bpf queue implements LIFO/FIFO data containers for eBPF programs.
>>
>> It allows pushing an element onto the queue using the update operation
>> and popping an element from the queue using the lookup operation.
>>
>> A use case for this is to keep track of a pool of elements, like
>> network ports in a SNAT.
>>
>> Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@...ito.it>
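
(Aside, for illustration only -- not part of the patch: given the
semantics above, a push/pop round trip from user space might look
roughly like the sketch below. The BPF_MAP_TYPE_QUEUE constant, the
NULL-key convention, and the use of the syscall wrappers from
tools/lib/bpf are assumptions on my side, not taken from the quoted
text.)

/* Hypothetical sketch: push == update, pop == lookup.
 * Assumes a queue map created with key_size == 0, so a NULL key is
 * passed to the libbpf syscall wrappers.
 */
#include <linux/types.h>
#include <bpf/bpf.h>

static int queue_roundtrip(int queue_fd)
{
	__u16 port = 1024;

	/* push: update inserts the value; the key is ignored */
	if (bpf_map_update_elem(queue_fd, NULL, &port, BPF_ANY))
		return -1;

	/* pop: lookup removes an element (oldest for FIFO, newest for
	 * LIFO) and copies it into 'port'
	 */
	if (bpf_map_lookup_elem(queue_fd, NULL, &port))
		return -1;

	return 0;
}
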
> [...]
>> +static int prealloc_init(struct bpf_queue *queue)
>> +{
>> +	u32 node_size = sizeof(struct queue_node) +
>> +			round_up(queue->map.value_size, 8);
>> +	u32 num_entries = queue->map.max_entries;
>> +	int err;
>> +
>> +	queue->nodes = bpf_map_area_alloc(node_size * num_entries,
>> +					  queue->map.numa_node);
> That doesn't work either. If you don't set numa node, then here in
> your case you'll always use numa node 0, which is unintentional.
> You need to get the node via bpf_map_attr_numa_node(attr) helper.
> Same issue in queue_map_update_elem().
>
This should work: map.numa_node is initialized using
bpf_map_attr_numa_node() in bpf_map_init_from_attr(), so the allocation
here already uses the node requested by user space (or NUMA_NO_NODE when
the flag isn't set).

The htab's prealloc_init() does exactly the same.
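
For reference, here is a trimmed sketch of the two helpers involved, as
they appear in the tree at the time (paraphrased from memory; see
kernel/bpf/syscall.c and include/linux/bpf.h for the authoritative
code):

/* include/linux/bpf.h: returns the node from the attrs only when
 * userspace actually set BPF_F_NUMA_NODE; otherwise NUMA_NO_NODE
 */
static inline int bpf_map_attr_numa_node(const union bpf_attr *attr)
{
	return (attr->map_flags & BPF_F_NUMA_NODE) ?
		attr->numa_node : NUMA_NO_NODE;
}

/* kernel/bpf/syscall.c: copies the common attrs, including the NUMA
 * node, into the map at creation time
 */
void bpf_map_init_from_attr(struct bpf_map *map, union bpf_attr *attr)
{
	map->map_type = attr->map_type;
	map->key_size = attr->key_size;
	map->value_size = attr->value_size;
	map->max_entries = attr->max_entries;
	map->map_flags = attr->map_flags;
	map->numa_node = bpf_map_attr_numa_node(attr);
}

So any map that goes through bpf_map_init_from_attr() can safely pass
map.numa_node to bpf_map_area_alloc(), which is what both the queue map
and the htab do.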
