Date:   Sun, 29 Jul 2018 10:16:21 -0700
From:   Yonghong Song <yhs@...com>
To:     Daniel Borkmann <daniel@...earbox.net>, <ast@...com>,
        <netdev@...r.kernel.org>
CC:     <kernel-team@...com>
Subject: Re: [PATCH bpf] tools/bpftool: fix a percpu_array map dump problem



On 7/28/18 12:14 PM, Daniel Borkmann wrote:
> On 07/28/2018 01:11 AM, Yonghong Song wrote:
>> I hit the following problem when I tried to use bpftool
>> to dump a percpu array.
>>
>>    $ sudo ./bpftool map show
>>    61: percpu_array  name stub  flags 0x0
>> 	  key 4B  value 4B  max_entries 1  memlock 4096B
>>    ...
>>    $ sudo ./bpftool map dump id 61
>>    bpftool: malloc.c:2406: sysmalloc: Assertion
>>    `(old_top == initial_top (av) && old_size == 0) || \
>>     ((unsigned long) (old_size) >= MINSIZE && \
>>     prev_inuse (old_top) && \
>>     ((unsigned long) old_end & (pagesize - 1)) == 0)'
>>    failed.
>>    Aborted
>>
>> Further debugging revealed that this is due to a miscommunication
>> between bpftool and the kernel. For example, for the above
>> percpu_array with a value size of 4B, the map info returned to
>> user space has a value size of 4B.
>>
>> In bpftool, the values array for lookup is allocated like:
>>     info->value_size * get_possible_cpus() = 4 * get_possible_cpus()
>> In the kernel (kernel/bpf/syscall.c), the values array size is
>> rounded up to a multiple of 8:
>>     round_up(map->value_size, 8) * num_possible_cpus()
>>     = 8 * num_possible_cpus()
>> So when the kernel copies the values to the user buffer, it will
>> write past the user buffer boundary.
>>
>> This patch fixes the issue by allocating and stepping through the
>> percpu map value array properly in bpftool.
>>
>> Fixes: 71bb428fe2c19 ("tools: bpf: add bpftool")
>> Signed-off-by: Yonghong Song <yhs@...com>
>> ---
>>   tools/bpf/bpftool/map.c | 9 ++++++---
>>   1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/tools/bpf/bpftool/map.c b/tools/bpf/bpftool/map.c
>> index 0ee3ba479d87..92bc55f98c4c 100644
>> --- a/tools/bpf/bpftool/map.c
>> +++ b/tools/bpf/bpftool/map.c
>> @@ -35,6 +35,7 @@
>>   #include <errno.h>
>>   #include <fcntl.h>
>>   #include <linux/err.h>
>> +#include <linux/kernel.h>
>>   #include <stdbool.h>
>>   #include <stdio.h>
>>   #include <stdlib.h>
>> @@ -91,7 +92,8 @@ static bool map_is_map_of_progs(__u32 type)
>>   static void *alloc_value(struct bpf_map_info *info)
>>   {
>>   	if (map_is_per_cpu(info->type))
>> -		return malloc(info->value_size * get_possible_cpus());
>> +		return malloc(round_up(info->value_size, 8) *
>> +			      get_possible_cpus());
>>   	else
>>   		return malloc(info->value_size);
>>   }
>> @@ -273,9 +275,10 @@ static void print_entry_json(struct bpf_map_info *info, unsigned char *key,
>>   			do_dump_btf(&d, info, key, value);
>>   		}
>>   	} else {
>> -		unsigned int i, n;
>> +		unsigned int i, n, step;
>>   
>>   		n = get_possible_cpus();
>> +		step = round_up(info->value_size, 8);
>>   
>>   		jsonw_name(json_wtr, "key");
>>   		print_hex_data_json(key, info->key_size);
>> @@ -288,7 +291,7 @@ static void print_entry_json(struct bpf_map_info *info, unsigned char *key,
>>   			jsonw_int_field(json_wtr, "cpu", i);
>>   
>>   			jsonw_name(json_wtr, "value");
>> -			print_hex_data_json(value + i * info->value_size,
>> +			print_hex_data_json(value + i * step,
>>   					    info->value_size);
>>   
>>   			jsonw_end_object(json_wtr);
> 
> Fix looks correct to me, but you would also need the same fix in print_entry_plain(), no?

Thanks for pointing this out. Will submit a v2 soon to fix the issue.
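
For reference, the analogous change in print_entry_plain() would
presumably follow the same pattern as the JSON path above: step
through the per-CPU values with the rounded-up stride. A rough sketch
only (the surrounding printing code and the fprint_hex() call are
assumed here, not taken from the actual v2 patch):

	unsigned int i, n, step;

	n = get_possible_cpus();
	/* the kernel lays out each per-CPU value at an 8-byte-aligned offset */
	step = round_up(info->value_size, 8);

	for (i = 0; i < n; i++) {
		printf("value (cpu %u):\n", i);
		/* index with the rounded-up stride, print only value_size bytes */
		fprint_hex(stdout, value + i * step, info->value_size, " ");
		printf("\n");
	}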

> 
> Thanks,
> Daniel
> 
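
As a standalone illustration of the size contract described in the
commit message (hypothetical helper names and a simplified round_up();
not bpftool code): a per-CPU map lookup buffer must hold
round_up(value_size, 8) bytes per possible CPU, and consumers must
step through it with that same stride.

	#include <stdlib.h>

	/* simplified equivalent of the kernel's round_up() for this sketch */
	#define round_up(x, y)	((((x) + (y) - 1) / (y)) * (y))

	/* Hypothetical helper: the kernel writes each CPU's value at an
	 * 8-byte-aligned offset, so the buffer needs
	 * round_up(value_size, 8) * ncpus bytes, not value_size * ncpus.
	 */
	static void *alloc_percpu_values(unsigned int value_size,
					 unsigned int ncpus)
	{
		return malloc(round_up(value_size, 8) * ncpus);
	}

	/* Hypothetical helper: the value for a given CPU starts at
	 * cpu * round_up(value_size, 8); only the first value_size bytes
	 * at that offset are meaningful.
	 */
	static unsigned char *percpu_value(unsigned char *values,
					   unsigned int value_size,
					   unsigned int cpu)
	{
		return values + cpu * round_up(value_size, 8);
	}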
