Date:   Tue, 10 Jul 2018 14:13:42 +0200
From:   Daniel Borkmann <daniel@...earbox.net>
To:     Mark Rutland <mark.rutland@....com>
Cc:     alexei.starovoitov@...il.com, netdev@...r.kernel.org
Subject: Re: [PATCH bpf] bpf: fix ldx in ld_abs rewrite for large offsets

On 07/10/2018 12:14 PM, Mark Rutland wrote:
> On Tue, Jul 10, 2018 at 12:43:22AM +0200, Daniel Borkmann wrote:
>> Mark reported that syzkaller triggered a KASAN-detected slab-out-of-bounds
>> bug in ___bpf_prog_run() with a BPF_LD | BPF_ABS word load at offset 0x8001.
>> Further investigation showed that the issue is in BPF_LDX_MEM(): it takes
>> the offset as an argument, but cannot encode offsets larger than S16_MAX
>> into the instruction. For this synthetic case we need to move the full
>> address into the tmp register instead and do the LDX without an immediate
>> offset.
>>
>> Fixes: e0cea7ce988c ("bpf: implement ld_abs/ld_ind in native bpf")
>> Reported-by: syzbot <syzkaller@...glegroups.com>
>> Reported-by: Mark Rutland <mark.rutland@....com>
>> Signed-off-by: Daniel Borkmann <daniel@...earbox.net>
>> ---
>>  net/core/filter.c | 16 +++++++++++++---
>>  1 file changed, 13 insertions(+), 3 deletions(-)
>>
>> diff --git a/net/core/filter.c b/net/core/filter.c
>> index 5fa66a3..a13f5b1 100644
>> --- a/net/core/filter.c
>> +++ b/net/core/filter.c
>> @@ -459,11 +459,21 @@ static bool convert_bpf_ld_abs(struct sock_filter *fp, struct bpf_insn **insnp)
>>  	     (!unaligned_ok && offset >= 0 &&
>>  	      offset + ip_align >= 0 &&
>>  	      offset + ip_align % size == 0))) {
>> +		bool ldx_off_ok = offset <= S16_MAX;
>> +
> 
> Given offset is a (signed) int, is it possible for that to be a negative
> value less than S16_MIN? ... or is that ruled out elsewhere?

This branch only handles positive offsets. offset can be negative, but in
that case these insns won't be emitted and the load is handled in the
'slow path' via bpf_skb_load_helper_*().
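
A minimal standalone sketch (assuming only the S16_MAX constraint described
in the commit message, not taken from the patch itself) of why an offset
like 0x8001 cannot be passed directly as the LDX immediate: the insn offset
field is a signed 16-bit quantity, so anything above S16_MAX wraps negative.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int offset = 0x8001;           /* offset from the syzkaller reproducer */
	int16_t off = (int16_t)offset; /* what a signed 16-bit insn field would hold */

	printf("requested offset: %d\n", offset); /* 32769 */
	printf("encoded s16 off:  %d\n", off);    /* wraps to -32767 */
	return 0;
}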

> Thanks,
> Mark.
> 
>>  		*insn++ = BPF_MOV64_REG(BPF_REG_TMP, BPF_REG_H);
>>  		*insn++ = BPF_ALU64_IMM(BPF_SUB, BPF_REG_TMP, offset);
>> -		*insn++ = BPF_JMP_IMM(BPF_JSLT, BPF_REG_TMP, size, 2 + endian);
>> -		*insn++ = BPF_LDX_MEM(BPF_SIZE(fp->code), BPF_REG_A, BPF_REG_D,
>> -				      offset);
>> +		*insn++ = BPF_JMP_IMM(BPF_JSLT, BPF_REG_TMP,
>> +				      size, 2 + endian + (!ldx_off_ok * 2));
>> +		if (ldx_off_ok) {
>> +			*insn++ = BPF_LDX_MEM(BPF_SIZE(fp->code), BPF_REG_A,
>> +					      BPF_REG_D, offset);
>> +		} else {
>> +			*insn++ = BPF_MOV64_REG(BPF_REG_TMP, BPF_REG_D);
>> +			*insn++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_TMP, offset);
>> +			*insn++ = BPF_LDX_MEM(BPF_SIZE(fp->code), BPF_REG_A,
>> +					      BPF_REG_TMP, 0);
>> +		}
>>  		if (endian)
>>  			*insn++ = BPF_ENDIAN(BPF_FROM_BE, BPF_REG_A, size * 8);
>>  		*insn++ = BPF_JMP_A(8);
>> -- 
>> 2.9.5
>>

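A small sketch of the jump-offset arithmetic in the hunk above, assuming
only what the diff shows: when the offset does not fit into the LDX
immediate (ldx_off_ok == false), two extra insns (MOV64_REG + ALU64_IMM ADD)
are emitted before the LDX, so the bounds-check JSLT must skip two more
slots on top of the LDX, the optional BPF_ENDIAN, and the trailing JMP_A.

#include <stdbool.h>
#include <stdio.h>

/* Mirrors the expression in BPF_JMP_IMM(BPF_JSLT, ...) from the patch:
 * number of insns the bounds-check jump skips to reach the slow path.
 */
static int skip_insns(bool ldx_off_ok, bool endian)
{
	return 2 + endian + (!ldx_off_ok * 2);
}

int main(void)
{
	printf("small offset, no endian swap: skip %d\n", skip_insns(true, false));  /* 2 */
	printf("large offset, endian swap:    skip %d\n", skip_insns(false, true));  /* 5 */
	return 0;
}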