Message-ID: <mb61ple677vuv.fsf@gmail.com>
Date: Sun, 24 Mar 2024 10:44:08 +0000
From: Puranjay Mohan <puranjay12@...il.com>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>, Daniel Borkmann
<daniel@...earbox.net>
Cc: "David S. Miller" <davem@...emloft.net>, David Ahern
<dsahern@...nel.org>, Alexei Starovoitov <ast@...nel.org>, Andrii Nakryiko
<andrii@...nel.org>, Martin KaFai Lau <martin.lau@...ux.dev>, Eduard
Zingerman <eddyz87@...il.com>, Song Liu <song@...nel.org>, Yonghong Song
<yonghong.song@...ux.dev>, John Fastabend <john.fastabend@...il.com>, KP
Singh <kpsingh@...nel.org>, Stanislav Fomichev <sdf@...gle.com>, Hao Luo
<haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>, Thomas Gleixner
<tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov
<bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>, X86 ML
<x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>, Jean-Philippe Brucker
<jean-philippe@...aro.org>, Network Development <netdev@...r.kernel.org>,
bpf <bpf@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>, Ilya
Leoshkevich <iii@...ux.ibm.com>
Subject: Re: [PATCH bpf v4] bpf: verifier: prevent userspace memory access

Alexei Starovoitov <alexei.starovoitov@...il.com> writes:

> On Fri, Mar 22, 2024 at 9:28 AM Daniel Borkmann <daniel@...earbox.net> wrote:
>>
>> On 3/22/24 4:05 PM, Puranjay Mohan wrote:
>> [...]
>> >>> + /* Make it impossible to de-reference a userspace address */
>> >>> + if (BPF_CLASS(insn->code) == BPF_LDX &&
>> >>> + (BPF_MODE(insn->code) == BPF_PROBE_MEM ||
>> >>> + BPF_MODE(insn->code) == BPF_PROBE_MEMSX)) {
>> >>> + struct bpf_insn *patch = &insn_buf[0];
>> >>> + u64 uaddress_limit = bpf_arch_uaddress_limit();
>> >>> +
>> >>> + if (!uaddress_limit)
>> >>> + goto next_insn;
>> >>> +
>> >>> + *patch++ = BPF_MOV64_REG(BPF_REG_AX, insn->src_reg);
>> >>> + if (insn->off)
>> >>> + *patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_AX, insn->off);
>> >>> + *patch++ = BPF_ALU64_IMM(BPF_RSH, BPF_REG_AX, 32);
>> >>> + *patch++ = BPF_JMP_IMM(BPF_JLE, BPF_REG_AX, uaddress_limit >> 32, 2);
>> >>> + *patch++ = *insn;
>> >>> + *patch++ = BPF_JMP_IMM(BPF_JA, 0, 0, 1);
>> >>> + *patch++ = BPF_MOV64_IMM(insn->dst_reg, 0);
>> >>
>> >> But how does this address the other cases where we could fault, e.g. a
>> >> non-canonical address, the vsyscall page, etc.? Technically, we would have to
>> >> call copy_from_kernel_nofault_allowed() to really address all the cases aside
>> >> from the overflow (good catch btw!) where a kernel address turns into a user
>> >> address.
>> >
>> > So, we are trying to ~simulate a call to
>> > copy_from_kernel_nofault_allowed() here. If the address under
>> > consideration is below TASK_SIZE (TASK_SIZE + 4GB, to be precise), we
>> > skip the load because that address could be mapped by user space.
>> >
>> > If the address is above TASK_SIZE + 4GB, we allow the load, and it could
>> > cause a fault if the address is invalid, non-canonical, etc. Taking the
>> > fault is fine because the JIT will add an exception table entry for
>> > that load with BPF_PROBE_MEM.
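
(Spelling this out: the sequence emitted above is roughly equivalent to the
C below. Comparing only the upper 32 bits is what makes the cutoff
TASK_SIZE + 4GB rather than exactly TASK_SIZE. src_reg, insn_off, and
dst_reg stand in for the fields of the load being patched, so this is a
sketch of the semantics, not real code.)

	u64 addr = src_reg + insn_off;	/* BPF_MOV64_REG + BPF_ALU64_IMM(BPF_ADD) */

	if ((addr >> 32) <= (uaddress_limit >> 32))	/* BPF_RSH + BPF_JLE */
		dst_reg = 0;		/* possibly user-mapped: skip the load */
	else
		dst_reg = *(u64 *)addr;	/* kernel range: do the load; a fault here
					 * is fixed up via the extable entry the
					 * JIT emits for BPF_PROBE_MEM loads */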
>>
>> Are you sure? I don't think the kernel handles non-canonical fixup.
>
> I believe it handles it fine, otherwise our selftest bpf_testmod_return_ptr:
> case 4: return (void *)(1ull << 60); /* non-canonical and invalid */
> would have been crashing for the 3 years we've been running it.
>
>> > The vsyscall page is special; this approach skips all loads from that
>> > page. I am not sure if that is acceptable.
>>
>> bpf_probe_read_kernel() does handle it fine via copy_from_kernel_nofault().
>>
>> So there is a tail risk that BPF_PROBE_* could trigger a crash.
>
> For this patch let's do
> return max(TASK_SIZE_MAX + PAGE_SIZE, VSYSCALL_ADDR)
> to cover both with one check?

I agree, will add this in the next version.
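Roughly like this (sketch only; I still need to test the exact form):

	u64 bpf_arch_uaddress_limit(void)
	{
		/* One check covers both the user range and the vsyscall
		 * page: on x86-64 VSYSCALL_ADDR is numerically the larger
		 * of the two values, so max() selects it.
		 */
		return max(TASK_SIZE_MAX + PAGE_SIZE, VSYSCALL_ADDR);
	}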
>> Other archs might
>> have other quirks, e.g. in the case of loongarch the highest bit being set
>> means kernel space.
>
> let's tackle loongarch with whatever quirks it has separately.

Yes, the current patch will not break loongarch; it will even help it skip
some userspace addresses. We can later implement bpf_arch_uaddress_limit()
in the loongarch JIT to handle its specific quirks, as in the rough sketch
below.
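
Given that on loongarch a set top bit marks kernel space, a first cut
could be something like this (completely untested sketch, only to
illustrate the idea; like the generic check it leaves 4GB of slack just
above the boundary):

	u64 bpf_arch_uaddress_limit(void)
	{
		/* Anything without the top bit set may be user-mapped. */
		return 1UL << 63;
	}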