Date:   Fri, 10 Apr 2020 10:51:20 +0200
From:   Jann Horn <jannh@...gle.com>
To:     Andrii Nakryiko <andriin@...com>
Cc:     bpf <bpf@...r.kernel.org>,
        Network Development <netdev@...r.kernel.org>,
        Alexei Starovoitov <ast@...com>,
        Daniel Borkmann <daniel@...earbox.net>,
        andrii.nakryiko@...il.com, Kernel Team <kernel-team@...com>
Subject: Re: [PATCH bpf 1/2] bpf: prevent re-mmap()'ing BPF map as writable
 for initially r/o mapping

On Fri, Apr 10, 2020 at 2:04 AM Andrii Nakryiko <andriin@...com> wrote:
> The VM_MAYWRITE flag set during the initial memory mapping determines whether
> already mmap()'ed pages can later be remapped as writable through an mprotect()
> call. To prevent a user application from rewriting the contents of a BPF map
> that was memory-mapped as read-only and subsequently frozen, remove the
> VM_MAYWRITE flag completely on an initially read-only mapping.
>
> Alternatively, we could treat any memory mapping of an unfrozen map as writable
> and bump writecnt instead. But there is little legitimate reason to map a
> BPF map as read-only and then re-map it as writable via mprotect(), instead
> of just mmap()'ing it as read/write from the very beginning.
>
> Fixes: fc9702273e2e ("bpf: Add mmap() support for BPF_MAP_TYPE_ARRAY")
> Reported-by: Jann Horn <jannh@...gle.com>
> Signed-off-by: Andrii Nakryiko <andriin@...com>
> ---
>  kernel/bpf/syscall.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 64783da34202..f7f6db50a085 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -635,6 +635,10 @@ static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma)
>         /* set default open/close callbacks */
>         vma->vm_ops = &bpf_map_default_vmops;
>         vma->vm_private_data = map;
> +       vma->vm_flags &= ~VM_MAYEXEC;
> +       if (!(vma->vm_flags & VM_WRITE))
> +               /* disallow re-mapping with PROT_WRITE */
> +               vma->vm_flags &= ~VM_MAYWRITE;
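
(For reference, a minimal userspace sketch of the behavior the patch is
meant to enforce, assuming a plain BPF_F_MMAPABLE array map created via
the raw bpf() syscall; untested, illustration only:)

#include <errno.h>
#include <linux/bpf.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
        union bpf_attr attr;
        int map_fd;
        void *p;

        memset(&attr, 0, sizeof(attr));
        attr.map_type = BPF_MAP_TYPE_ARRAY;
        attr.key_size = 4;
        attr.value_size = 4096;
        attr.max_entries = 1;
        attr.map_flags = BPF_F_MMAPABLE;
        map_fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
        if (map_fd < 0) {
                perror("BPF_MAP_CREATE");
                return 1;
        }

        /* initially read-only mapping; the patch clears VM_MAYWRITE here */
        p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, map_fd, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* without VM_MAYWRITE this should fail with EACCES instead of
         * handing out a writable view of a possibly frozen map */
        if (mprotect(p, 4096, PROT_READ | PROT_WRITE))
                printf("mprotect: %s\n", strerror(errno));
        else
                printf("mprotect succeeded (pre-patch behavior)\n");
        return 0;
}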

The .open and .close handlers for the VMA are also wrong:

/* called for any extra memory-mapped regions (except initial) */
static void bpf_map_mmap_open(struct vm_area_struct *vma)
{
        struct bpf_map *map = vma->vm_file->private_data;

        bpf_map_inc_with_uref(map);

        if (vma->vm_flags & VM_WRITE) {
                mutex_lock(&map->freeze_mutex);
                map->writecnt++;
                mutex_unlock(&map->freeze_mutex);
        }
}

/* called for all unmapped memory region (including initial) */
static void bpf_map_mmap_close(struct vm_area_struct *vma)
{
        struct bpf_map *map = vma->vm_file->private_data;

        if (vma->vm_flags & VM_WRITE) {
                mutex_lock(&map->freeze_mutex);
                map->writecnt--;
                mutex_unlock(&map->freeze_mutex);
        }

        bpf_map_put_with_uref(map);
}

static const struct vm_operations_struct bpf_map_default_vmops = {
        .open           = bpf_map_mmap_open,
        .close          = bpf_map_mmap_close,
};

You can use mprotect() to flip VM_WRITE off while a VMA exists, and
then the writecnt won't go down when the VMA is closed. Or you could
even get the writecnt to go negative: map as writable, mprotect() the
mapping to read-only, split the VMA a few times, mprotect() the split
VMAs back to writable, and then unmap them.
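
Roughly this sequence (again only a sketch against a BPF_F_MMAPABLE
array map, untested) should leave writecnt stuck at 1, so a later
BPF_MAP_FREEZE keeps failing with EBUSY even though no writable mapping
exists anymore:

#include <errno.h>
#include <linux/bpf.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
        union bpf_attr attr;
        int map_fd;
        void *p;

        memset(&attr, 0, sizeof(attr));
        attr.map_type = BPF_MAP_TYPE_ARRAY;
        attr.key_size = 4;
        attr.value_size = 4096;
        attr.max_entries = 1;
        attr.map_flags = BPF_F_MMAPABLE;
        map_fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
        if (map_fd < 0) {
                perror("BPF_MAP_CREATE");
                return 1;
        }

        /* writable mapping: bpf_map_mmap() bumps writecnt to 1 */
        p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, map_fd, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* flips VM_WRITE off; writecnt is not touched */
        mprotect(p, 4096, PROT_READ);

        /* bpf_map_mmap_close() sees !(vm_flags & VM_WRITE) and skips the
         * decrement, so writecnt stays at 1 */
        munmap(p, 4096);

        /* expected to fail with EBUSY because of the stale writecnt */
        memset(&attr, 0, sizeof(attr));
        attr.map_fd = map_fd;
        if (syscall(__NR_bpf, BPF_MAP_FREEZE, &attr, sizeof(attr)))
                printf("BPF_MAP_FREEZE: %s\n", strerror(errno));
        else
                printf("BPF_MAP_FREEZE succeeded\n");
        return 0;
}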

I think you'll want to also check for VM_MAYWRITE here (see the sketch below).

Also, the bpf_map_inc_with_uref/bpf_map_put_with_uref here look
superfluous - the VMA holds a reference to the file, and the file
holds a reference to the map.
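
Something like this (completely untested, just to illustrate both
points) should keep the count balanced: VM_MAYWRITE can't be flipped by
mprotect() and is copied when a VMA is split, so .open and .close stay
symmetric, and the extra map references aren't needed.

static void bpf_map_mmap_open(struct vm_area_struct *vma)
{
        struct bpf_map *map = vma->vm_file->private_data;

        /* count by VM_MAYWRITE, which is stable for the VMA's lifetime */
        if (vma->vm_flags & VM_MAYWRITE) {
                mutex_lock(&map->freeze_mutex);
                map->writecnt++;
                mutex_unlock(&map->freeze_mutex);
        }
}

static void bpf_map_mmap_close(struct vm_area_struct *vma)
{
        struct bpf_map *map = vma->vm_file->private_data;

        if (vma->vm_flags & VM_MAYWRITE) {
                mutex_lock(&map->freeze_mutex);
                map->writecnt--;
                mutex_unlock(&map->freeze_mutex);
        }
}

(The initial writecnt bump in bpf_map_mmap() would of course need the
same VM_MAYWRITE check.)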
