Message-ID: <CAEf4BzbMN49GXu3B83=k=4vKpLts9Rk8xt50i_xzQL_Tht4m5g@mail.gmail.com>
Date: Thu, 12 Sep 2024 10:54:09 -0700
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Christian Brauner <brauner@...nel.org>
Cc: Suren Baghdasaryan <surenb@...gle.com>, Jann Horn <jannh@...gle.com>, 
	Liam Howlett <liam.howlett@...cle.com>, Andrii Nakryiko <andrii@...nel.org>, 
	linux-trace-kernel@...r.kernel.org, peterz@...radead.org, oleg@...hat.com, 
	rostedt@...dmis.org, mhiramat@...nel.org, bpf@...r.kernel.org, 
	linux-kernel@...r.kernel.org, jolsa@...nel.org, paulmck@...nel.org, 
	willy@...radead.org, akpm@...ux-foundation.org, linux-mm@...ck.org, 
	mjguzik@...il.com, Miklos Szeredi <miklos@...redi.hu>, Amir Goldstein <amir73il@...il.com>, 
	linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH 2/2] uprobes: add speculative lockless VMA-to-inode-to-uprobe
 resolution

On Thu, Sep 12, 2024 at 4:17 AM Christian Brauner <brauner@...nel.org> wrote:
>
> On Tue, Sep 10, 2024 at 01:58:10PM GMT, Andrii Nakryiko wrote:
> > On Tue, Sep 10, 2024 at 9:32 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
> > >
> > > On Mon, Sep 9, 2024 at 2:29 PM Andrii Nakryiko
> > > <andrii.nakryiko@...il.com> wrote:
> > > >
> > > > On Mon, Sep 9, 2024 at 6:13 AM Jann Horn <jannh@...gle.com> wrote:
> > > > >
> > > > > On Fri, Sep 6, 2024 at 7:12 AM Andrii Nakryiko <andrii@...nel.org> wrote:
> > > > > > Given filp_cachep is already marked SLAB_TYPESAFE_BY_RCU, we can safely
> > > > > > access vma->vm_file->f_inode field locklessly under just rcu_read_lock()
> > > > >
> > > > > No, not every file is SLAB_TYPESAFE_BY_RCU - see for example
> > > > > ovl_mmap(), which uses backing_file_mmap(), which does
> > > > > vma_set_file(vma, file) where "file" comes from ovl_mmap()'s
> > > > > "realfile", which comes from file->private_data, which is set in
> > > > > ovl_open() to the return value of ovl_open_realfile(), which comes
> > > > > from backing_file_open(), which allocates a file with
> > > > > alloc_empty_backing_file(), which uses a normal kzalloc() without any
> > > > > RCU stuff, with this comment:
> > > > >
> > > > >  * This is only for kernel internal use, and the allocate file must not be
> > > > >  * installed into file tables or such.
> > > > >
> > > > > And when a backing_file is freed, you can see on the path
> > > > > __fput() -> file_free()
> > > > > that files with FMODE_BACKING are directly freed with kfree(), no RCU delay.
> > > >
> > > > Good catch on FMODE_BACKING, I didn't realize this exception existed, thanks!
> > > >
> > > > I think the way forward would be to detect that the file has
> > > > FMODE_BACKING set and fall back to the mmap_lock-protected code path.
> > > >
> > > > I guess I have a question for Liam and Suren: do you think it would
> > > > be ok to add another bool after `bool detached` in struct
> > > > vm_area_struct (guarded by CONFIG_PER_VMA_LOCK), or should we try to
> > > > add an extra bit into vm_flags_t? The latter would work without
> > > > CONFIG_PER_VMA_LOCK, but I don't know what's acceptable to mm folks.
> > > >
> > > > This flag can be set in vma_set_file() when swapping in the backing file
> > > > and wherever else vma->vm_file might be set/updated (I need to audit
> > > > the code).
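
(For reference, the vm_area_struct option discussed above would look
roughly like the untested sketch below; the vm_file_no_rcu name is made
up purely to illustrate the idea:)

	/* in struct vm_area_struct, next to 'bool detached',
	 * under CONFIG_PER_VMA_LOCK: */
	bool vm_file_no_rcu;	/* vm_file not RCU-safe (FMODE_BACKING) */

	/* set wherever vma->vm_file is (re)assigned, e.g. in vma_set_file(): */
	vma->vm_file_no_rcu = !!(file->f_mode & FMODE_BACKING);

	/* in the speculative lookup, before touching vma->vm_file: */
	if (data_race(vma->vm_file_no_rcu))
		goto bail;	/* fall back to the mmap_lock-protected path */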
> > >
> > > I understand that this would work but I'm not very eager to leak
> > > vm_file attributes like FMODE_BACKING into vm_area_struct.
> > > Instead maybe that exception can be avoided? Treating all vm_files
> >
> > I agree, that would be best, of course. It seems like [1] was an
> > optimization to avoid kfree_rcu() calls, not sure how big of a deal it
> > is to undo that, given we do have a use case that calls for it now.
> > Let's see what Christian thinks.
>
> Do you just mean?
>
> diff --git a/fs/file_table.c b/fs/file_table.c
> index 7ce4d5dac080..03e58b28e539 100644
> --- a/fs/file_table.c
> +++ b/fs/file_table.c
> @@ -68,7 +68,7 @@ static inline void file_free(struct file *f)
>         put_cred(f->f_cred);
>         if (unlikely(f->f_mode & FMODE_BACKING)) {
>                 path_put(backing_file_user_path(f));
> -               kfree(backing_file(f));
> +               kfree_rcu(backing_file(f));
>         } else {
>                 kmem_cache_free(filp_cachep, f);
>         }
>
> Then the only thing you can do with FMODE_BACKING is to skip it. I think
> that should be fine since backing files right now are only used by
> overlayfs and I don't think the kfree_rcu() will be a performance issue.

Yes, something along those lines. Ok, great, if it's ok to add back
kfree_rcu(), then I think that resolves the main problem I was running
into. I'll incorporate adding back RCU-delayed freeing as a separate
patch into the future patch set, thanks!
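
Concretely, with the RCU-delayed freeing in place, the speculative path
could then detect and skip backing files with something like the rough
(untested) sketch below:

	vm_file = READ_ONCE(vma->vm_file);
	if (!vm_file || (data_race(vma->vm_flags) & flags) != VM_MAYEXEC)
		goto bail;

	/* kfree_rcu() keeps the struct file memory valid for the RCU
	 * grace period, so reading f_mode here is safe; backing files
	 * simply take the slow, mmap_lock-protected path */
	if (vm_file->f_mode & FMODE_BACKING)
		goto bail;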

>
> >
> > > equally as RCU-safe would be a much simpler solution. I see that this
> > > exception was introduced in [1] and I don't know if this was done for
> > > performance reasons or something else. Christian, CCing you here,
> > > please clarify.
> > >
> > > [1] https://lore.kernel.org/all/20231005-sakralbau-wappnen-f5c31755ed70@brauner/
> > >
> > > >
> > > > >
> > > > > So the RCU-ness of "struct file" is an implementation detail of the
> > > > > VFS, and you can't rely on it for ->vm_file unless you get the VFS to
> > > > > change how backing file lifetimes work, which might slow down some
> > > > > other workload, or you find a way to figure out whether you're dealing
> > > > > with a backing file without actually accessing the file.
> > > > >
> > > > > > +static struct uprobe *find_active_uprobe_speculative(unsigned long bp_vaddr)
> > > > > > +{
> > > > > > +       const vm_flags_t flags = VM_HUGETLB | VM_MAYEXEC | VM_MAYSHARE;
> > > > > > +       struct mm_struct *mm = current->mm;
> > > > > > +       struct uprobe *uprobe;
> > > > > > +       struct vm_area_struct *vma;
> > > > > > +       struct file *vm_file;
> > > > > > +       struct inode *vm_inode;
> > > > > > +       unsigned long vm_pgoff, vm_start;
> > > > > > +       int seq;
> > > > > > +       loff_t offset;
> > > > > > +
> > > > > > +       if (!mmap_lock_speculation_start(mm, &seq))
> > > > > > +               return NULL;
> > > > > > +
> > > > > > +       rcu_read_lock();
> > > > > > +
> > > > > > +       vma = vma_lookup(mm, bp_vaddr);
> > > > > > +       if (!vma)
> > > > > > +               goto bail;
> > > > > > +
> > > > > > +       vm_file = data_race(vma->vm_file);
> > > > >
> > > > > A plain "data_race()" says "I'm fine with this load tearing", but
> > > > > you're relying on this load not tearing (since you access the vm_file
> > > > > pointer below).
> > > > > You're also relying on the "struct file" that vma->vm_file points to
> > > > > being populated at this point, which means you need CONSUME semantics
> > > > > here, which READ_ONCE() will give you, and something like RELEASE
> > > > > semantics on any pairing store that populates vma->vm_file, which
> > > > > means they'd all have to become something like smp_store_release().
> > > >
> > > > vma->vm_file should be set in the VMA before it is installed and never
> > > > modified afterwards, isn't that the case? So maybe no extra barriers
> > > > are needed and READ_ONCE() would be enough.
> > > >
> > > > >
> > > > > You might want to instead add another recheck of the sequence count
> > > > > (which would involve at least a read memory barrier after the
> > > > > preceding patch is fixed) after loading the ->vm_file pointer to
> > > > > ensure that no one was concurrently changing the ->vm_file pointer
> > > > > before you do memory accesses through it.
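
(Combining the READ_ONCE() suggestion with the extra sequence recheck,
the speculative path might look roughly like the untested sketch below;
reusing mmap_lock_speculation_end() for the intermediate recheck is only
for illustration, the real recheck primitive may end up different:)

	vm_file = READ_ONCE(vma->vm_file);
	if (!vm_file || (data_race(vma->vm_flags) & flags) != VM_MAYEXEC)
		goto bail;

	/* make sure the VMA (and thus vm_file) didn't change under us
	 * before dereferencing the file */
	if (!mmap_lock_speculation_end(mm, seq))
		goto bail;

	vm_inode = READ_ONCE(vm_file->f_inode);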
> > > > >
> > > > > > +       if (!vm_file || (vma->vm_flags & flags) != VM_MAYEXEC)
> > > > > > +               goto bail;
> > > > >
> > > > > missing data_race() annotation on the vma->vm_flags access
> > > >
> > > > ack
> > > >
> > > > >
> > > > > > +       vm_inode = data_race(vm_file->f_inode);
> > > > >
> > > > > As noted above, this doesn't work because you can't rely on having RCU
> > > > > lifetime for the file. One *very* ugly hack you could do, if you think
> > > > > this code is so performance-sensitive that you're willing to do fairly
> > > > > atrocious things here, would be to do a "yes I am intentionally doing
> > > > > a UAF read and I know the address might not even be mapped at this
> > > > > point, it's fine, trust me" pattern, where you use
> > > > > copy_from_kernel_nofault(), kind of like in prepend_copy() in
> > > > > fs/d_path.c, and then immediately recheck the sequence count before
> > > > > doing *anything* with this vm_inode pointer you just loaded.
> > > > >
> > > > >
> > > >
> > > > yeah, let's leave it as a very unfortunate plan B and try to solve it
> > > > a bit more cleanly.
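
(For the record, that "plan B" pattern would look roughly like the
untested sketch below: a fault-tolerant read of f_inode immediately
followed by a sequence recheck before the loaded pointer is trusted;
again, mmap_lock_speculation_end() is only standing in for whatever the
recheck primitive ends up being:)

	struct inode *vm_inode;

	/* deliberate, tolerated read of a possibly already-freed file */
	if (copy_from_kernel_nofault(&vm_inode, &vm_file->f_inode,
				     sizeof(vm_inode)))
		goto bail;
	/* don't use vm_inode at all unless nothing changed concurrently */
	if (!mmap_lock_speculation_end(mm, seq))
		goto bail;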
> > > >
> > > >
> > > > >
> > > > > > +       vm_pgoff = data_race(vma->vm_pgoff);
> > > > > > +       vm_start = data_race(vma->vm_start);
> > > > > > +
> > > > > > +       offset = (loff_t)(vm_pgoff << PAGE_SHIFT) + (bp_vaddr - vm_start);
> > > > > > +       uprobe = find_uprobe_rcu(vm_inode, offset);
> > > > > > +       if (!uprobe)
> > > > > > +               goto bail;
> > > > > > +
> > > > > > +       /* now double check that nothing about MM changed */
> > > > > > +       if (!mmap_lock_speculation_end(mm, seq))
> > > > > > +               goto bail;
> > > > > > +
> > > > > > +       rcu_read_unlock();
> > > > > > +
> > > > > > +       /* happy case, we speculated successfully */
> > > > > > +       return uprobe;
> > > > > > +bail:
> > > > > > +       rcu_read_unlock();
> > > > > > +       return NULL;
> > > > > > +}
