Message-ID: <Zy0WBH45qgzIZrke@google.com>
Date: Thu, 7 Nov 2024 19:33:24 +0000
From: Carlos Llamas <cmllamas@...gle.com>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Arve Hjønnevåg <arve@...roid.com>,
Todd Kjos <tkjos@...roid.com>, Martijn Coenen <maco@...roid.com>,
Joel Fernandes <joel@...lfernandes.org>,
Christian Brauner <brauner@...nel.org>,
linux-kernel@...r.kernel.org, kernel-team@...roid.com,
Nhat Pham <nphamcs@...il.com>, Johannes Weiner <hannes@...xchg.org>,
Barry Song <v-songbaohua@...o.com>, Hillf Danton <hdanton@...a.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Subject: Re: [PATCH v2 8/8] binder: use per-vma lock in page installation
On Thu, Nov 07, 2024 at 10:52:30AM -0800, Suren Baghdasaryan wrote:
> On Thu, Nov 7, 2024 at 10:27 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
> >
> > On Thu, Nov 7, 2024 at 10:19 AM Carlos Llamas <cmllamas@...gle.com> wrote:
> > >
> > > On Thu, Nov 07, 2024 at 10:04:23AM -0800, Suren Baghdasaryan wrote:
> > > > On Thu, Nov 7, 2024 at 9:55 AM Carlos Llamas <cmllamas@...gle.com> wrote:
> > > > > On Thu, Nov 07, 2024 at 08:16:39AM -0800, Suren Baghdasaryan wrote:
> > > > > > On Wed, Nov 6, 2024 at 8:03 PM Carlos Llamas <cmllamas@...gle.com> wrote:
> > > > > > > +static int binder_page_insert(struct binder_alloc *alloc,
> > > > > > > + unsigned long addr,
> > > > > > > + struct page *page)
> > > > > > > +{
> > > > > > > + struct mm_struct *mm = alloc->mm;
> > > > > > > + struct vm_area_struct *vma;
> > > > > > > + int ret = -ESRCH;
> > > > > > > +
> > > > > > > + if (!mmget_not_zero(mm))
> > > > > > > + return -ESRCH;
> > > > > > > +
> > > > > > > + /* attempt per-vma lock first */
> > > > > > > + vma = lock_vma_under_rcu(mm, addr);
> > > > > > > + if (!vma)
> > > > > > > + goto lock_mmap;
> > > > > > > +
> > > > > > > + if (binder_alloc_is_mapped(alloc))
> > > > > >
> > > > > > I don't think you need this check here. lock_vma_under_rcu() ensures
> > > > > > that the VMA was not detached from the tree after locking the VMA, so
> > > > > > if you got a VMA it's in the tree and it can't be removed (because
> > > > > > it's locked). remove_vma()->vma_close()->vma->vm_ops->close() is
> > > > > > called after VMA gets detached from the tree and that won't happen
> > > > > > while VMA is locked. So, if lock_vma_under_rcu() returns a VMA,
> > > > > > binder_alloc_is_mapped() has to always return true. A WARN_ON() check
> > > > > > here to ensure that might be a better option.
> > > > >
> > > > > Yes we are guaranteed to have _a_ non-isolated vma. However, the check
> > > > > validates that it's the _expected_ vma. IIUC, our vma could have been
> > > > > unmapped (clearing alloc->mapped) and a _new_ unrelated vma could have
> > > > > gotten the same address space assigned?
> > > >
> > > > No, this should never happen. lock_vma_under_rcu() specifically checks
> > > > the address range *after* it locks the VMA:
> > > > https://elixir.bootlin.com/linux/v6.11.6/source/mm/memory.c#L6026
> > >
> > > The scenario I'm describing is the following:
> > >
> > > Proc A                              Proc B
> > >                                     mmap(addr, binder_fd)
> > > binder_page_insert()
> > > mmget_not_zero()
> > >                                     munmap(addr)
> > >                                       alloc->mapped = false;
> > > [...]
> > >                                     // mmap other vma but same addr
> > >                                     mmap(addr, other_fd)
> > >
> > > vma = lock_vma_under_rcu()
> > >
> > > Isn't there a chance that the vma Proc A receives is an unrelated
> > > vma that was placed in the same address range?
> >
> > Ah, I see now. The VMA is a valid one and at the address we specified
> > but it does not belong to the binder. Yes, then you do need this
> > check.
>
> Is the following scenario possible?
>
> Proc A                              Proc B
>                                     mmap(addr, binder_fd)
> binder_page_insert()
> mmget_not_zero()
>                                     munmap(addr)
>                                       alloc->mapped = false;
> [...]
>                                     // mmap other vma but same addr
>                                     mmap(addr, other_fd)
>                                     mmap(other_addr, binder_fd)
> vma = lock_vma_under_rcu(addr)
>
> If so, I think your binder_alloc_is_mapped() check will return true
> but the binder area is mapped at a different other_addr. To avoid that
> I think you can check that "addr" still belongs to [alloc->vm_start,
> alloc->vm_start + alloc->buffer_size) after you have obtained and
> locked the VMA.

Wait, I thought vm_ops->close() was called with the mmap_lock held in
exclusive mode. That is where binder clears alloc->mapped. If that is
not the case (was it ever?), then I definitely need to fix this. I'll
have a closer look.
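
In the meantime, here is roughly the check I understand you are
suggesting, as an untested sketch. binder_vma_is_ours() is just an
illustrative name; alloc->vm_start and alloc->buffer_size are the
fields from this series:

/*
 * Re-validate the mapping *after* lock_vma_under_rcu() has locked
 * the vma: the binder mapping must still be alive and @addr must
 * still fall inside it. This catches a munmap() + mmap() cycle that
 * reused the address range or moved the binder mapping elsewhere.
 */
static bool binder_vma_is_ours(struct binder_alloc *alloc,
			       unsigned long addr)
{
	return binder_alloc_is_mapped(alloc) &&
	       addr >= alloc->vm_start &&
	       addr < alloc->vm_start + alloc->buffer_size;
}

binder_page_insert() would then call this instead of the bare
binder_alloc_is_mapped() check, after taking either the vma lock or
the mmap_lock.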
Thanks,
Carlos Llamas