Message-ID: <Y8fl8lqS4QHZO1gV@dhcp22.suse.cz>
Date: Wed, 18 Jan 2023 13:28:34 +0100
From: Michal Hocko <mhocko@...e.com>
To: Jann Horn <jannh@...gle.com>
Cc: Suren Baghdasaryan <surenb@...gle.com>, peterz@...radead.org,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
akpm@...ux-foundation.org, michel@...pinasse.org,
jglisse@...gle.com, vbabka@...e.cz, hannes@...xchg.org,
mgorman@...hsingularity.net, dave@...olabs.net,
willy@...radead.org, liam.howlett@...cle.com,
ldufour@...ux.ibm.com, laurent.dufour@...ibm.com,
paulmck@...nel.org, luto@...nel.org, songliubraving@...com,
peterx@...hat.com, david@...hat.com, dhowells@...hat.com,
hughd@...gle.com, bigeasy@...utronix.de, kent.overstreet@...ux.dev,
punit.agrawal@...edance.com, lstoakes@...il.com,
peterjung1337@...il.com, rientjes@...gle.com,
axelrasmussen@...gle.com, joelaf@...gle.com, minchan@...gle.com,
shakeelb@...gle.com, tatashin@...gle.com, edumazet@...gle.com,
gthelen@...gle.com, gurua@...gle.com, arjunroy@...gle.com,
soheil@...gle.com, hughlynch@...gle.com, leewalsh@...gle.com,
posk@...gle.com, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org,
linuxppc-dev@...ts.ozlabs.org, x86@...nel.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH 12/41] mm: add per-VMA lock and helper functions to
control it
On Tue 17-01-23 19:02:55, Jann Horn wrote:
> +locking maintainers
>
> On Mon, Jan 9, 2023 at 9:54 PM Suren Baghdasaryan <surenb@...gle.com> wrote:
> > Introduce a per-VMA rw_semaphore to be used during page fault handling
> > instead of mmap_lock. Because there are cases when multiple VMAs need
> > to be exclusively locked during VMA tree modifications, instead of the
> > usual lock/unlock pattern we mark a VMA as locked by taking per-VMA lock
> > exclusively and setting vma->lock_seq to the current mm->lock_seq. When
> > mmap_write_lock holder is done with all modifications and drops mmap_lock,
> > it will increment mm->lock_seq, effectively unlocking all VMAs marked as
> > locked.
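For the record, my reading of the scheme described above is roughly the
following sketch (the field and helper names are taken from the quoted
description and the hunk below; everything else is my assumption rather
than the patch verbatim):

	static inline void vma_write_lock(struct vm_area_struct *vma)
	{
		mmap_assert_write_locked(vma->vm_mm);

		/* already marked locked in this mmap_write_lock cycle? */
		if (vma->lock_seq == READ_ONCE(vma->vm_mm->lock_seq))
			return;

		down_write(&vma->lock);
		vma->lock_seq = READ_ONCE(vma->vm_mm->lock_seq);
		up_write(&vma->lock);
	}

i.e. dropping mmap_lock for write later bumps mm->lock_seq, which
"unlocks" every vma marked in that cycle without touching each
vma->lock again.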
> [...]
> > +static inline void vma_read_unlock(struct vm_area_struct *vma)
> > +{
> > + up_read(&vma->lock);
> > +}
>
> One thing that might be gnarly here is that I think you might not be
> allowed to use up_read() to fully release ownership of an object -
> from what I remember, I think that up_read() (unlike something like
> spin_unlock()) can access the lock object after it's already been
> acquired by someone else.
Yes, I think you are right. From a look at the code it seems that the
UAF is quite unlikely, as there is a lot of work to be done between the
vma_write_lock used to prepare the vma for removal and the actual
removal. That doesn't make it any less of a problem though.
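To spell out the race I have in mind (a hypothetical interleaving, not
something I have reproduced):

	page fault path				munmap path
	---------------				-----------
	up_read(&vma->lock)
	  reader count drops to zero
						down_write(&vma->lock) succeeds
						... vma removed from the tree ...
						vm_area_free(vma)
	  up_read() keeps accessing
	  &vma->lock internals
	  (e.g. waiter handling)	<- use-after-free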
> So if you want to protect against concurrent
> deletion, this might have to be something like:
>
> rcu_read_lock(); /* keeps vma alive */
> up_read(&vma->lock);
> rcu_read_unlock();
>
> But I'm not entirely sure about that, the locking folks might know better.
I am not a locking expert, but to me it looks like this should work
because the final cleanup would have to happen after rcu_read_unlock.
Thanks, I had completely missed this aspect of the locking when looking
into the code.
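IOW something along these lines (just a sketch of your snippet above,
assuming vm_area_free always defers the actual freeing via call_rcu):

	static inline void vma_read_unlock(struct vm_area_struct *vma)
	{
		/* the RCU read side keeps the vma - and so vma->lock -
		 * alive until up_read() has stopped touching the rwsem */
		rcu_read_lock();
		up_read(&vma->lock);
		rcu_read_unlock();
	}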
Btw. looking at this again, I have fully realized how hard it actually
is to see that vm_area_free is guaranteed to sync up with ongoing
readers. vma manipulation functions like __vma_adjust make my head
spin. Would it make more sense to have an RCU-style synchronization
point in vm_area_free directly before the call_rcu? This would add the
overhead of an uncontended down_write, of course.
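To make that more concrete, I mean something like the following sketch
(the rcu_head field and the callback name are made up for illustration):

	void vm_area_free(struct vm_area_struct *vma)
	{
		/* explicit synchronization point: take and drop the vma
		 * lock for write right here, so it is obvious that
		 * readers have been waited for before the vma is handed
		 * to RCU, instead of relying on a vma_write_lock buried
		 * somewhere in the unmap path */
		down_write(&vma->lock);
		up_write(&vma->lock);
		call_rcu(&vma->vm_rcu, __vm_area_free_rcu);
	}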
--
Michal Hocko
SUSE Labs