Message-ID: <CAG48ez1jT0T69t62wrduEWLSwY0UZpm0CwK4tC3uTPiWJ-powg@mail.gmail.com>
Date:   Fri, 1 Dec 2023 19:40:06 +0100
From:   Jann Horn <jannh@...gle.com>
To:     David Laight <David.Laight@...lab.com>
Cc:     Jens Axboe <axboe@...nel.dk>,
        Pavel Begunkov <asml.silence@...il.com>,
        io-uring <io-uring@...r.kernel.org>,
        kernel list <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
        Waiman Long <longman@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: mutex/spinlock semantics [was: Re: io_uring: incorrect assumption
 about mutex behavior on unlock?]

On Fri, Dec 1, 2023 at 7:30 PM David Laight <David.Laight@...lab.com> wrote:
>
> From: Jann Horn
> > Sent: 01 December 2023 16:41
> >
> > mutex_unlock() has a different API contract compared to spin_unlock().
> > spin_unlock() can be used to release ownership of an object, so that
> > as soon as the spinlock is unlocked, another task is allowed to free
> > the object containing the spinlock.
> > mutex_unlock() does not support this kind of usage: The caller of
> > mutex_unlock() must ensure that the mutex stays alive until
> > mutex_unlock() has returned.
>
> The problem sequence might be:
>         Thread A                Thread B
>         mutex_lock()
>                                 code to stop mutex being requested
>                                 ...
>                                 mutex_lock() - sleeps
>         mutex_unlock()...
>                 Waiters woken...
>                 ISR and/or preempted
>                                 - wakes up
>                                 mutex_unlock()
>                                 free()
>                 ... more kernel code accesses the mutex
>                 BOOOM
>
> What happens in a PREEMPT_RT kernel, where most of the spin_unlock()
> calls get replaced by mutex_unlock()?
> Seems like they can potentially access a freed mutex?
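
(To make the non-RT hazard concrete: very roughly, and with made-up
helper names rather than the real mutex internals, an unlock of a
sleeping lock looks like

        void mutex_unlock_sketch(struct mutex *lock)
        {
                hand_off_or_clear_owner(lock);  /* lock is up for grabs now */
                /*
                 * If a waiter acquires the mutex here, unlocks it and
                 * frees the object containing it, the next line is a
                 * use-after-free.
                 */
                wake_up_next_waiter(lock);
        }

which is why the caller must keep the mutex alive until mutex_unlock()
has returned.)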

RT spinlocks don't use mutexes; they use rtmutexes, and I think those
explicitly support this use case. See the call path:

spin_unlock -> rt_spin_unlock -> rt_mutex_slowunlock

rt_mutex_slowunlock() has a comment, added in commit 27e35715df54
("rtmutex: Plug slow unlock race"):

         * We must be careful here if the fast path is enabled. If we
         * have no waiters queued we cannot set owner to NULL here
         * because of:
         *
         * foo->lock->owner = NULL;
         *                      rtmutex_lock(foo->lock);   <- fast path
         *                      free = atomic_dec_and_test(foo->refcnt);
         *                      rtmutex_unlock(foo->lock); <- fast path
         *                      if (free)
         *                              kfree(foo);
         * raw_spin_unlock(foo->lock->wait_lock);

The commit message also explicitly refers to wanting to support this
pattern with spin_unlock().
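
Concretely, the pattern being supported looks something like this on
the caller side (struct foo and foo_put() are just made up here, to
mirror the comment above):

        /* kernel context: linux/spinlock.h, linux/atomic.h, linux/slab.h */
        struct foo {
                spinlock_t lock;    /* an rtmutex-based lock on PREEMPT_RT */
                atomic_t refcnt;
        };

        void foo_put(struct foo *f)
        {
                bool free;

                spin_lock(&f->lock);
                /* ... last access to the object under the lock ... */
                free = atomic_dec_and_test(&f->refcnt);
                spin_unlock(&f->lock);
                if (free)
                        kfree(f);
        }

As soon as one task's spin_unlock() has made the lock available,
another task may do the final foo_put() and kfree() the object, so the
unlock path must not touch the lock's memory after that point.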
