Date:   Sun, 31 Mar 2019 14:14:13 -0400
From:   Waiman Long <>
To:     Jan Harkes <>
Cc:     Ingo Molnar <>,
        Peter Zijlstra <>,
        Alexander Viro <>,
        Pedro Cuadra Chamorro <>,
Subject: Re: fs/coda oops bisected to (925b9cd1b8) "locking/rwsem: Make owner
 store task pointer of last owning reader"

On 03/31/2019 12:00 AM, Jan Harkes wrote:
> On Fri, Mar 29, 2019 at 05:53:22PM +0000, Waiman Long wrote:
>> On 03/29/2019 12:10 PM, Jan Harkes wrote:
>>> I knew I definitely had never seen this problem with the stable kernel
>>> on Ubuntu xenial (4.4) so I bisected between v4.4 and v5.1-rc2 and ended
>>> up at
>>>     # first bad commit: [925b9cd1b89a94b7124d128c80dfc48f78a63098]
>>>     # locking/rwsem: Make owner store task pointer of last owning reader
>>> When I revert this particular commit on 5.1-rc2, I am not able to
>>> reproduce the problem anymore.
>> Without CONFIG_DEBUG_RWSEMS, the only behavioral change of this patch is
>> to do an unconditional write of the task_struct pointer into sem->owner
>> after acquiring the read lock in down_read(). Before this patch, it does
> I tried with just that change, but that is not at fault. It is also hard
> to believe we have a use-after-free issue, because we are using a
> spinlock on the inode that is held in place by the file we are releasing.

One possibility is that there is a stale reference to the memory
currently occupied by the spinlock. If that memory location was
previously part of an rwsem structure and some task is still writing to
it through an old pointer, you may get memory corruption.

> After trying various variations the minimal change that fixes the soft
> lockup is as follows. Without this patch I get a reliable lockup, with
> the patch everything works as expected.
>     diff --git a/kernel/locking/rwsem.h b/kernel/locking/rwsem.h
>     index bad2bca..0cc437d 100644
>     --- a/kernel/locking/rwsem.h
>     +++ b/kernel/locking/rwsem.h
>     @@ -61,8 +61,7 @@ static inline void rwsem_clear_owner(struct rw_semaphore *sem)
>      static inline void __rwsem_set_reader_owned(struct rw_semaphore *sem,
>                                                 struct task_struct *owner)
>      {
>     -	unsigned long val = (unsigned long)owner | RWSEM_READER_OWNED
>     -						 | RWSEM_ANONYMOUSLY_OWNED;
>     +	unsigned long val = RWSEM_READER_OWNED | RWSEM_ANONYMOUSLY_OWNED;
>             WRITE_ONCE(sem->owner, (struct task_struct *)val);
>      }

The change above clears all the upper bytes of owner to 0, while the
original code writes some non-zero values (the task pointer bits) into
the upper bytes.

> I'll continue digging if I can find a reason why. So far I've only found
> one place where rwsem->owner is modified while not holding a lock, but
> changing that doesn't make a difference for my particular case.
>     diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
>     index 03cb4b6f842e..fe696a8b57f3 100644
>     --- a/include/linux/percpu-rwsem.h
>     +++ b/include/linux/percpu-rwsem.h
>     @@ -114,11 +114,11 @@ extern void percpu_free_rwsem(struct percpu_rw_semaphore *);
>      static inline void percpu_rwsem_release(struct percpu_rw_semaphore *sem,
>                                             bool read, unsigned long ip)
>      {
>     -	lock_release(&sem->rw_sem.dep_map, 1, ip);
>      #ifdef CONFIG_RWSEM_SPIN_ON_OWNER
>             if (!read)
>                     sem->rw_sem.owner = RWSEM_OWNER_UNKNOWN;
>      #endif
>     +	lock_release(&sem->rw_sem.dep_map, 1, ip);
>      }
>      static inline void percpu_rwsem_acquire(struct percpu_rw_semaphore *sem,
> Jan

The rwsem is still locked at that point. The code above only releases
the current task from being the lock holder of the rwsem from lockdep's
perspective; later on, some other task will take over ownership of the
rwsem without the lock actually being released in the interim. So I
don't think this code is part of the problem.

