Date:	Fri, 22 Jan 2016 11:53:12 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Jason Low <jason.low2@...com>
Cc:	Waiman Long <waiman.long@....com>,
	Ding Tianhong <dingtianhong@...wei.com>,
	Ingo Molnar <mingo@...hat.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Davidlohr Bueso <dave@...olabs.net>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	"Paul E. McKenney" <paulmck@...ibm.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Will Deacon <Will.Deacon@....com>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Waiman Long <Waiman.Long@...com>
Subject: Re: [PATCH RFC] locking/mutexes: don't spin on owner when wait list
 is not NULL.

On Fri, Jan 22, 2016 at 02:20:19AM -0800, Jason Low wrote:

> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -543,6 +543,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>  	lock_contended(&lock->dep_map, ip);
>  
>  	for (;;) {
> +		bool acquired = false;
> +
>  		/*
>  		 * Lets try to take the lock again - this is needed even if
>  		 * we get here for the first time (shortly after failing to
> @@ -577,7 +579,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>  		/* didn't get the lock, go to sleep: */
>  		spin_unlock_mutex(&lock->wait_lock, flags);
>  		schedule_preempt_disabled();
> +
> +		if (mutex_is_locked(lock))
> +			acquired = mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx);
>  		spin_lock_mutex(&lock->wait_lock, flags);
> +		if (acquired)
> +			break;
>  	}
>  	__set_task_state(task, TASK_RUNNING);

I think the problem here is that mutex_optimistic_spin() leaves the
mutex->count == 0, even though we have waiters (us at the very least).

But this should be easily fixed, since if we acquired, we should be the
one releasing, so there's no race.

So something like this:

		if (acquired) {
			atomic_set(&lock->count, -1);
			break;
		}

Should deal with that -- we'll set it to 0 again a little further down
if the list ends up empty.


There might be other details, but this is the one that stood out.
