Date:	Mon, 28 Jul 2014 20:41:39 -0700
From:	Jason Low <jason.low2@...com>
To:	Davidlohr Bueso <davidlohr@...com>
Cc:	peterz@...radead.org, mingo@...nel.org, aswin@...com,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH -tip/master v2] locking/mutex: Refactor optimistic
 spinning code

On Mon, 2014-07-28 at 19:55 -0700, Davidlohr Bueso wrote:
> +static bool mutex_optimistic_spin(struct mutex *lock,
> +				  struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
> +{
> +	struct task_struct *task = current;
> +
> +	if (!mutex_can_spin_on_owner(lock))
> +		return false;
> +
> +	if (!osq_lock(&lock->osq))
> +		return false;

In the !osq_lock() case, we could have exited the cancellable MCS spinlock
due to need_resched(). However, we would then return from the function
without doing the need_resched() check below. Perhaps we can add something
like a "goto out" which jumps to that check?

mutex_can_spin_on_owner() also returns false on need_resched(), so the
same applies to that early-exit path.
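
For example, something along these lines (untested sketch; the "done"
label and the elided loop body are my own illustration, not part of the
posted patch):

	if (!mutex_can_spin_on_owner(lock))
		goto done;

	if (!osq_lock(&lock->osq))
		goto done;

	while (true) {
		/* ... spin loop as in the patch, breaks out on failure ... */
	}

	osq_unlock(&lock->osq);
done:
	/*
	 * Both early exits above can be due to need_resched(), so
	 * reschedule before we try-lock the mutex, as the existing
	 * comment below describes.
	 */
	if (need_resched())
		schedule_preempt_disabled();

	return false;

That way both need_resched()-triggered exits go through the reschedule
check instead of returning with the reschedule still pending.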

> +	while (true) {
> +		struct task_struct *owner;
> +
> +		if (use_ww_ctx && ww_ctx->acquired > 0) {
> +			struct ww_mutex *ww;
> +
> +			ww = container_of(lock, struct ww_mutex, base);
> +			/*
> +			 * If ww->ctx is set the contents are undefined;
> +			 * only by acquiring wait_lock is there a guarantee
> +			 * that they are not invalid when read.
> +			 *
> +			 * As such, when deadlock detection needs to be
> +			 * performed, optimistic spinning cannot be done.
> +			 */
> +			if (ACCESS_ONCE(ww->ctx))
> +				break;
> +		}
> +
> +		/*
> +		 * If there's an owner, wait for it to either
> +		 * release the lock or go to sleep.
> +		 */
> +		owner = ACCESS_ONCE(lock->owner);
> +		if (owner && !mutex_spin_on_owner(lock, owner))
> +			break;
> +
> +		/* Try to acquire the mutex if it is unlocked. */
> +		if (mutex_try_to_acquire(lock)) {
> +			if (use_ww_ctx) {
> +				struct ww_mutex *ww;
> +				ww = container_of(lock, struct ww_mutex, base);
> +
> +				ww_mutex_set_context_fastpath(ww, ww_ctx);
> +			}
> +
> +			mutex_set_owner(lock);
> +			osq_unlock(&lock->osq);
> +			return true;
> +		}
> +
> +		/*
> +		 * When there's no owner, we might have preempted the owner
> +		 * between it acquiring the lock and setting the owner
> +		 * field. If we're an RT task, that would live-lock since
> +		 * we won't let the owner complete.
> +		 */
> +		if (!owner && (need_resched() || rt_task(task)))
> +			break;
> +
> +		/*
> +		 * The cpu_relax_lowlatency() call is a compiler barrier
> +		 * which forces everything in this loop to be re-loaded.
> +		 * We don't need memory barriers as we'll eventually
> +		 * observe the right values at the cost of a few extra spins.
> +		 */
> +		cpu_relax_lowlatency();
> +	}
> +
> +	osq_unlock(&lock->osq);
> +
> +	/*
> +	 * If we fell out of the spin path because of need_resched(),
> +	 * reschedule now, before we try-lock the mutex. This avoids getting
> +	 * scheduled out right after we obtained the mutex.
> +	 */
> +	if (need_resched())
> +		schedule_preempt_disabled();
> +
> +	return false;
> +}


