Message-ID: <8762wyyv99.fsf@deeprootsystems.com>
Date: Tue, 19 Oct 2010 09:58:42 -0700
From: Kevin Hilman <khilman@...prootsystems.com>
To: Ohad Ben-Cohen <ohad@...ery.com>
Cc: <linux-omap@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<akpm@...ux-foundation.org>, Greg KH <greg@...ah.com>,
Tony Lindgren <tony@...mide.com>,
Benoit Cousson <b-cousson@...com>,
Grant Likely <grant.likely@...retlab.ca>,
Hari Kanigeri <h-kanigeri2@...com>, Suman Anna <s-anna@...com>,
Simon Que <sque@...com>,
"Krishnamoorthy\, Balaji T" <balajitk@...com>
Subject: Re: [PATCH 1/3] drivers: misc: add omap_hwspinlock driver
Ohad Ben-Cohen <ohad@...ery.com> writes:
> From: Simon Que <sque@...com>
>
> Add driver for OMAP's Hardware Spinlock module.
>
> The OMAP Hardware Spinlock module, initially introduced in OMAP4,
> provides hardware assistance for synchronization between the
> multiple processors in the device (Cortex-A9, Cortex-M3 and
> C64x+ DSP).
[...]
> +/**
> + * omap_hwspin_trylock() - attempt to lock a specific hwspinlock
> + * @hwlock: a hwspinlock which we want to trylock
> + * @flags: a pointer to where the caller's interrupt state will be saved
> + *
> + * This function attempts to lock the underlying hwspinlock. Unlike
> + * hwspinlock_lock, this function will immediately fail if the hwspinlock
> + * is already taken.
> + *
> + * Upon a successful return from this function, preemption and interrupts
> + * are disabled, so the caller must not sleep, and is advised to release
> + * the hwspinlock as soon as possible. This is required in order to minimize
> + * remote cores polling on the hardware interconnect.
> + *
> + * This function can be called from any context.
> + *
> + * Returns 0 if we successfully locked the hwspinlock, -EBUSY if
> + * the hwspinlock was already taken, and -EINVAL if @hwlock is invalid.
> + */
> +int omap_hwspin_trylock(struct omap_hwspinlock *hwlock, unsigned long *flags)
> +{
> + u32 ret;
> +
> + if (IS_ERR_OR_NULL(hwlock)) {
> + pr_err("invalid hwlock\n");
> + return -EINVAL;
> + }
> +
> + /*
> + * This spin_trylock_irqsave serves two purposes:
> + *
> + * 1. Disable local interrupts and preemption, in order to
> + * minimize the period of time in which the hwspinlock
> + * is taken (so the caller will not be preempted). This is
> + * important in order to minimize the possible polling on
> + * the hardware interconnect by a remote user of this lock.
> + *
> + * 2. Make this hwspinlock primitive SMP-safe (so we can try to
> + * take it from additional contexts on the local cpu)
> + */
3. Ensures that in_atomic/might_sleep checks catch potential problems
with hwspinlock usage (e.g. scheduler checks like 'scheduling while
atomic' etc.)
> + if (!spin_trylock_irqsave(&hwlock->lock, *flags))
> + return -EBUSY;
> +
> + /* attempt to acquire the lock by reading its value */
> + ret = readl(hwlock->addr);
> +
> + /* lock is already taken */
> + if (ret == SPINLOCK_TAKEN) {
> + spin_unlock_irqrestore(&hwlock->lock, *flags);
> + return -EBUSY;
> + }
> +
> + /*
> + * We can be sure the other core's memory operations
> + * are observable to us only _after_ we successfully take
> + * the hwspinlock, so we must make sure that subsequent memory
> + * operations will not be reordered before we actually take the
> + * hwspinlock.
> + * Note: the implicit memory barrier of the spinlock above is too
> + * early, so we need this additional explicit memory barrier.
> + */
> + mb();
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(omap_hwspin_trylock);
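
For readers following along, here's a minimal usage sketch of the
trylock/unlock pair as I read it. Everything other than the two exported
calls quoted above is hypothetical (the caller name and the shared counter
are made up for illustration, and obtaining the struct omap_hwspinlock *
handle is assumed to happen via the request API elsewhere in this series):

	/* hypothetical caller; only omap_hwspin_trylock/unlock are real */
	static int update_shared_counter(struct omap_hwspinlock *hwlock,
					 void __iomem *shared_counter)
	{
		unsigned long flags;
		int ret;

		/* single attempt: -EBUSY means the remote core holds it */
		ret = omap_hwspin_trylock(hwlock, &flags);
		if (ret)
			return ret;

		/*
		 * Preemption and local interrupts are now disabled, so the
		 * critical section must be short and must not sleep.
		 */
		writel(readl(shared_counter) + 1, shared_counter);

		return omap_hwspin_unlock(hwlock, &flags);
	}

Note how short the critical section is kept: per the comment above, a
remote core may be busy-polling the lock on the interconnect for the
whole time we hold it.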
[...]
> +/**
> + * omap_hwspinlock_unlock() - unlock a specific hwspinlock
minor nit: s/lock_unlock/_unlock/ to match name below
> + * @hwlock: a previously-acquired hwspinlock which we want to unlock
> + * @flags: a pointer to the caller's saved interrupts state
> + *
> + * This function will unlock a specific hwspinlock, enable preemption and
> + * restore the interrupt state. @hwlock must have been taken (by us!) before
> + * calling this function: it is a bug to call unlock on a @hwlock that was
> + * not taken by us, i.e. by one of omap_hwspin_{lock, trylock, lock_timeout}.
> + *
> + * This function can be called from any context.
> + *
> + * Returns 0 on success, or -EINVAL if @hwlock is invalid.
> + */
> +int omap_hwspin_unlock(struct omap_hwspinlock *hwlock, unsigned long *flags)
> +{
> + if (IS_ERR_OR_NULL(hwlock)) {
> + pr_err("invalid hwlock\n");
> + return -EINVAL;
> + }
> +
> + /*
> + * We must make sure that memory operations, done before unlocking
> + * the hwspinlock, will not be reordered after the lock is released.
> + * The memory barrier induced by the spin_unlock below is too late:
> + * the other core is going to access memory soon after it takes
> + * the hwspinlock, and by then we want to be sure our memory operations
> + * were already observable.
> + */
> + mb();
> +
> + /* release the lock by writing 0 to it (NOTTAKEN) */
> + writel(SPINLOCK_NOTTAKEN, hwlock->addr);
> +
> + /* undo the spin_trylock_irqsave called in the locking function */
> + spin_unlock_irqrestore(&hwlock->lock, *flags);
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(omap_hwspin_unlock);
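
And to make the barrier reasoning above concrete, a hedged sketch of the
consumer side: a local caller that polls with trylock until the remote
core releases the lock, then reads a message the remote core wrote while
holding it. Again, everything except the two exported calls is
illustrative:

	/* illustrative consumer: spin until the remote core drops the lock */
	static int read_remote_message(struct omap_hwspinlock *hwlock,
				       void __iomem *msg, u32 *out)
	{
		unsigned long flags;
		int ret;

		while ((ret = omap_hwspin_trylock(hwlock, &flags)) == -EBUSY)
			cpu_relax();
		if (ret)
			return ret;	/* -EINVAL: bad handle */

		/*
		 * The mb() after a successful trylock, paired with the mb()
		 * before the NOTTAKEN write in omap_hwspin_unlock(), ensures
		 * the remote core's writes to the message area are visible
		 * here before we read them.
		 */
		*out = readl(msg);

		return omap_hwspin_unlock(hwlock, &flags);
	}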
[...]
Kevin
--