Message-ID: <20150312193150.GB497@linaro.org>
Date: Thu, 12 Mar 2015 13:31:50 -0600
From: Lina Iyer <lina.iyer@...aro.org>
To: Bjorn Andersson <bjorn.andersson@...ymobile.com>
Cc: Ohad Ben-Cohen <ohad@...ery.com>, linux-arm-msm@...r.kernel.org,
Jeffrey Hugo <jhugo@...eaurora.org>,
Suman Anna <s-anna@...com>, Andy Gross <agross@...eaurora.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 2/2] hwspinlock: qcom: Add support for Qualcomm HW
Mutex block
On Fri, Feb 27 2015 at 15:30 -0700, Bjorn Andersson wrote:
>Add driver for Qualcomm Hardware Mutex block found in many Qualcomm
>SoCs.
>
>Based on initial effort by Kumar Gala <galak@...eaurora.org>
>
>Signed-off-by: Bjorn Andersson <bjorn.andersson@...ymobile.com>
>---
>
[...]
>+#include "hwspinlock_internal.h"
>+
>+#define QCOM_MUTEX_APPS_PROC_ID 1
Hi Bjorn,
Not all locks use 1 to indicate they are locked. For example, lock index 7 is
used by the cpuidle driver between HLOS and SCM; it uses a write value of
(128 + smp_processor_id()) to lock.
A CPU acquires the remote spinlock and calls into SCM to terminate the
power-down sequence, passing along the state of the L2. The lock helps
guarantee that the last core to hold the spinlock has the most up-to-date
value for the L2 flush flag.
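Something along these lines is what I have in mind (just a sketch, not part
of your patch; the function name and the owner parameter are made up here to
illustrate letting the written value differ per lock):

static int qcom_hwspinlock_trylock_owner(struct hwspinlock *lock, u32 owner)
{
	struct regmap_field *field = lock->priv;
	u32 lock_owner;
	int ret;

	/*
	 * Write the caller-specific value, e.g. QCOM_MUTEX_APPS_PROC_ID
	 * for normal users or (128 + smp_processor_id()) for the
	 * cpuidle/SCM lock.
	 */
	ret = regmap_field_write(field, owner);
	if (ret)
		return ret;

	ret = regmap_field_read(field, &lock_owner);
	if (ret)
		return ret;

	/* The lock is ours only if we read back the value we wrote. */
	return lock_owner == owner;
}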
>+#define QCOM_MUTEX_NUM_LOCKS 32
Also, talking to Jeff, it seems that out of the 32 locks defined only 8 are
accessible from Linux. So it is unnecessary, and probably incorrect, to
assume that all 32 locks are available; see the sketch below.
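If the number of accessible locks needs to be constrained, something like
this could work (purely illustrative; the "qcom,num-locks" property is
hypothetical and not part of your binding):

static int qcom_hwspinlock_register_accessible(struct platform_device *pdev,
					       struct hwspinlock_device *bank)
{
	u32 num_locks = QCOM_MUTEX_NUM_LOCKS;

	/*
	 * Let the DT limit how many of the 32 hardware locks Linux
	 * actually registers with the hwspinlock core; fall back to
	 * all of them if the property is absent.
	 */
	of_property_read_u32(pdev->dev.of_node, "qcom,num-locks", &num_locks);

	return hwspin_lock_register(bank, &pdev->dev, &qcom_hwspinlock_ops,
				    0, num_locks);
}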
Thanks,
Lina
>+{
>+ struct regmap_field *field = lock->priv;
>+ u32 lock_owner;
>+ int ret;
>+
>+ ret = regmap_field_write(field, QCOM_MUTEX_APPS_PROC_ID);
>+ if (ret)
>+ return ret;
>+
>+ ret = regmap_field_read(field, &lock_owner);
>+ if (ret)
>+ return ret;
>+
>+ return lock_owner == QCOM_MUTEX_APPS_PROC_ID;
>+}
>+