Message-Id: <1433867020-7746-3-git-send-email-lina.iyer@linaro.org>
Date:	Tue,  9 Jun 2015 10:23:40 -0600
From:	Lina Iyer <lina.iyer@...aro.org>
To:	ohad@...ery.com
Cc:	linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org,
	Lina Iyer <lina.iyer@...aro.org>,
	Jeffrey Hugo <jhugo@...eaurora.org>,
	Bjorn Andersson <bjorn.andersson@...ymobile.com>,
	Andy Gross <agross@...eaurora.org>
Subject: [PATCH RFC v2 2/2] hwspinlock: qcom: Lock #7 is a special lock, uses dynamic proc_id

Hwspinlocks are widely used between processors in an SoC, and also
between elevation levels within the same processor.  QCOM SoCs use a
hwspinlock to serialize entry into a low power mode when the context
switches from Linux to the secure monitor.

Lock #7 has been assigned for this purpose.  To distinguish the CPU
core holding the lock from another core contending for it, the proc id
written into the lock is (128 + cpu id).  This value is unique per CPU
core, so when one core holds the hwspinlock, the other cores wait for
it to be released because they would write a different proc id.  This
scheme applies to lock #7 only.
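
For illustration, the owner value written for lock #7 works out as in
the sketch below.  It mirrors the __qcom_get_proc_id() helper added by
this patch and is shown only to make the encoding explicit; the
qcom_owner_id() name is illustrative, not part of the patch:

	#define QCOM_MUTEX_APPS_PROC_ID		1
	#define QCOM_MUTEX_CPUIDLE_OFFSET	128
	#define QCOM_CPUIDLE_LOCK		7

	/*
	 * Lock #7: each core writes its own owner id,
	 *   CPU0 -> 128, CPU1 -> 129, CPU2 -> 130, ...
	 * All other locks keep the single APPS owner id (1).
	 */
	static u32 qcom_owner_id(int lock_id, int cpu)
	{
		return lock_id == QCOM_CPUIDLE_LOCK ?
			QCOM_MUTEX_CPUIDLE_OFFSET + cpu :
			QCOM_MUTEX_APPS_PROC_ID;
	}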

Declare lock #7 as raw capable, so the hwspinlock framework does not
enforce acquiring a s/w spinlock before acquiring the hwspinlock.
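
For reference, a Linux-side user of this lock could look roughly like
the sketch below.  It only illustrates the intended use with the
existing hwspinlock client API; the idle-entry placement and error
handling are assumptions, not part of this patch:

	#include <linux/hwspinlock.h>

	#define QCOM_CPUIDLE_LOCK	7

	static struct hwspinlock *cpuidle_lock;

	/* at probe time: reserve the dedicated lock */
	cpuidle_lock = hwspin_lock_request_specific(QCOM_CPUIDLE_LOCK);

	/* on idle entry: serialize against the secure monitor */
	if (!hwspin_trylock(cpuidle_lock)) {
		/* ... enter the low power mode ... */
		hwspin_unlock(cpuidle_lock);
	}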

Cc: Jeffrey Hugo <jhugo@...eaurora.org>
Cc: Bjorn Andersson <bjorn.andersson@...ymobile.com>
Cc: Andy Gross <agross@...eaurora.org>
Signed-off-by: Lina Iyer <lina.iyer@...aro.org>
---
 drivers/hwspinlock/qcom_hwspinlock.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/drivers/hwspinlock/qcom_hwspinlock.c b/drivers/hwspinlock/qcom_hwspinlock.c
index 93b62e0..59278b0 100644
--- a/drivers/hwspinlock/qcom_hwspinlock.c
+++ b/drivers/hwspinlock/qcom_hwspinlock.c
@@ -25,16 +25,26 @@
 
 #include "hwspinlock_internal.h"
 
-#define QCOM_MUTEX_APPS_PROC_ID	1
-#define QCOM_MUTEX_NUM_LOCKS	32
+#define QCOM_MUTEX_APPS_PROC_ID		1
+#define QCOM_MUTEX_CPUIDLE_OFFSET	128
+#define QCOM_CPUIDLE_LOCK		7
+#define QCOM_MUTEX_NUM_LOCKS		32
+
+static inline u32 __qcom_get_proc_id(struct hwspinlock *lock)
+{
+	return hwspin_lock_get_id(lock) == QCOM_CPUIDLE_LOCK ?
+			(QCOM_MUTEX_CPUIDLE_OFFSET + smp_processor_id()) :
+			QCOM_MUTEX_APPS_PROC_ID;
+}
 
 static int qcom_hwspinlock_trylock(struct hwspinlock *lock)
 {
 	struct regmap_field *field = lock->priv;
 	u32 lock_owner;
 	int ret;
+	u32 proc_id = __qcom_get_proc_id(lock);
 
-	ret = regmap_field_write(field, QCOM_MUTEX_APPS_PROC_ID);
+	ret = regmap_field_write(field, proc_id);
 	if (ret)
 		return ret;
 
@@ -42,7 +52,7 @@ static int qcom_hwspinlock_trylock(struct hwspinlock *lock)
 	if (ret)
 		return ret;
 
-	return lock_owner == QCOM_MUTEX_APPS_PROC_ID;
+	return lock_owner == proc_id;
 }
 
 static void qcom_hwspinlock_unlock(struct hwspinlock *lock)
@@ -57,7 +67,7 @@ static void qcom_hwspinlock_unlock(struct hwspinlock *lock)
 		return;
 	}
 
-	if (lock_owner != QCOM_MUTEX_APPS_PROC_ID) {
+	if (lock_owner != __qcom_get_proc_id(lock)) {
 		pr_err("%s: spinlock not owned by us (actual owner is %d)\n",
 				__func__, lock_owner);
 	}
@@ -129,6 +139,8 @@ static int qcom_hwspinlock_probe(struct platform_device *pdev)
 							     regmap, field);
 	}
 
+	bank->lock[QCOM_CPUIDLE_LOCK].hwcaps = HWL_CAP_ALLOW_RAW;
+
 	pm_runtime_enable(&pdev->dev);
 
 	ret = hwspin_lock_register(bank, &pdev->dev, &qcom_hwspinlock_ops,
-- 
2.1.4
