Date:	Wed, 18 Mar 2015 10:45:32 -0600
From:	Lina Iyer <lina.iyer@...aro.org>
To:	Bjorn Andersson <bjorn.andersson@...ymobile.com>
Cc:	Ohad Ben-Cohen <ohad@...ery.com>,
	"linux-arm-msm@...r.kernel.org" <linux-arm-msm@...r.kernel.org>,
	Jeffrey Hugo <jhugo@...eaurora.org>,
	Suman Anna <s-anna@...com>, Andy Gross <agross@...eaurora.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v6 2/2] hwspinlock: qcom: Add support for Qualcomm HW
 Mutex block

On Wed, Mar 18 2015 at 09:56 -0600, Bjorn Andersson wrote:
>On Thu 12 Mar 12:31 PDT 2015, Lina Iyer wrote:
>
>> On Fri, Feb 27 2015 at 15:30 -0700, Bjorn Andersson wrote:
>> >Add driver for Qualcomm Hardware Mutex block found in many Qualcomm
>> >SoCs.
>> >
>> >Based on initial effort by Kumar Gala <galak@...eaurora.org>
>> >
>> >Signed-off-by: Bjorn Andersson <bjorn.andersson@...ymobile.com>
>> >---
>> >
>>
>> [...]
>>
>> >+#include "hwspinlock_internal.h"
>> >+
>> >+#define QCOM_MUTEX_APPS_PROC_ID	1
>> Hi Bjorn,
>>
>> Not all locks use 1 to indicate that they are locked. For example, lock
>> index 7 is used by the cpuidle driver between HLOS and SCM; it uses a
>> write value of (128 + smp_processor_id()) to lock.
>>
>
>In other words, it's a magic number that makes sure that no more than one
>cpu enters TZ sleep code at a time.
>
Right, it's a magic number of sorts.
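
To make it concrete, something along these lines is what I have in mind:
an untested sketch (made-up names; I'm assuming a per-lock regmap_field in
lock->priv here) of a trylock that takes the value to write instead of
hard-coding QCOM_MUTEX_APPS_PROC_ID, so lock 7 could pass
(128 + smp_processor_id()):

#include <linux/regmap.h>
#include "hwspinlock_internal.h"

static int qcom_hwspinlock_trylock_val(struct hwspinlock *lock, u32 val)
{
	struct regmap_field *field = lock->priv;
	u32 owner;

	/* Try to take the mutex by writing our value into it. */
	if (regmap_field_write(field, val))
		return 0;	/* register access failed, treat as not taken */

	/* We own the lock only if our own value reads back. */
	if (regmap_field_read(field, &owner))
		return 0;

	return owner == val;
}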

>> A cpu acquires the remote spin lock and calls into SCM to terminate the
>> power-down sequence, passing along the state of the L2. The lock helps
>> guarantee that the last core to hold the spinlock has the most up-to-date
>> value for the L2 flush flag.
>>
>
>Yeah, I remember having to dig out the deadlock related to the
>introduction of that logic on my side (turned out to have an old TZ).
>
>There's already mutual exclusion and reference counting within TZ to
>make sure we're not turning off the caches unless this is the last core
>going down.

Yes, there is. But Linux's notion of which core is the last one going down
and TZ's may not match. Say, for example, two cpus are going down from
Linux, cpu0 and cpu1. cpu0 was the last core calling into TZ from Linux;
cpu1 had already done so, but then started handling an FIQ and was blocked
there while cpu0 went through TZ. When each cpu calls into TZ, we provide
TZ with the L2 flush flag so that TZ can also flush its secure lines
before powering the L2 down. The L2 flush flag that a cpu submits is its
own view of the system state. To get TZ to recognize the last valid L2
flush flag value from Linux, we need the hwmutex.
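
Roughly, the sequence on each cpu looks like this (just a sketch:
scm_lock_acquire() and scm_cpu_power_down() are made-up stand-ins for the
actual remote-lock and SCM calls, and who releases the mutex is part of
the HLOS/SCM protocol that I'm eliding here):

#include <linux/smp.h>
#include <linux/types.h>

/* Made-up stand-ins for the real remote-lock and SCM interfaces. */
extern void scm_lock_acquire(u32 lock_val);
extern void scm_cpu_power_down(bool l2_flush_flag);

static void cpu_enter_power_down(bool l2_flush_flag)
{
	/* Lock 7 is taken with a per-cpu value, not a fixed proc id. */
	u32 lock_val = 128 + smp_processor_id();

	/*
	 * Grab the hw mutex so another cpu that got held up (e.g. in an
	 * FIQ) cannot slip in after us with a stale view of the L2 state.
	 */
	scm_lock_acquire(lock_val);

	/*
	 * Hand TZ our L2 flush flag. Because we hold the hw mutex, the
	 * flag TZ ends up acting on is the one from the last cpu in.
	 */
	scm_cpu_power_down(l2_flush_flag);
}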

>I presume that the reason behind the hwmutex logic is to make sure that
>with multiple cores racing to sleep only one of them will flush the
>caches in Linux and will be the last entering TZ. Can you confirm this?
>
It's more about passing the flush flag than about flushing the cache itself, per se.

>> >+#define QCOM_MUTEX_NUM_LOCKS	32
>>
>> Also, talking to Jeff, it seems that out of the 32 locks defined, only
>> 8 are accessible from Linux. So it's unnecessary and probably incorrect
>> to assume that there are 32 locks available.
>>
>
>The hardware block has 32 locks, and which locks this particular Linux
>system is allowed to access is a matter of configuration.
>
Understood. But while the hardware may provide 32, Linux may only be
allowed to use a subset of them; registering all 32 gives a false sense of
the number of locks actually available.
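
Purely as an illustration of what I mean (the "qcom,num-locks" property
below is made up, it is not in the posted binding), the number of locks
registered could come from configuration instead of the fixed define:

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/of.h>

#define QCOM_MUTEX_NUM_LOCKS	32	/* size of the hardware block */

/* Cap the registered locks to what this image is actually allowed to use. */
static u32 qcom_hwspinlock_max_locks(struct device *dev)
{
	u32 num_locks = QCOM_MUTEX_NUM_LOCKS;

	/* Hypothetical property; fall back to the full block if absent. */
	of_property_read_u32(dev->of_node, "qcom,num-locks", &num_locks);

	return min_t(u32, num_locks, QCOM_MUTEX_NUM_LOCKS);
}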

>Regards,
>Bjorn
