Date:	Tue, 19 May 2015 15:13:22 -0500
From:	Andy Gross <agross@...eaurora.org>
To:	Lina Iyer <lina.iyer@...aro.org>
Cc:	Ohad Ben-Cohen <ohad@...ery.com>, "Anna, Suman" <s-anna@...com>,
	Bjorn Andersson <Bjorn.Andersson@...ymobile.com>,
	"linux-arm-msm@...r.kernel.org" <linux-arm-msm@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Kumar Gala <galak@...eaurora.org>,
	Jeffrey Hugo <jhugo@...eaurora.org>
Subject: Re: [PATCH RFC] hwspinlock: Don't take software spinlock before
 hwspinlock

On Mon, May 18, 2015 at 09:03:02AM -0600, Lina Iyer wrote:
> On Sat, May 16 2015 at 03:03 -0600, Ohad Ben-Cohen wrote:
> >On Mon, May 11, 2015 at 5:46 PM, Lina Iyer <lina.iyer@...aro.org> wrote:
> >>On Sat, May 09 2015 at 03:25 -0600, Ohad Ben-Cohen wrote:
> >>>On Fri, May 1, 2015 at 8:07 PM, Lina Iyer <lina.iyer@...aro.org> wrote:
> >>>Let's discuss whether we really want to expose this functionality
> >>>under the same hwspinlock API or not.
> >>>
> >>>In this new mode, unlike previously, users will now be able to sleep
> >>>after taking the lock, and others trying to take the lock might poll
> >>>the hardware for a long period of time without the ability to sleep
> >>>while waiting for the lock. It almost sounds like you were looking for
> >>>some hwmutex functionality.
> >>
> >>I agree that it opens up the possibility that a user may sleep after
> >>holding a hw spinlock.  But really, why should that prevent us from
> >>using it as a hw mutex, if the need is legitimate?
> >
> >If we want hw mutex functionality, let's discuss how to expose it.
> >Exposing it using the existing hw spinlock API might not be ideal, as
> >users might get confused.
> >
> >Additionally, there are hardware IP locking blocks out there which
> >encourage users to sleep while waiting for a lock, by providing
> >interrupt functionality to wake them up when the lock is freed. So if
> >we choose to add a hw mutex API it might be used by others in the
> >future too (though this reason alone is not why we would choose to add
> >it now of course).
> >
> Okay, the API seems to want to dictate what kind of flags are specified
> for __try_lock(); FLAG_NONE, in my mind, falls into the same
> classification. But sure, we can discuss a different way of achieving
> the same thing.
> 
> Do you have any ideas?

So let's say we had this hwmutex API.  Are you advocating that we separate out
the hardware spinlock into hwmutex and then make calls to acquire/release the
hwmutex in the hwspinlock?  And then we'd use the hwmutex acquire/release when
we don't want the wrapped sw spinlock.

Seems like a lot of trouble when all we want is a behavior change on the use of
the sw spinlock.
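
For the sake of discussion, here is a rough sketch of what that behavior
change could look like inside the trylock path (illustrative only and
untested; HWLOCK_NOLOCK and hwlock_hw_trylock() are made-up names, and the
sw spinlock handling just mirrors what the core already does):

static int example_hwspin_trylock(struct hwspinlock *hwlock, int mode,
                                  unsigned long *flags)
{
        int ret = 1;

        /* Take the wrapped sw spinlock, unless the caller opted out. */
        switch (mode) {
        case HWLOCK_IRQSTATE:
                ret = spin_trylock_irqsave(&hwlock->lock, *flags);
                break;
        case HWLOCK_IRQ:
                ret = spin_trylock_irq(&hwlock->lock);
                break;
        case HWLOCK_NOLOCK:     /* hypothetical: skip the sw spinlock */
                break;
        default:
                ret = spin_trylock(&hwlock->lock);
                break;
        }

        if (!ret)
                return -EBUSY;

        /* Now try the hardware lock itself (vendor-specific op). */
        if (!hwlock_hw_trylock(hwlock)) {       /* made-up helper */
                /* hw lock is contended: drop the sw lock, if we took one. */
                if (mode == HWLOCK_IRQSTATE)
                        spin_unlock_irqrestore(&hwlock->lock, *flags);
                else if (mode == HWLOCK_IRQ)
                        spin_unlock_irq(&hwlock->lock);
                else if (mode != HWLOCK_NOLOCK)
                        spin_unlock(&hwlock->lock);
                return -EBUSY;
        }

        return 0;
}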

> 
> >API discussions aside, what do you want to happen in your scenario
> >while the lock is taken? Are you OK with other users spinning on the
> >lock, waiting for it to be released? IIUC that might mean processors
> >spinning for a non-negligible period of time?
> >
> The lock in question is used differently than traditional locks shared
> across processors. This lock helps synchronize the context transition
> from non-secure to secure on the same processor.
> 
> The use case goes like this. In cpuidle, any core can be the last core
> to power down. The last man also holds the responsibility of shutting
> down shared resources like caches etc. The way the power down of a core
> works is that some high level decisions are made in Linux, and these
> decisions (like whether to flush and invalidate caches) get transferred
> over to the secure layer. The secure layer executes the ARM WFI that
> powers down the cpu, but uses these decisions passed in to determine
> whether the cache needs to be invalidated upon wakeup etc.
> 
> There is a possible race condition between what Linux thinks is the
> last core vs what the secure layer thinks is the last core. Let's say
> two cores, c0 and c1, are going down. c1 is the second to last core to
> go down from Linux's point of view and, as such, will not carry
> information about shared resources when making the SCM call. c1 made
> the SCM call, but is stuck handling some FIQs. In the meanwhile, c0
> goes idle and, since it is the last core in Linux, figures out the
> state of the shared resources. c0 calls into SCM and ends up powering
> down earlier than c1. Per the secure layer, the last core to go down is
> c1, and the votes on the shared resources are considered from that
> core. Things like cache invalidation without a flush may happen as a
> result of this inconsistent view of who the last man is.
> 
> The way we have solved it is that Linux acquires a hw spinlock for each
> core when calling into SCM, and the secure monitor releases the
> spinlock. At any given time, only one core can switch the context from
> Linux to secure for power down operations. This guarantees that the
> last man is synchronized between both Linux and secure. Another core
> may be spinning waiting for the hw mutex, but these transitions all
> happen serialized. This mutex is held in an irq-disabled context in
> cpuidle.
> 
> There may be another processor spinning to wait on the hw mutex, but
> there isn't much else to do at that point, because the only operation
> while holding the lock is to call into SCM, and that call would unlock
> the mutex.

In this use case you have an asymmetric use of the APIs: lock but no unlock.
And this breaks the sw spinlock usage.
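
To make the asymmetry concrete, the flow described above boils down to
something like this (illustrative only; scm_cpu_power_down() is a made-up
name for the SCM call, and irqs are already disabled by cpuidle):

/* cpuidle path on the core that is about to power down */
static void enter_secure_power_down(struct hwspinlock *scm_lock)
{
        /*
         * Spin until this core owns the hw lock; hwspin_trylock()
         * returns 0 on success.
         */
        while (hwspin_trylock(scm_lock))
                cpu_relax();

        /*
         * Call into the secure monitor.  The secure side releases the
         * hw lock on its way down, so Linux never calls hwspin_unlock(),
         * and the sw spinlock wrapped by hwspin_trylock() is left held.
         */
        scm_cpu_power_down();   /* made-up name */
}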

-- 
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

