Message-ID: <CAHUa44HBQatvnJ5Nkm=uWmnXX7xx_svjDt5d+fV6sosKAxppSg@mail.gmail.com>
Date:   Mon, 29 May 2023 14:23:16 +0200
From:   Jens Wiklander <jens.wiklander@...aro.org>
To:     Sumit Garg <sumit.garg@...aro.org>
Cc:     Etienne CARRIERE <etienne.carriere@...com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        "op-tee@...ts.trustedfirmware.org" <op-tee@...ts.trustedfirmware.org>,
        "sudeep.holla@....com" <sudeep.holla@....com>,
        "cristian.marussi@....com" <cristian.marussi@....com>,
        "vincent.guittot@...aro.org" <vincent.guittot@...aro.org>,
        Etienne CARRIERE - foss <etienne.carriere@...s.st.com>
Subject: Re: [PATCH v9 3/4] tee: optee: support tracking system threads

On Mon, May 29, 2023 at 12:17 PM Sumit Garg <sumit.garg@...aro.org> wrote:
>
> On Mon, 29 May 2023 at 15:10, Jens Wiklander <jens.wiklander@...aro.org> wrote:
> >
> > On Mon, May 29, 2023 at 9:11 AM Sumit Garg <sumit.garg@...aro.org> wrote:
> > >
> > > On Fri, 26 May 2023 at 01:05, Etienne CARRIERE <etienne.carriere@...com> wrote:
> > > >
> > > >
> > > > > From: Jens Wiklander <jens.wiklander@...aro.org>
> > > > > Sent: Thursday, 25 May 2023 17:20
> > > > >
> > > > > Hi,
> > > > >
> > > > > On Thu, May 25, 2023 at 1:48 PM Etienne CARRIERE
> > > > > <etienne.carriere@...com> wrote:
> > > > > >
> > > > > > >
> > > > > > > From: Sumit Garg <sumit.garg@...aro.org>
> > > > > > > Sent: Wednesday, 24 May 2023 09:31
> > > > > > > > On Tue, 23 May 2023 at 12:41, Etienne Carriere
> > > > > > > <etienne.carriere@...aro.org> wrote:
> > > > > > > > Hello Sumit,
> > > > > > > >
> > > > > > > >
> > > > > > > > On Wed, 17 May 2023 at 16:33, Sumit Garg <sumit.garg@...aro.org> wrote:
> > > > > > > > >
> > > > > > > > > From: Etienne Carriere <etienne.carriere@...aro.org>
> > > > > > > > >
> > > > > > > > > Adds support in the OP-TEE driver to keep track of reserved system
> > > > > > > > > threads. The optee_cq_*() functions are updated to handle this if
> > > > > > > > > enabled. The SMC ABI part of the driver enables this tracking, but the
> > > > > > > > > FF-A ABI part does not.
> > > > > > > > >
> > > > > > > > > The logic allows at least one OP-TEE thread to be reserved for TEE
> > > > > > > > > system sessions. For the sake of simplicity, initialization of the call
> > > > > > > > > queue management is factorized into a new helper function, optee_cq_init().
> > > > > > > > >
> > > > > > > > > Co-developed-by: Jens Wiklander <jens.wiklander@...aro.org>
> > > > > > > > > Signed-off-by: Jens Wiklander <jens.wiklander@...aro.org>
> > > > > > > > > Signed-off-by: Etienne Carriere <etienne.carriere@...aro.org>
> > > > > > > > > Co-developed-by: Sumit Garg <sumit.garg@...aro.org>
> > > > > > > > > Signed-off-by: Sumit Garg <sumit.garg@...aro.org>
> > > > > > > > > ---
> > > > > > > > >
> > > > > > > > > Disclaimer: Compile tested only
> > > > > > > > >
> > > > > > > > > Hi Etienne,
> > > > > > > > >
> > > > > > > > > Overall, the idea we agreed upon was okay, but the implementation looked
> > > > > > > > > complex to me. I thought it would be harder to explain that via review,
> > > > > > > > > so I decided to give simplification a try myself. I would like you to
> > > > > > > > > test whether this still addresses the SCMI deadlock problem. Also, feel
> > > > > > > > > free to include this in your patchset if testing goes fine.
> > > > > > > >
> > > > > > > > With these changes, there is no longer a specific waiting list for TEE
> > > > > > > > system threads; hence, when a waiting queue entry can complete, we'll
> > > > > > > > pick any TEE thread, not a TEE system thread first.
> > > > > > >
> > > > > > > I had thought about this but I can't see any value in having a
> > > > > > > separate wait queue for system threads. Here we only need to provide
> > > > > > > an extra privileged thread for system sessions (kernel clients) such
> > > > > > > that user-space doesn't contend for that thread. This prevents starvation
> > > > > > > of kernel clients or a deadlock like the one in the SCMI case.
> > > > > > >
> > > > > > > > Also, as stated in an answer below, these changes unconditionally
> > > > > > > > reserve a TEE thread for TEE system calls even if no TEE client
> > > > > > > > has requested such a reservation.
> > > > > > >
> > > > > > > I don't think we should make thread reservations based on the presence
> > > > > > > of TEE clients. You never know how many user-space or kernel TEE
> > > > > > > clients you are dealing with. And reserving a single privileged thread
> > > > > > > unconditionally for system sessions shouldn't be much of a burden for
> > > > > > > memory-constrained devices either.
> > > > > > >
> > > > > > > Also, this way we would enable every kernel TEE client to leverage
> > > > > > > system sessions as it's very likely they wouldn't like to compete with
> > > > > > > user-space for thread availability. Two other kernel TEE clients that
> > > > > > > come to mind are HWRNG and Trusted Keys, which can benefit
> > > > > > > from this feature.
> > > > > >
> > > > > > Trusted Keys is an interesting use case. When OP-TEE accesses Trusted Keys,
> > > > > > it may need to access the eMMC/RPMB using the Linux OS tee-supplicant
> > > > > > which may require an eMMC clock or voltage regulator to be enabled.
> > > > > > If that clock or regulator is under SCMI control, then we need two
> > > > > > reserved TEE threads: one for invoking the Trusted Keys TA and
> > > > > > another for the SCMI request to reach the TEE while the Trusted Keys
> > > > > > TA invocation still consumes a thread.
> > >
> > > The Trusted Keys TA doesn't need access to secure storage (eMMC/RPMB). It
> > > only requires an RNG and access to a key derived from the HUK.
> >
> > Because it's always compiled as an early TA so no rollback protection is used?
> >
>
> Yeah it has to be compiled as an early TA to support Trusted Keys
> use-cases. BTW, we don't have enumeration support for REE-FS TAs at
> the moment.
>
> > >
> > > > > >
> > > > > Why would the Trusted Keys session need a system thread? To me, it
> > > > > seems that the session could use the normal client priority.
> > >
> > > The system thread priority, as per my patch, is nothing but an extra
> > > thread in the thread pool that is available to kernel clients but not
> > > to user-space clients.
> > >
> > > The Trusted Keys use-case was really motivated by: "every kernel TEE
> > > client would like to avoid competing with user-space for thread
> > > availability". However, HWRNG has a real case where user-space
> > > shouldn't be able to starve the kernel RNG thread of OP-TEE thread
> > > availability.
> > >
> > > A system thread can be useful for trusted keys in case the disk
> > > encryption key is backed by a trusted key.
> >
> > With well-behaving TAs every TEE client will get its thread in due time.
>
> We should try to keep a distinction between user-space clients and kernel
> clients rather than keeping them in the same bucket. The kernel
> clients are more privileged than user-space ones. This is similar
> hardening to what we have done with respect to the session login method
> (REE kernel login).
>
> >
> > The system thread feature was originally intended as a way of avoiding
> > a deadlock.
>
> That's true, but doing so can also benefit other (mutually independent)
> kernel clients.
>
> > So far we have otherwise assigned threads on a first-come
> > first-served basis. If we now also need a way of giving priority to
> > kernel clients for less critical reasons, we may need to take a step
> > back and redesign, because reserving a thread for each kernel client
> > doesn't scale.
>
> No, please take another look at this patch. We have *only* reserved a
> single thread for all the allowed (sess->use_sys_thread = true) kernel
> clients to compete for. And user-space has access to all the other
> threads on a first-come first-served basis except the one thread
> reserved for kernel clients.

Thanks, got it now. User-space clients may or may not be able to
starve kernel clients enough to matter. However, if all kernel clients
together are only guaranteed one thread, then we're still stuck with the
deadlock problem for SCMI. We can't guarantee that none of the kernel
clients will indirectly access eMMC/RPMB.
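
FWIW, my reading of that reservation policy boils down to roughly the
sketch below (illustrative only: the struct and field names are made up
here and are not necessarily those of the patch):

    #include <stdbool.h>

    /* Illustrative call-queue bookkeeping, not the patch's actual struct. */
    struct cq_sketch {
            unsigned int total_threads; /* all OP-TEE threads */
            unsigned int reserved_sys;  /* held back for system sessions */
            unsigned int in_use;        /* threads currently occupied */
    };

    /* May a new invocation grab a thread right now? */
    static bool cq_may_enter(const struct cq_sketch *cq, bool sys_session)
    {
            /* Normal sessions must leave the reserved slot(s) untouched. */
            unsigned int limit = sys_session
                    ? cq->total_threads
                    : cq->total_threads - cq->reserved_sys;

            return cq->in_use < limit;
    }

With a single reserved slot, one kernel client holding it while
indirectly needing a second TEE entry (the eMMC/RPMB-via-SCMI case) is
still enough to deadlock.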

Cheers,
Jens

>
> -Sumit
>
> >
> > Thanks,
> > Jens
