Message-ID: <20220714010405.GB22183@quicinc.com>
Date: Wed, 13 Jul 2022 18:04:05 -0700
From: Guru Das Srinagesh <quic_gurus@...cinc.com>
To: Rajendra Nayak <quic_rjendra@...cinc.com>
CC: Andy Gross <agross@...nel.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
Philipp Zabel <p.zabel@...gutronix.de>,
<linux-arm-msm@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
"David Heidelberg" <david@...t.cz>,
Robert Marko <robimarko@...il.com>,
Elliot Berman <quic_eberman@...cinc.com>
Subject: Re: [PATCH 4/5] firmware: qcom: scm: Add wait-queue helper functions
On Jul 01 2022 16:29, Rajendra Nayak wrote:
>
> On 6/28/2022 1:14 AM, Guru Das Srinagesh wrote:
> >When the firmware (FW) supports multiple requests per VM, and the VM also
> >opts in via the `allow-multi-call` device tree property, multiple SCM
> >calls can reach the firmware at the same time.
> >
> >Since the current firmware has limited resources, it guards them with a
> >resource lock, puts requests on an internal wait-queue, and signals to
> >HLOS that it is doing so via two new return codes, in addition to
> >success or error: SCM_WAITQ_SLEEP and SCM_WAITQ_WAKE.
> >
> > 1) SCM_WAITQ_SLEEP:
> >
> > When an SCM call receives this return value instead of success
> > or error, FW has placed the call on a wait-queue and is
> > signalling HLOS to put it to non-interruptible sleep. (For
> > simplicity, the mechanism that wakes it back up is described in
> > detail in the next patch.)
> >
> > Along with this return value, FW also passes to HLOS `wq_ctx` -
> > a unique identifier (UID) for the internal wait-queue on which
> > it has placed the call. This helps HLOS with its own
> > bookkeeping when waking this sleeping call later.
> >
> > Additionally, FW also passes to HLOS `smc_call_ctx` - a UID
> > identifying the SCM call thus being put to sleep. This is also
> > for HLOS' bookkeeping to wake this call up later.
> >
> > These two additional values are passed via the a1 and a2
> > registers.
> >
> > N.B.: "ctx" in the above UID names is short for "context".
> >
> > 2) SCM_WAITQ_WAKE:
> >
> > When an SCM call receives this return value instead of success
> > or error, FW wishes to signal HLOS to wake up a (different)
> > previously sleeping call.
>
> What happens to this SCM call itself (the one which gets an SCM_WAITQ_WAKE returned
> instead of a success or failure)?
> Is it processed? How does the firmware in that case return a success or error?
Hopefully the clarifying note posted in response to your query on the other
patch makes this clear. To answer your question:
Let's refer to the SCM call that received an SCM_WAITQ_WAKE as the parent call.
The parent call's success or failure depends on the result of the wq_wake_ack()
call defined below.
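
To illustrate the flow, here is a rough sketch of how the HLOS side would
handle the two return codes (the helper names are placeholders I made up
for illustration, not the final API; the register layout follows the
commit text above):

	struct arm_smccc_res res;

	scm_do_smc_call(&desc, &res);	/* placeholder for the SMC plumbing */

	switch (res.a0) {
	case SCM_WAITQ_SLEEP:
		/* a1 = wq_ctx, a2 = smc_call_ctx */
		wq = scm_get_waitq(res.a1);	/* look up wait-queue by UID */
		scm_add_sleeper(wq, res.a2);	/* record smc_call_ctx */
		wait_for_completion(&wq->wake);	/* non-interruptible sleep */
		break;
	case SCM_WAITQ_WAKE:
		/* a1 = wq_ctx, a2 = smc_call_ctx of the *sleeping* call */
		scm_wake_sleeper(res.a1, res.a2);	/* complete() that call */
		ret = scm_waitq_wake_ack(res.a2);	/* parent's own result */
		break;
	default:
		/* plain success or error, handled as before */
		break;
	}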
>
...
> > 3) wq_wake_ack(smc_call_ctx):
> >
> > Arguments: smc_call_ctx
> >
> > HLOS needs to issue this in response to receiving an
> > SCM_WAITQ_WAKE, passing to FW the same smc_call_ctx that FW
> > passed to HLOS via the SCM_WAITQ_WAKE return.
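
(For reference, the ack itself is just another SCM call carrying the same
smc_call_ctx back to FW. Roughly, with made-up waitq service/command IDs:

	static int scm_waitq_wake_ack(u32 smc_call_ctx)
	{
		struct qcom_scm_desc desc = {
			.svc = QCOM_SCM_SVC_WAITQ,	/* placeholder service ID */
			.cmd = QCOM_SCM_WAITQ_ACK,	/* placeholder command ID */
			.arginfo = QCOM_SCM_ARGS(1),
			.args[0] = smc_call_ctx,
			.owner = ARM_SMCCC_OWNER_SIP,
		};

		/* atomic, since this may run from the wake path */
		return qcom_scm_call_atomic(__scm->dev, &desc, NULL);
	}

qcom_scm_desc and qcom_scm_call_atomic are the existing driver plumbing;
only the waitq service/command IDs above are invented for this sketch.)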
...
> >+
> > static int qcom_scm_probe(struct platform_device *pdev)
> > {
> > 	struct qcom_scm *scm;
> > 	unsigned long clks;
> >-	int ret;
> >+	int irq, ret;
> > 	scm = devm_kzalloc(&pdev->dev, sizeof(*scm), GFP_KERNEL);
> > 	if (!scm)
> >@@ -1333,12 +1432,28 @@ static int qcom_scm_probe(struct platform_device *pdev)
> > 	if (ret)
> > 		return ret;
> >+	platform_set_drvdata(pdev, scm);
> >+
> > 	__scm = scm;
> > 	__scm->dev = &pdev->dev;
> >+	spin_lock_init(&__scm->waitq.idr_lock);
> >+	idr_init(&__scm->waitq.idr);
> > 	qcom_scm_allow_multicall = of_property_read_bool(__scm->dev->of_node,
> > 						"allow-multi-call");
> >+	INIT_WORK(&__scm->waitq.scm_irq_work, scm_irq_work);
> >+
> >+	irq = platform_get_irq(pdev, 0);
> >+	if (irq) {
> >+		ret = devm_request_threaded_irq(__scm->dev, irq, NULL,
> >+				qcom_scm_irq_handler, IRQF_ONESHOT, "qcom-scm", __scm);
> >+		if (ret < 0) {
> >+			pr_err("Failed to request qcom-scm irq: %d\n", ret);
>
> idr_destroy()?
Yes, will add in next patchset.
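Roughly, something along these lines (untested, just to confirm the shape
of the fix):

	irq = platform_get_irq(pdev, 0);
	if (irq >= 0) {	/* negative means no IRQ; treat it as optional */
		ret = devm_request_threaded_irq(__scm->dev, irq, NULL,
				qcom_scm_irq_handler, IRQF_ONESHOT, "qcom-scm", __scm);
		if (ret < 0) {
			pr_err("Failed to request qcom-scm irq: %d\n", ret);
			idr_destroy(&__scm->waitq.idr);	/* undo idr_init() above */
			return ret;
		}
	}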