Message-ID: <1ce601f3-3915-fcf7-df27-746a16ba7b37@gmail.com>
Date: Thu, 15 Apr 2021 17:03:58 +0500
From: Nikita Travkin <nikitos.tr@...il.com>
To: Bjorn Andersson <bjorn.andersson@...aro.org>
Cc: Rob Herring <robh@...nel.org>, agross@...nel.org,
linux-arm-msm@...r.kernel.org, devicetree@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] dt-bindings: soc: qcom: Add bindings for Qualcomm
Memshare service
14.04.2021 08:15, Bjorn Andersson wrote:
> On Sat 10 Apr 03:05 CDT 2021, Nikita Travkin wrote:
>
>> Hi, sorry for a late reply but I couldn't answer earlier.
>>
>> 30.03.2021 19:40, Rob Herring wrote:
>>> On Fri, Mar 19, 2021 at 10:23:20PM +0500, nikitos.tr@...il.com wrote:
>>>> From: Nikita Travkin <nikitos.tr@...il.com>
>>>>
>>>> Add DT bindings for memshare: QMI service that allocates
>>>> memory per remote processor request.
>>>>
>>>> Signed-off-by: Nikita Travkin <nikitos.tr@...il.com>
>>>> ---
>>>> .../bindings/soc/qcom/qcom,memshare.yaml | 109 ++++++++++++++++++
>>>> include/dt-bindings/soc/qcom,memshare.h | 10 ++
>>>> 2 files changed, 119 insertions(+)
>>>> create mode 100644 Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
>>>> create mode 100644 include/dt-bindings/soc/qcom,memshare.h
>>>>
>>>> diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml b/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
>>>> new file mode 100644
>>>> index 000000000000..ebdf128b066c
>>>> --- /dev/null
>>>> +++ b/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
>>>> @@ -0,0 +1,109 @@
>>>> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
>>>> +%YAML 1.2
>>>> +---
>>>> +$id: "http://devicetree.org/schemas/soc/qcom/qcom,memshare.yaml#"
>>>> +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
>>>> +
>>>> +title: Qualcomm QMI Shared Memory Service
>>> How many shared memory interfaces does Qcom have...
>>>
>>>> +
>>>> +description: |
>>>> + This driver provides a QMI service that allows remote processors (like modem)
>>>> + to request additional memory. It is used for applications like GPS in modem.
>>> If the memory region is defined in reserved-memory, how are you
>>> allocating additional memory?
>> Initially the remoteproc is loaded into its own reserved-memory
>> region, but qcom decided that they sometimes need more memory than
>> that. The memshare driver in the msm8916 downstream tree seems to
>> blindly allocate a DMA region for every request that it gets.
>> Additionally, for the clients described in the DT, they do the DMA
>> allocation at boot time and never free the region. They call it a
>> "guaranteed" allocation.
>>
>> On msm8916 only one "guaranteed" client seems to be used, so I
>> decided to implement it with a reserved-memory node. On newer
>> platforms they seem to have more clients, but I think the driver can
>> be easily extended to support dynamic allocation if someone really
>> needs it.
>>
> Is the "guaranteed" memory required to come from the reserved-memory
> part of memory, or could it simply be allocated on demand as well (or
> preallocated, but at a dynamic address)?
This is rather complicated.
For most (msm8916) devices it works with a region from dma_alloc, but
there are at least three devices where it causes problems.
If the region was allocated by dma_alloc (somewhere near 0xfe100000,
if I remember correctly), then:
- Wileyfox Swift (Longcheer L8150): The location service "crashes"
  (it repeats the request every 10-ish seconds while a location
  session is open and gives no location data).
- Samsung A3, A5: The entire modem crashes after it gets a response
  with such an address.
The downstream kernel allocates the region at a slightly different
address, which works fine.
It's probably possible to change the allocation address with a DMA
mask, but I have no idea why the crash happens on those devices, or
whether it's even possible to debug this or find all the "bad"
regions.
Because of that I prefer using a known-good address, at least for the
location client, which keeps it forever.
> If these allocations always came from a reserved-memory region, then
> adding a "qcom,memshare" compatible to the reserved-memory node itself
> seems like a reasonable approach. But if dma_alloc is sufficient, and
> there's cases where there's no "guaranteed" region, perhaps we should
> just describe this as part of the remoteproc node (i.e. essentially
> flipping the node/subnode in your current binding).
>
>
> E.g. can we get away with simply adding an optional qcom,memshare-node
> to the remoteproc binding and when that's present we make the Qualcomm
> remoteproc drivers spawn the memshare handler and listen for requests
> from that node?
I'm having a hard time imagining how this would be implemented...
Assuming that I need to keep reserved-memory for some clients while
other clients need proper dynamic allocation (maybe on a newer
platform), there would be an optional subnode in each remoteproc
node that is detected by the remoteproc driver, which would then
start memshare. It would have all the IDs, and a size or a phandle to
reserved-memory. Then there may be multiple of those in one
remoteproc. Or would one memshare subnode contain multiple client
nodes? And if I understand correctly, there are multiple different
remoteproc drivers, so each of them would have to be modified. They
would need to spawn only one memshare instance and pass the clients
to it. Or maybe the subnode could contain a compatible, and the code
to keep only one instance would live in the memshare driver itself...
To be honest, I'm getting very confused even trying to lay this out
in my mind. I think it just unnecessarily complicates both the
binding and the driver to "hide" its nodes throughout the device
tree. Maybe I just didn't understand the proposal...
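
To illustrate, this is roughly the structure I imagine from the
proposal. All of the node and property names below are made up for the
sake of the sketch; none of them are from the posted binding:

```dts
/* Hypothetical sketch of the "flipped" layout being discussed:
 * a memshare subnode per remoteproc, with per-client subnodes.
 * Node and property names here are invented for illustration. */
remoteproc@4080000 {
	compatible = "qcom,msm8916-mss-pil";
	/* ... usual remoteproc properties ... */

	memshare {
		qcom,qrtr-node = <0>;

		gps {
			/* client-id this client sends */
			qcom,client-id = <0>;
			/* optional: known-good region for a
			 * "guaranteed" client; clients without it
			 * would get dynamic allocation */
			memory-region = <&gps_mem>;
		};
	};
};
```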
>> I tried to explain that in the cover letter, but I think I made a
>> mistake, as I don't see it in Patchwork.
>>
>>>> +
>>>> +maintainers:
>>>> + - Nikita Travkin <nikitos.tr@...il.com>
>>>> +
>>>> +properties:
>>>> + compatible:
>>>> + const: qcom,memshare
>>>> +
>>>> + qcom,legacy-client:
>>>> + $ref: /schemas/types.yaml#/definitions/phandle
>>>> + description: Phandle to a memshare client node used for legacy requests.
>>>> +
>>>> + "#address-cells":
>>>> + const: 1
>>>> +
>>>> + "#size-cells":
>>>> + const: 0
>>>> +
>>>> +patternProperties:
>>>> + "^.*@[0-9]+$":
>>>> + type: object
>>>> +
>>>> + properties:
>>>> + reg:
>>>> + description: Proc-ID for clients in this node.
>>> What's Proc-ID?
>> The requests from the remote nodes contain a client-id and a proc-id
>> that are supposed to differentiate the clients. It's possible to
>> find the values in the downstream DT or by observing what messages
>> are received by the memshare service (I left dev_dbg logging in
>> the driver for that reason).
>>
>> I think I should reword it to make this more apparent, maybe
>> "Proc-ID that clients in this node send."?
>>
> If this is a constant for each remote and we make this a child thing of
> remoteproc perhaps encode the number in the remoteproc nodes?
>
> (We still need something in DT to state that we want a memshare for
> a given platform/remoteproc)
>
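If I understand that suggestion, encoding the proc-id in the
remoteproc node would look something like the sketch below.
"qcom,memshare-proc-id" is a made-up property name, just to show the
shape, not something from the posted binding:

```dts
/* Hypothetical: the property name is invented for illustration. */
#include <dt-bindings/soc/qcom,memshare.h>

remoteproc@4080000 {
	compatible = "qcom,msm8916-mss-pil";
	/* ... usual remoteproc properties ... */

	/* constant proc-id this remote uses in its requests */
	qcom,memshare-proc-id = <MEMSHARE_PROC_MPSS_V01>;
};
```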
>>>> +
>>>> + qcom,qrtr-node:
>>>> + $ref: /schemas/types.yaml#/definitions/uint32
>>>> + description: Node from which the requests are expected.
>>>> +
>>>> + "#address-cells":
>>>> + const: 1
>>>> +
>>>> + "#size-cells":
>>>> + const: 0
>>>> +
>>>> + patternProperties:
>>>> + "^.*@[0-9]+$":
>>>> + type: object
>>>> +
>>>> + properties:
>>>> + reg:
>>>> + description: ID of this client.
>>> How does one determine the ID?
>> As with proc-id, maybe reword to "ID that this client sends."?
>>
>> I will change those in v2, I still expect comments on the driver
>> itself, so I'll wait for that before submitting it with just a
>> couple lines changed.
>>
>>>> +
>>>> + memory-region:
>>>> + $ref: /schemas/types.yaml#/definitions/phandle
>>>> + description: |
>>>> + Reserved memory region that should be used for allocation.
>>>> +
>>>> + required:
>>>> + - reg
>>>> +
>>>> + required:
>>>> + - reg
>>>> + - qcom,qrtr-node
>>>> +
>>>> +required:
>>>> + - compatible
>>>> +
>>>> +additionalProperties: false
>>>> +
>>>> +examples:
>>>> + - |
>>>> + #include <dt-bindings/soc/qcom,memshare.h>
>>>> +
>>>> + reserved-memory {
>>>> +
>>>> + #address-cells = <2>;
>>>> + #size-cells = <2>;
>>>> +
>>>> + gps_mem: gps@...00000 {
>>>> + reg = <0x0 0x93c00000 0x0 0x200000>;
>>>> + no-map;
>>> We support 'compatible' in reserved-memory nodes, can you simplify the
>>> binding and put everything in here?
>> If I understand this correctly, each reserved-memory node will
>> then load a new instance of memshare. Since the driver registers a
>> QMI service that handles multiple clients, there should be only one
>> instance.
> This you could work around in the driver implementation, by
> refcounting a single instance shared among all the nodes.
>
>> Additionally, as I mentioned earlier, some clients may not
>> need reserved-memory at all.
>>
> This on the other hand, makes me feel like we shouldn't go that route.
>
> Regards,
> Bjorn
>
>>>> + };
>>>> + };
>>>> +
>>>> + memshare {
>>>> + compatible = "qcom,memshare";
>>>> + qcom,legacy-client = <&memshare_gps>;
>>>> +
>>>> + #address-cells = <1>;
>>>> + #size-cells = <0>;
>>>> +
>>>> + mpss@0 {
>>>> + reg = <MEMSHARE_PROC_MPSS_V01>;
>>>> + qcom,qrtr-node = <0>;
>>>> +
>>>> + #address-cells = <1>;
>>>> + #size-cells = <0>;
>>>> +
>>>> + memshare_gps: gps@0 {
>>>> + reg = <0>;
>>>> + memory-region = <&gps_mem>;
>>>> + };
>>>> + };
>>>> + };
>>>> +
>>>> +...
>>>> diff --git a/include/dt-bindings/soc/qcom,memshare.h b/include/dt-bindings/soc/qcom,memshare.h
>>>> new file mode 100644
>>>> index 000000000000..4cef1ef75d09
>>>> --- /dev/null
>>>> +++ b/include/dt-bindings/soc/qcom,memshare.h
>>>> @@ -0,0 +1,10 @@
>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>> +
>>>> +#ifndef __DT_QCOM_MEMSHARE_H__
>>>> +#define __DT_QCOM_MEMSHARE_H__
>>>> +
>>>> +#define MEMSHARE_PROC_MPSS_V01 0
>>>> +#define MEMSHARE_PROC_ADSP_V01 1
>>>> +#define MEMSHARE_PROC_WCNSS_V01 2
>>>> +
>>>> +#endif /* __DT_QCOM_MEMSHARE_H__ */
>>>> --
>>>> 2.27.0
>>>>