Message-ID: <5effa700-480b-4030-8335-304ebc4444b7@phytec.de>
Date: Fri, 3 Nov 2023 10:00:11 +0100
From: Wadim Egorov <w.egorov@...tec.de>
To: Nishanth Menon <nm@...com>, Garrett Giordano <ggiordano@...tec.com>
CC: <vigneshr@...com>, <kristo@...nel.org>, <robh+dt@...nel.org>,
<krzysztof.kozlowski+dt@...aro.org>, <conor+dt@...nel.org>,
<r-gunasekaran@...com>, <linux-arm-kernel@...ts.infradead.org>,
<devicetree@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<upstream@...ts.phytec.de>
Subject: Re: [PATCH] arm64: dts: ti: phycore-am64: Add R5F DMA Region and
Mailboxes
Hi Nishanth,
On 03.11.23 01:17, Nishanth Menon wrote:
> On 13:12-20231102, Garrett Giordano wrote:
>> Communication between the R5F subsystem and Linux takes place using DMA
>> memory regions and mailboxes. Here we add DT nodes for the memory
>> regions and mailboxes to facilitate communication between the R5
>> clusters and Linux, as remoteproc will fail to start if no memory
>> regions or mailboxes are provided.
>>
>> Fixes: c48ac0efe6d7 ("arm64: dts: ti: Add support for phyBOARD-Electra-AM642")
> Is this a fix, though? It sounds more like rproc support is being added.
I would say it is also a fix: the R5 cores are enabled by default in
the SoC-level devicetree and require mboxes & memory regions to be
configured. The binding documentation lists both as mandatory.
Otherwise, we will encounter errors such as:

  platform 78000000.r5f: device does not have reserved memory regions, ret = -22
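
For context, the three cells in the ti,mbox-rx/tx properties select
the FIFO, the interrupt line and the user ID of the mailbox (per my
reading of the TI mailbox binding), and the first memory-region entry
provides the DMA pool for the vrings while the second backs the
firmware carveouts.

If a board does not use one of the R5F cores, a minimal sketch of the
alternative would be to disable that core instead of configuring it
(hypothetical example, not part of this patch):

  &main_r5fss0_core1 {
          /* Core unused on this board; remoteproc then no longer
           * requires mailboxes or reserved memory regions for it. */
          status = "disabled";
  };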
Regards,
Wadim
>
>> Signed-off-by: Garrett Giordano <ggiordano@...tec.com>
>> ---
>> .../boot/dts/ti/k3-am64-phycore-som.dtsi | 102 +++++++++++++++++-
>> 1 file changed, 101 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/boot/dts/ti/k3-am64-phycore-som.dtsi b/arch/arm64/boot/dts/ti/k3-am64-phycore-som.dtsi
>> index 1c2c8f0daca9..37a33006c1fc 100644
>> --- a/arch/arm64/boot/dts/ti/k3-am64-phycore-som.dtsi
>> +++ b/arch/arm64/boot/dts/ti/k3-am64-phycore-som.dtsi
>> @@ -29,7 +29,7 @@ memory@80000000 {
>> reg = <0x00000000 0x80000000 0x00000000 0x80000000>;
>> };
>>
>> - reserved-memory {
>> + reserved_memory: reserved-memory {
>> #address-cells = <2>;
>> #size-cells = <2>;
>> ranges;
>> @@ -39,6 +39,54 @@ secure_ddr: optee@...00000 {
>> alignment = <0x1000>;
>> no-map;
>> };
>> +
>> + main_r5fss0_core0_dma_memory_region: r5f-dma-memory@a0000000 {
>> + compatible = "shared-dma-pool";
>> + reg = <0x00 0xa0000000 0x00 0x100000>;
>> + no-map;
>> + };
>> +
>> + main_r5fss0_core0_memory_region: r5f-memory@a0100000 {
>> + compatible = "shared-dma-pool";
>> + reg = <0x00 0xa0100000 0x00 0xf00000>;
>> + no-map;
>> + };
>> +
>> + main_r5fss0_core1_dma_memory_region: r5f-dma-memory@a1000000 {
>> + compatible = "shared-dma-pool";
>> + reg = <0x00 0xa1000000 0x00 0x100000>;
>> + no-map;
>> + };
>> +
>> + main_r5fss0_core1_memory_region: r5f-memory@a1100000 {
>> + compatible = "shared-dma-pool";
>> + reg = <0x00 0xa1100000 0x00 0xf00000>;
>> + no-map;
>> + };
>> +
>> + main_r5fss1_core0_dma_memory_region: r5f-dma-memory@a2000000 {
>> + compatible = "shared-dma-pool";
>> + reg = <0x00 0xa2000000 0x00 0x100000>;
>> + no-map;
>> + };
>> +
>> + main_r5fss1_core0_memory_region: r5f-memory@a2100000 {
>> + compatible = "shared-dma-pool";
>> + reg = <0x00 0xa2100000 0x00 0xf00000>;
>> + no-map;
>> + };
>> +
>> + main_r5fss1_core1_dma_memory_region: r5f-dma-memory@a3000000 {
>> + compatible = "shared-dma-pool";
>> + reg = <0x00 0xa3000000 0x00 0x100000>;
>> + no-map;
>> + };
>> +
>> + main_r5fss1_core1_memory_region: r5f-memory@a3100000 {
>> + compatible = "shared-dma-pool";
>> + reg = <0x00 0xa3100000 0x00 0xf00000>;
>> + no-map;
>> + };
>> };
>>
>> leds {
>> @@ -160,6 +208,34 @@ &cpsw_port2 {
>> status = "disabled";
>> };
>>
>> +&mailbox0_cluster2 {
>> + status = "okay";
>> +
>> + mbox_main_r5fss0_core0: mbox-main-r5fss0-core0 {
>> + ti,mbox-rx = <0 0 2>;
>> + ti,mbox-tx = <1 0 2>;
>> + };
>> +
>> + mbox_main_r5fss0_core1: mbox-main-r5fss0-core1 {
>> + ti,mbox-rx = <2 0 2>;
>> + ti,mbox-tx = <3 0 2>;
>> + };
>> +};
>> +
>> +&mailbox0_cluster4 {
>> + status = "okay";
>> +
>> + mbox_main_r5fss1_core0: mbox-main-r5fss1-core0 {
>> + ti,mbox-rx = <0 0 2>;
>> + ti,mbox-tx = <1 0 2>;
>> + };
>> +
>> + mbox_main_r5fss1_core1: mbox-main-r5fss1-core1 {
>> + ti,mbox-rx = <2 0 2>;
>> + ti,mbox-tx = <3 0 2>;
>> + };
>> +};
>> +
>> &main_i2c0 {
>> status = "okay";
>> pinctrl-names = "default";
>> @@ -180,6 +256,30 @@ i2c_som_rtc: rtc@52 {
>> };
>> };
>>
>> +&main_r5fss0_core0 {
>> + mboxes = <&mailbox0_cluster2 &mbox_main_r5fss0_core0>;
>> + memory-region = <&main_r5fss0_core0_dma_memory_region>,
>> + <&main_r5fss0_core0_memory_region>;
>> +};
>> +
>> +&main_r5fss0_core1 {
>> + mboxes = <&mailbox0_cluster2 &mbox_main_r5fss0_core1>;
>> + memory-region = <&main_r5fss0_core1_dma_memory_region>,
>> + <&main_r5fss0_core1_memory_region>;
>> +};
>> +
>> +&main_r5fss1_core0 {
>> + mboxes = <&mailbox0_cluster4 &mbox_main_r5fss1_core0>;
>> + memory-region = <&main_r5fss1_core0_dma_memory_region>,
>> + <&main_r5fss1_core0_memory_region>;
>> +};
>> +
>> +&main_r5fss1_core1 {
>> + mboxes = <&mailbox0_cluster4 &mbox_main_r5fss1_core1>;
>> + memory-region = <&main_r5fss1_core1_dma_memory_region>,
>> + <&main_r5fss1_core1_memory_region>;
>> +};
>> +
>> &ospi0 {
>> status = "okay";
>> pinctrl-names = "default";
>> --
>> 2.25.1
>>