Message-ID: <1ece2f4b-6f2d-452f-b2af-18d0895f9443@microchip.com>
Date: Wed, 18 Sep 2024 15:51:50 +0000
From: <Valentina.FernandezAlanis@...rochip.com>
To: <krzk@...nel.org>, <paul.walmsley@...ive.com>, <palmer@...belt.com>,
<aou@...s.berkeley.edu>, <peterlin@...estech.com>, <dminus@...estech.com>,
<Conor.Dooley@...rochip.com>, <conor+dt@...nel.org>, <ycliang@...estech.com>,
<jassisinghbrar@...il.com>, <robh@...nel.org>, <krzk+dt@...nel.org>,
<andersson@...nel.org>, <mathieu.poirier@...aro.org>
CC: <linux-riscv@...ts.infradead.org>, <linux-kernel@...r.kernel.org>,
<devicetree@...r.kernel.org>, <linux-remoteproc@...r.kernel.org>
Subject: Re: [PATCH v1 5/5] remoteproc: add support for Microchip IPC
remoteproc platform driver
On 16/09/2024 21:18, Krzysztof Kozlowski wrote:
>
> On 12/09/2024 19:00, Valentina Fernandez wrote:
>> The Microchip family of RISC-V SoCs typically has one or more clusters.
>> These clusters can be configured to run in Asymmetric Multi-Processing
>> (AMP) mode.
>>
>> Add a remoteproc platform driver to be able to load and boot firmware
>> to the remote processor(s).
>
> ...
>
>> +
>> +static int mchp_ipc_rproc_prepare(struct rproc *rproc)
>> +{
>> +        struct mchp_ipc_rproc *priv = rproc->priv;
>> +        struct device_node *np = priv->dev->of_node;
>> +        struct rproc_mem_entry *mem;
>> +        struct reserved_mem *rmem;
>> +        struct of_phandle_iterator it;
>> +        u64 device_address;
>> +
>> +        reinit_completion(&priv->start_done);
>> +
>> +        of_phandle_iterator_init(&it, np, "memory-region", NULL, 0);
>> +        while (of_phandle_iterator_next(&it) == 0) {
>> +                /*
>> +                 * Ignore the first memory region which will be used vdev
>> +                 * buffer. No need to do extra handlings, rproc_add_virtio_dev
>> +                 * will handle it.
>> +                 */
>> +                if (!strcmp(it.node->name, "vdev0buffer"))
>
> What? If you ignore the first, then why are you checking names? This
> does not make sense. Especially that your binding did not say anything
> about these phandles being somehow special.
The idea in the code above is to skip the vdev buffer allocation and
carveout registration. Later, when the remoteproc virtio driver
registers the virtio device (rproc_add_virtio_dev), it will attempt to
find the carveout. Since the carveout wasn’t registered, it will use the
first memory region from the parent by calling
of_reserved_mem_device_init_by_idx.
This behavior is based on some existing platform drivers. However, upon
further inspection, it seems that some newer drivers use
rproc_of_resm_mem_entry_init() to allocate vdev buffers.
I will restructure this section and rephrase/drop the comment.
With regard to the bindings, I'll explain all the memory regions in more
detail for v2.
Just for everyone's information, we have the following use cases:

Early boot: remote processors are booted by another entity before Linux,
so we only need to attach. For this mode, we require the resource table
as a memory region in the device tree.

Late boot: Linux is responsible for loading the firmware and starting it
on the remote processors. For this, we need the region used for the
firmware image.

In both cases, rpmsg communication is optional. This means that the vdev
buffers and vrings memory regions are also optional.

There could also be a mixed case where we start in early boot mode by
attaching to an existing remote processor, and then stop, start, and
load another firmware once Linux has booted. In this case, we would
require the resource table and firmware image regions, and optionally,
vdev buffers and vrings.
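To make this concrete, a memory-region layout covering the mixed case
could look roughly like the sketch below (node labels, names, and the
ordering are illustrative only, not the final v2 binding):

```dts
/* Sketch only: labels and ordering are placeholders, not the v2 binding. */
remoteproc {
	memory-region = <&rsc_table>,	/* resource table (early boot / attach) */
			<&fw_image>,	/* firmware image (late boot) */
			<&vdev0buffer>,	/* optional: rpmsg vdev buffers */
			<&vdev0vring0>,	/* optional: vrings */
			<&vdev0vring1>;
};
```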
>
>> +                        continue;
>> +
>> +                if (!strcmp(it.node->name, "rsc-table"))
>
> Nope.
Since the resource table is only needed for early boot mode and does not
need to be a carveout region, we are skipping it here.
I will work on giving the resource table a fixed index in the
memory-region property so that it doesn't have to be matched by name.
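As a sketch of the fixed-index idea (the index chosen here is only an
example, not the final binding):

```dts
/* Sketch: the resource table always sits at a fixed index (0 here),
 * so the driver can look it up by position, e.g.
 * of_parse_phandle(np, "memory-region", 0), regardless of node name.
 */
memory-region = <&rsc_table>, <&fw_image>, <&vdev0buffer>;
```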
Thanks,
Valentina
>
>> +                        continue;
>> +
>> +                rmem = of_reserved_mem_lookup(it.node);
>> +                if (!rmem) {
>> +                        of_node_put(it.node);
>> +                        dev_err(priv->dev, "unable to acquire memory-region\n");
>> +                        return -EINVAL;
>> +                }
>> +
>> +                device_address = rmem->base;
>> +
>> +                mem = rproc_mem_entry_init(priv->dev, NULL, (dma_addr_t)rmem->base,
>> +                                           rmem->size, device_address, mchp_ipc_rproc_mem_alloc,
>> +                                           mchp_ipc_rproc_mem_release, it.node->name);
>> +
>> +                if (!mem) {
>> +                        of_node_put(it.node);
>> +                        return -ENOMEM;
>> +                }
>> +
>> +                rproc_add_carveout(rproc, mem);
>> +        }
>> +
>> +        return 0;
>> +}
>> +
>> +static int mchp_ipc_rproc_parse_fw(struct rproc *rproc, const struct firmware *fw)
>> +{
>> +        int ret;
>> +
>> +        ret = rproc_elf_load_rsc_table(rproc, fw);
>> +        if (ret)
>> +                dev_info(&rproc->dev, "No resource table in elf\n");
>> +
>> +        return 0;
>> +}
>> +
>> +static void mchp_ipc_rproc_kick(struct rproc *rproc, int vqid)
>> +{
>> +        struct mchp_ipc_rproc *priv = (struct mchp_ipc_rproc *)rproc->priv;
>> +        struct mchp_ipc_msg msg;
>> +        int ret;
>> +
>> +        msg.buf = (void *)&vqid;
>> +        msg.size = sizeof(vqid);
>> +
>> +        ret = mbox_send_message(priv->mbox_channel, (void *)&msg);
>> +        if (ret < 0)
>> +                dev_err(priv->dev,
>> +                        "failed to send mbox message, status = %d\n", ret);
>> +}
>> +
>> +static int mchp_ipc_rproc_attach(struct rproc *rproc)
>> +{
>> +        return 0;
>> +}
>> +
>> +static struct resource_table
>> +*mchp_ipc_rproc_get_loaded_rsc_table(struct rproc *rproc, size_t *table_sz)
>> +{
>> +        struct mchp_ipc_rproc *priv = rproc->priv;
>> +
>> +        if (!priv->rsc_table)
>> +                return NULL;
>> +
>> +        *table_sz = SZ_1K;
>> +        return (struct resource_table *)priv->rsc_table;
>> +}
>> +
>> +static const struct rproc_ops mchp_ipc_rproc_ops = {
>> +        .prepare = mchp_ipc_rproc_prepare,
>> +        .start = mchp_ipc_rproc_start,
>> +        .get_loaded_rsc_table = mchp_ipc_rproc_get_loaded_rsc_table,
>> +        .attach = mchp_ipc_rproc_attach,
>> +        .stop = mchp_ipc_rproc_stop,
>> +        .kick = mchp_ipc_rproc_kick,
>> +        .load = rproc_elf_load_segments,
>> +        .parse_fw = mchp_ipc_rproc_parse_fw,
>> +        .find_loaded_rsc_table = rproc_elf_find_loaded_rsc_table,
>> +        .sanity_check = rproc_elf_sanity_check,
>> +        .get_boot_addr = rproc_elf_get_boot_addr,
>> +};
>> +
>> +static int mchp_ipc_rproc_addr_init(struct mchp_ipc_rproc *priv,
>> +                                    struct platform_device *pdev)
>> +{
>> +        struct device *dev = &pdev->dev;
>> +        struct device_node *np = dev->of_node;
>> +        int i, err, rmem_np;
>> +
>> +        rmem_np = of_count_phandle_with_args(np, "memory-region", NULL);
>> +        if (rmem_np <= 0)
>> +                return 0;
>> +
>> +        for (i = 0; i < rmem_np; i++) {
>> +                struct device_node *node;
>> +                struct resource res;
>> +
>> +                node = of_parse_phandle(np, "memory-region", i);
>> +                if (!node)
>> +                        continue;
>> +
>> +                if (!strncmp(node->name, "vdev", strlen("vdev"))) {
>
> Uh? Why do you look for node names? What if I call the name differently?
> Why would that matter?
>
>> +                        of_node_put(node);
>> +                        continue;
>> +                }
>> +
>> +                if (!strcmp(node->name, "rsc-table")) {
>
> No, really. Why are you checking for these?
>
> NAK
>
>
>> +                        err = of_address_to_resource(node, 0, &res);
>> +                        if (err) {
>> +                                of_node_put(node);
>> +                                return dev_err_probe(dev, err,
>> +                                                     "unable to resolve memory region\n");
>> +                        }
>> +                        priv->rsc_table = devm_ioremap(&pdev->dev,
>> +                                                       res.start, resource_size(&res));
>> +                        of_node_put(node);
>> +                }
>> +        }
>> +
>> +        return 0;
>> +}
>
>
> Best regards,
> Krzysztof
>