[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <8b4dc94a-0d59-499f-8f28-d503e91f2b27@lunn.ch>
Date: Wed, 12 Jun 2024 16:59:13 +0200
From: Andrew Lunn <andrew@...n.ch>
To: Yojana Mallik <y-mallik@...com>
Cc: schnelle@...ux.ibm.com, wsa+renesas@...g-engineering.com,
diogo.ivo@...mens.com, rdunlap@...radead.org, horms@...nel.org,
vigneshr@...com, rogerq@...com, danishanwar@...com,
pabeni@...hat.com, kuba@...nel.org, edumazet@...gle.com,
davem@...emloft.net, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, srk@...com, rogerq@...nel.org,
Siddharth Vadapalli <s-vadapalli@...com>
Subject: Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg
driver as network device
> The shared memory address space in AM64x board is 2G and u32 data type for
> address works to use this address space. In order to make the driver generic,to
> work with systems that have more than 4G address space, we can change the base
> addr data type to u64 in the virtual driver code and the corresponding
> necessary changes have to be made in the firmware.
You probably need to think about this concept in a more generic
way. You have a block of memory which is physically shared between two
CPUs. Does each have its own MMU involved in accesses this memory? Why
would each see the memory at the same physical address? Why does one
CPU actually know anything about the memory layout of another CPU, and
can tell it how to use its own memory? Do not think about your AM64x
board when answering these questions. Think about an abstract system,
two CPUs with a block of shared memory. Maybe it is even a CPU and a
GPU with shared memory, etc.
> The shared memory layout is modeled as circular buffer.
> /* Shared Memory Layout
> *
> * --------------------------- *****************
> * | MAGIC_NUM | icve_shm_head
> * | HEAD |
> * --------------------------- *****************
> * | MAGIC_NUM |
> * | PKT_1_LEN |
> * | PKT_1 |
> * ---------------------------
> * | MAGIC_NUM |
> * | PKT_2_LEN | icve_shm_buf
> * | PKT_2 |
> * ---------------------------
> * | . |
> * | . |
> * ---------------------------
> * | MAGIC_NUM |
> * | PKT_N_LEN |
> * | PKT_N |
> * --------------------------- ****************
> * | MAGIC_NUM | icve_shm_tail
> * | TAIL |
> * --------------------------- ****************
> */
>
> Linux retrieves the following info provided in response by R5 core:
>
> Tx buffer head address which is stored in port->tx_buffer->head
>
> Tx buffer buffer's base address which is stored in port->tx_buffer->buf->base_addr
>
> Tx buffer tail address which is stored in port->tx_buffer->tail
>
> The number of packets that can be put into Tx buffer which is stored in
> port->icve_tx_max_buffers
>
> Rx buffer head address which is stored in port->rx_buffer->head
>
> Rx buffer buffer's base address which is stored in port->rx_buffer->buf->base_addr
>
> Rx buffer tail address which is stored in port->rx_buffer->tail
>
> The number of packets that are put into Rx buffer which is stored in
> port->icve_rx_max_buffers
I think most of these should not be pointers, but offsets from the
base of the shared memory. It then does not matter if they are mapped
at different physical addresses on each CPU.
> Linux trusts these addresses sent by the R5 core to send or receive ethernet
> packets. By this way both the CPUs map to the same physical address.
I'm not sure Linux should trust the R5. For a generic implementation,
the trust should be held to a minimum. There needs to be an agreement
about how the shared memory is partitioned, but each end needs to
verify that the memory is in fact valid, that none of the data
structures point outside of the shared memory etc. Otherwise one
system can cause memory corruption on the other, and that sort of bug
is going to be very hard to debug.
Andrew
Powered by blists - more mailing lists