Date: Fri, 14 Jun 2024 14:38:24 +0530
From: Yojana Mallik <y-mallik@...com>
To: Andrew Lunn <andrew@...n.ch>
CC: <schnelle@...ux.ibm.com>, <wsa+renesas@...g-engineering.com>,
        <diogo.ivo@...mens.com>, <rdunlap@...radead.org>, <horms@...nel.org>,
        <vigneshr@...com>, <rogerq@...com>, <danishanwar@...com>,
        <pabeni@...hat.com>, <kuba@...nel.org>, <edumazet@...gle.com>,
        <davem@...emloft.net>, <netdev@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <srk@...com>, <rogerq@...nel.org>,
        Siddharth
 Vadapalli <s-vadapalli@...com>, <y-mallik@...com>
Subject: Re: [PATCH net-next v2 2/3] net: ethernet: ti: Register the RPMsg
 driver as network device



On 6/12/24 20:29, Andrew Lunn wrote:
>> The shared memory address space on the AM64x board is 2G, and a u32 data type
>> for the address is sufficient to cover it. In order to make the driver generic,
>> so that it works with systems that have more than a 4G address space, we can
>> change the base addr data type to u64 in the virtual driver code, and make the
>> corresponding changes in the firmware.
> 
> You probably need to think about this concept in a more generic
> way. You have a block of memory which is physically shared between two
> CPUs. Does each have its own MMU involved in accesses this memory? Why
> would each see the memory at the same physical address? Why does one
> CPU actually know anything about the memory layout of another CPU, and
> can tell it how to use its own memory? Do not think about your AM64x
> board when answering these questions. Think about an abstract system,
> two CPUs with a block of shared memory. Maybe it is even a CPU and a
> GPU with shared memory, etc. 
> 
>> The shared memory layout is modeled as circular buffer.
>> /*      Shared Memory Layout
>>  *
>>  *	---------------------------	*****************
>>  *	|        MAGIC_NUM        |	 icve_shm_head
>>  *	|          HEAD           |
>>  *	---------------------------	*****************
>>  *	|        MAGIC_NUM        |
>>  *	|        PKT_1_LEN        |
>>  *	|          PKT_1          |
>>  *	---------------------------
>>  *	|        MAGIC_NUM        |
>>  *	|        PKT_2_LEN        |	 icve_shm_buf
>>  *	|          PKT_2          |
>>  *	---------------------------
>>  *	|           .             |
>>  *	|           .             |
>>  *	---------------------------
>>  *	|        MAGIC_NUM        |
>>  *	|        PKT_N_LEN        |
>>  *	|          PKT_N          |
>>  *	---------------------------	****************
>>  *	|        MAGIC_NUM        |      icve_shm_tail
>>  *	|          TAIL           |
>>  *	---------------------------	****************
>>  */
>>
>> Linux retrieves the following info provided in response by R5 core:
>>
>> Tx buffer head address which is stored in port->tx_buffer->head
>>
>> Tx buffer buffer's base address which is stored in port->tx_buffer->buf->base_addr
>>
>> Tx buffer tail address which is stored in port->tx_buffer->tail
>>
>> The number of packets that can be put into Tx buffer which is stored in
>> port->icve_tx_max_buffers
>>
>> Rx buffer head address which is stored in port->rx_buffer->head
>>
>> Rx buffer buffer's base address which is stored in port->rx_buffer->buf->base_addr
>>
>> Rx buffer tail address which is stored in port->rx_buffer->tail
>>
>> The number of packets that are put into Rx buffer which is stored in
>> port->icve_rx_max_buffers
> 
> I think most of these should not be pointers, but offsets from the
> base of the shared memory. It then does not matter if they are mapped
> at different physical addresses on each CPU.
> 
>> Linux trusts these addresses sent by the R5 core to send or receive ethernet
>> packets. In this way, both CPUs map the same physical addresses.
> 
> I'm not sure Linux should trust the R5. For a generic implementation,
> the trust should be held to a minimum. There needs to be an agreement
> about how the shared memory is partitioned, but each end needs to
> verify that the memory is in fact valid, that none of the data
> structures point outside of the shared memory etc. Otherwise one
> system can cause memory corruption on the other, and that sort of bug
> is going to be very hard to debug.
> 
> 	Andrew
> 

The Linux remoteproc driver, which initializes the remote processor cores,
carves out a section of DDR memory as reserved memory for each remote processor
on the SoC. This memory region is described in the Linux device tree as a
reserved-memory node. Out of the memory reserved for the R5 core, a portion is
set aside as shared memory.
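
For reference, a minimal sketch of how a driver could resolve such a carveout;
the icve_get_shm_region() helper and the "memory-region" phandle here are
assumptions for illustration, not the actual driver code:

	/*
	 * Minimal sketch, not the actual driver code: resolving a
	 * reserved-memory carveout referenced by a hypothetical
	 * "memory-region" phandle in the driver's device tree node.
	 */
	#include <linux/device.h>
	#include <linux/of.h>
	#include <linux/of_address.h>
	#include <linux/ioport.h>

	static int icve_get_shm_region(struct device *dev, struct resource *res)
	{
		struct device_node *np;
		int ret;

		np = of_parse_phandle(dev->of_node, "memory-region", 0);
		if (!np)
			return -ENODEV;

		ret = of_address_to_resource(np, 0, res);
		of_node_put(np);

		/* res->start and resource_size(res) now describe the carveout */
		return ret;
	}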

The shared memory is divided into two distinct regions: one for the A53 -> R5
data path (the Tx buffer for Linux), and the other for the R5 -> A53 data path
(the Rx buffer for Linux).
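
As an illustrative sketch only, one way the two regions could be represented on
the Linux side, using the field names quoted above (the exact types and layout
in the real driver may differ):

	#include <linux/types.h>

	struct icve_shm_buf {
		void __iomem *base_addr;	/* start of the packet slot area */
	};

	struct icve_shared_mem {
		void __iomem *head;		/* MAGIC_NUM + HEAD block */
		struct icve_shm_buf *buf;	/* packet slots */
		void __iomem *tail;		/* MAGIC_NUM + TAIL block */
	};

	struct icve_port {
		struct icve_shared_mem *tx_buffer;	/* A53 -> R5 (Linux Tx) */
		struct icve_shared_mem *rx_buffer;	/* R5 -> A53 (Linux Rx) */
		u32 icve_tx_max_buffers;
		u32 icve_rx_max_buffers;
	};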

Four parameters, namely the total shared memory size, the number of packets,
the buffer slot size and the base address of the buffer, have been hardcoded
into the firmware for both the Tx and Rx buffers. The R5 core reports these
four parameters, and Linux retrieves them from the message received in
icve_rpmsg_cb.
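
A hedged sketch of how such a message might be handled on the Linux side; the
struct layout and field names here are assumptions, not the actual firmware
ABI:

	#include <linux/rpmsg.h>
	#include <linux/types.h>

	/* Hypothetical layout of the buffer-description message. */
	struct icve_shm_info {
		__le32 total_shm_size;	/* total size of the Tx or Rx region */
		__le32 num_pkt_bufs;	/* number of packet slots */
		__le32 buf_slot_size;	/* size of one slot (MAGIC_NUM + LEN + PKT) */
		__le32 base_addr;	/* physical base address of the region */
	} __packed;

	static int icve_rpmsg_cb(struct rpmsg_device *rpdev, void *data, int len,
				 void *priv, u32 src)
	{
		struct icve_shm_info *info = data;

		if (len < sizeof(*info))
			return -EINVAL;	/* reject truncated messages */

		/* ...store the four parameters in the port's Tx/Rx buffer state... */
		return 0;
	}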

Using the received base address of the Tx or Rx shared memory, together with
the received number of packets and buffer slot size, the driver calculates the
buffer's head address, the base address of the shared memory buffer, and the
tail address.
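
As an illustration of that arithmetic (the offsets follow the circular-buffer
layout quoted above; the helper name and the head-area constant are
placeholders):

	#include <linux/types.h>

	#define ICVE_HEAD_AREA_SIZE	8U	/* placeholder: MAGIC_NUM + HEAD */

	static void icve_shm_compute(phys_addr_t base, u32 num_pkts, u32 slot_size,
				     phys_addr_t *head_pa, phys_addr_t *buf_pa,
				     phys_addr_t *tail_pa)
	{
		*head_pa = base;				/* MAGIC_NUM + HEAD  */
		*buf_pa  = base + ICVE_HEAD_AREA_SIZE;		/* first packet slot */
		*tail_pa = *buf_pa + (phys_addr_t)num_pkts * slot_size;
							/* MAGIC_NUM + TAIL  */
	}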

The Linux driver uses the ioremap function to map these calculated physical
addresses to virtual addresses, and uses the virtual addresses to send packets
to the remote core in the icve_start_xmit function.
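
A minimal sketch of that mapping step, with placeholder names:

	#include <linux/io.h>

	static void __iomem *icve_map_region(phys_addr_t pa, size_t size)
	{
		/*
		 * icve_start_xmit() would then access the shared buffer through
		 * this mapping, e.g. with memcpy_toio()/memcpy_fromio().
		 */
		return ioremap(pa, size);
	}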

It has been agreed by design that the remote core and Linux will use a
particular start address for the buffer, and this address has been hardcoded
into the firmware running on the remote core. Since the address is hardcoded
in the firmware, it could also be hardcoded in the Linux driver code, and a
check could then be made that the address received from the remote core
matches the address hardcoded in the driver.
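
As a rough sketch of what such a check might look like, here as a range check
against the reserved-memory carveout rather than a strict equality test
against a hardcoded value (names are placeholders, and overflow checks are
omitted for brevity):

	#include <linux/ioport.h>
	#include <linux/types.h>

	static bool icve_shm_region_valid(const struct resource *carveout,
					  phys_addr_t base, resource_size_t len)
	{
		/* The reported region must lie entirely inside the carveout. */
		return base >= carveout->start &&
		       base + len - 1 <= carveout->end;
	}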

But viewing the driver from a generic perspective, the driver may interact
with different firmware whose start address for the shared memory region does
not match the one hardcoded into the Linux driver.

This is why it has been decided to hardcode the start address only in the
firmware: the remote core sends it to Linux, and Linux uses it.

Kindly suggest in what other ways the driver can learn the start address of
the shared memory if it is not informed by the remote core, and also how to
check whether the address is valid.

Thanks and regards,
Yojana Mallik
