Message-ID: <319616d8-533e-48c6-b97e-6285d284ac9e@ti.com>
Date: Mon, 3 Jun 2024 14:26:06 +0530
From: Yojana Mallik <y-mallik@...com>
To: Andrew Lunn <andrew@...n.ch>
CC: <schnelle@...ux.ibm.com>, <wsa+renesas@...g-engineering.com>,
<diogo.ivo@...mens.com>, <rdunlap@...radead.org>, <horms@...nel.org>,
<vigneshr@...com>, <rogerq@...com>, <danishanwar@...com>,
<pabeni@...hat.com>, <kuba@...nel.org>, <edumazet@...gle.com>,
<davem@...emloft.net>, <netdev@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <srk@...com>, <rogerq@...nel.org>,
<s-vadapalli@...com>
Subject: Re: [PATCH net-next v2 1/3] net: ethernet: ti: RPMsg based shared
memory ethernet driver
Hi Andrew,
On 6/2/24 21:51, Andrew Lunn wrote:
>> +struct request_message {
>> + u32 type; /* Request Type */
>> + u32 id; /* Request ID */
>> +} __packed;
>> +
>> +struct response_message {
>> + u32 type; /* Response Type */
>> + u32 id; /* Response ID */
>> +} __packed;
>> +
>> +struct notify_message {
>> + u32 type; /* Notify Type */
>> + u32 id; /* Notify ID */
>> +} __packed;
>
> These are basically identical.
>
The first patch introduces only the RPMsg-based driver; the RPMsg driver is
registered as a network device in the second patch. struct icve_mac_addr
mac_addr is added as a member of struct request_message in the second patch,
and struct icve_shm shm_info is added as a member of struct response_message,
so from the second patch onward struct request_message and struct
response_message are no longer identical. Those members are used by the
network device driver. As this patch introduces only the RPMsg-based ethernet
driver, they are not used here and were therefore not included, which I
understand makes the structures look identical in this patch. Kindly suggest
whether I should add these members in this patch itself instead of
introducing them in the next patch.
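For illustration, a minimal user-space sketch of how the two structures diverge in the second patch; the contents of struct icve_mac_addr and struct icve_shm are hypothetical here (a 6-byte MAC and a base/size pair), chosen only to show that the sizes differ:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the members added in patch 2/3. */
struct icve_mac_addr {
	uint8_t addr[6];
};

struct icve_shm {
	uint64_t base;
	uint32_t size;
};

struct request_message {
	uint32_t type;			/* Request Type */
	uint32_t id;			/* Request ID */
	struct icve_mac_addr mac_addr;	/* added in the second patch */
};

struct response_message {
	uint32_t type;			/* Response Type */
	uint32_t id;			/* Response ID */
	struct icve_shm shm_info;	/* added in the second patch */
};
```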
> The packed should not be needed, since these structures are naturally
> aligned. The compiler will do the right thing without the
> __packed. And there is a general dislike for __packed. It is better to
> lay out your structures correctly so they are not needed.
>
>> +struct message_header {
>> + u32 src_id;
>> + u32 msg_type; /* Do not use enum type, as enum size is compiler dependent */
>> +} __packed;
>> +
>> +struct message {
>> + struct message_header msg_hdr;
>> + union {
>> + struct request_message req_msg;
>> + struct response_message resp_msg;
>> + struct notify_message notify_msg;
>> + };
>
> Since they are identical, why bother with a union? It could be argued
> it allows future extensions, but i don't see any sort of protocol
> version here so you can tell if extra fields have been added.
>
struct icve_mac_addr mac_addr is added as a member of struct request_message
in the second patch, and struct icve_shm shm_info is added as a member of
struct response_message, so the sizes of the structures differ there; that is
why the union is used. I moved the newly added members to the second patch
because they are not used in the first patch, which I understand makes the
structures look identical here. If you suggest, I will move the newly added
members from the second patch to this one so the structures no longer look
identical in this patch.
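A sketch of why the union is kept: once the second patch grows the request and response structures, the union sizes struct message to its largest member. The member contents below are hypothetical placeholders standing in for struct icve_mac_addr and struct icve_shm:

```c
#include <assert.h>
#include <stdint.h>

struct message_header {
	uint32_t src_id;
	uint32_t msg_type;
};

struct request_message {
	uint32_t type;
	uint32_t id;
	uint8_t mac_addr[6];	/* stands in for struct icve_mac_addr */
};

struct response_message {
	uint32_t type;
	uint32_t id;
	uint64_t shm_base;	/* stands in for struct icve_shm */
	uint32_t shm_size;
};

struct notify_message {
	uint32_t type;
	uint32_t id;
};

struct message {
	struct message_header msg_hdr;
	union {
		struct request_message req_msg;
		struct response_message resp_msg;
		struct notify_message notify_msg;
	};
};
```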
>> +static int icve_rpmsg_cb(struct rpmsg_device *rpdev, void *data, int len,
>> + void *priv, u32 src)
>> +{
>> + struct icve_common *common = dev_get_drvdata(&rpdev->dev);
>> + struct message *msg = (struct message *)data;
>> + u32 msg_type = msg->msg_hdr.msg_type;
>> + u32 rpmsg_type;
>> +
>> + switch (msg_type) {
>> + case ICVE_REQUEST_MSG:
>> + rpmsg_type = msg->req_msg.type;
>> + dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
>> + msg_type, rpmsg_type);
>> + break;
>> + case ICVE_RESPONSE_MSG:
>> + rpmsg_type = msg->resp_msg.type;
>> + dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
>> + msg_type, rpmsg_type);
>> + break;
>> + case ICVE_NOTIFY_MSG:
>> + rpmsg_type = msg->notify_msg.type;
>> + dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
>> + msg_type, rpmsg_type);
>
> This can be flattened to:
>
>> + case ICVE_REQUEST_MSG:
>> + case ICVE_RESPONSE_MSG:
>> + case ICVE_NOTIFY_MSG:
>> + rpmsg_type = msg->notify_msg.type;
>> + dev_dbg(common->dev, "Msg type = %d; RPMsg type = %d\n",
>> + msg_type, rpmsg_type);
>
New switch case statements for rpmsg_type have been added in the second patch
under case ICVE_RESPONSE_MSG. This makes case ICVE_REQUEST_MSG, case
ICVE_RESPONSE_MSG and case ICVE_NOTIFY_MSG different in the second patch. I
have kept icve_rpmsg_cb simple in this patch as it is called by the .callback.
Do you suggest flattening these switch cases only for this patch?
> which makes me wonder about the value of this. Yes, later patches are
> going to flesh this out, but what value is there in printing the
> numerical value of msg_type, when you could easily have the text
> "Request", "Response", and "Notify". And why not include src_id and id
> in this debug output? If you are going to add debug output, please
> make it complete, otherwise it is often not useful.
>
I will modify the debug output to print text such as "Request", "Response",
and "Notify" instead of the numerical value of msg_type, and I will include
src_id and id to make the debug output complete and meaningful.
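Something along these lines, as a sketch; the helper name and the enum values are assumptions for illustration and may not match the driver:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Assumed message type values, for illustration only. */
enum icve_msg_type {
	ICVE_REQUEST_MSG,
	ICVE_RESPONSE_MSG,
	ICVE_NOTIFY_MSG,
};

/* Map a message type code to text for debug output. */
static const char *icve_msg_type_str(uint32_t msg_type)
{
	switch (msg_type) {
	case ICVE_REQUEST_MSG:
		return "Request";
	case ICVE_RESPONSE_MSG:
		return "Response";
	case ICVE_NOTIFY_MSG:
		return "Notify";
	default:
		return "Unknown";
	}
}
```

The dev_dbg() could then become something like
dev_dbg(common->dev, "%s msg: src_id=%u id=%u RPMsg type=%u\n", ...) using
this helper.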
>> + break;
>> + default:
>> + dev_err(common->dev, "Invalid msg type\n");
>> + break;
> That is a potential way for the other end to DoS you. It also makes
> changes to the protocol difficult, since you cannot add new messages
> without DoS a machine using the old protocol. It would be better to
> just increment a counter and keep going.
>
I will modify the default case to return -EINVAL.
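For reference, a user-space sketch of the counter-based alternative suggested above, where an unrecognized type is counted and dropped rather than logged per message; the stats structure and field name are hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-device counter for unrecognized message types. */
struct icve_stats {
	unsigned long rx_unknown_msg;
};

static int icve_handle_msg_type(struct icve_stats *stats, uint32_t msg_type)
{
	switch (msg_type) {
	case 0: /* ICVE_REQUEST_MSG */
	case 1: /* ICVE_RESPONSE_MSG */
	case 2: /* ICVE_NOTIFY_MSG */
		return 0;
	default:
		/* Count and keep going, so a newer peer speaking an
		 * extended protocol cannot flood the kernel log. */
		stats->rx_unknown_msg++;
		return 0;
	}
}
```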
>> +static void icve_rpmsg_remove(struct rpmsg_device *rpdev)
>> +{
>> + dev_info(&rpdev->dev, "icve rpmsg client driver is removed\n");
>
> Please don't spam the logs. dev_dbg(), or nothing at all.
>
> Andrew
I will remove the dev_info from icve_rpmsg_remove.
Thanks for your feedback.