Message-ID: <20230412165042.jsqhwbn3r364iinr@ripper>
Date: Wed, 12 Apr 2023 09:50:42 -0700
From: Bjorn Andersson <andersson@...nel.org>
To: Chris Lew <quic_clew@...cinc.com>
Cc: Bjorn Andersson <quic_bjorande@...cinc.com>,
Mathieu Poirier <mathieu.poirier@...aro.org>,
linux-arm-msm@...r.kernel.org, linux-remoteproc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] rpmsg: glink: Consolidate TX_DATA and TX_DATA_CONT
On Fri, Apr 07, 2023 at 03:10:45PM -0700, Chris Lew wrote:
>
>
> On 3/27/2023 7:41 AM, Bjorn Andersson wrote:
> > Rather than duplicating most of the code for constructing the initial
> > TX_DATA and subsequent TX_DATA_CONT packets, roll them into a single
> > loop.
> >
> > Signed-off-by: Bjorn Andersson <quic_bjorande@...cinc.com>
> > ---
> > drivers/rpmsg/qcom_glink_native.c | 46 +++++++++----------------------
> > 1 file changed, 13 insertions(+), 33 deletions(-)
> >
> > diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
> > index 62634d020d13..082cf7f4888e 100644
> > --- a/drivers/rpmsg/qcom_glink_native.c
> > +++ b/drivers/rpmsg/qcom_glink_native.c
> > @@ -1309,7 +1309,7 @@ static int __qcom_glink_send(struct glink_channel *channel,
> > int ret;
> > unsigned long flags;
> > int chunk_size = len;
> > - int left_size = 0;
> > + size_t offset = 0;
> > if (!glink->intentless) {
> > while (!intent) {
> > @@ -1343,49 +1343,29 @@ static int __qcom_glink_send(struct glink_channel *channel,
> > iid = intent->id;
> > }
> > - if (wait && chunk_size > SZ_8K) {
> > - chunk_size = SZ_8K;
> > - left_size = len - chunk_size;
> > - }
> > - req.msg.cmd = cpu_to_le16(GLINK_CMD_TX_DATA);
> > - req.msg.param1 = cpu_to_le16(channel->lcid);
> > - req.msg.param2 = cpu_to_le32(iid);
> > - req.chunk_size = cpu_to_le32(chunk_size);
> > - req.left_size = cpu_to_le32(left_size);
> > -
> > - ret = qcom_glink_tx(glink, &req, sizeof(req), data, chunk_size, wait);
> > -
> > - /* Mark intent available if we failed */
> > - if (ret) {
> > - if (intent)
> > - intent->in_use = false;
> > - return ret;
> > - }
> > -
> > - while (left_size > 0) {
> > - data = (void *)((char *)data + chunk_size);
> > - chunk_size = left_size;
> > - if (chunk_size > SZ_8K)
> > + while (offset < len) {
> > + chunk_size = len - offset;
> > + if (chunk_size > SZ_8K && (wait || offset > 0))
>
> offset > 0 seems to be a new condition compared to the previous logic.
> Are we adding this as a cached check because we know if offset is set then
> fragmented sends are allowed?
>
You're right, I believe my intention was to retain the two checks of the
original code: for the first block, only split it if we're waiting, and
for any subsequent blocks, always split.
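
In other words, the old code (paraphrasing the removed hunks above)
picked the chunk size roughly like this:

	/* first chunk: only cap at SZ_8K if we're allowed to wait */
	if (wait && chunk_size > SZ_8K)
		chunk_size = SZ_8K;

	/* continuation chunks: always cap at SZ_8K */
	while (left_size > 0) {
		chunk_size = left_size;
		if (chunk_size > SZ_8K)
			chunk_size = SZ_8K;
		...
	}
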
> I don't think wait would have changed during the loop, so I'm not sure if
> offset > 0 is adding any extra value to the check.
>
But you're totally right: offset > 0 can only occur if wait is set, and
wait won't have changed for subsequent blocks.
So while the check captures the original conditions, it is superfluous.
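
With the offset check dropped, something like this (untested sketch)
should be equivalent and slightly easier to read:

	while (offset < len) {
		chunk_size = len - offset;
		/* only fragment if the caller is allowed to wait */
		if (wait && chunk_size > SZ_8K)
			chunk_size = SZ_8K;
		...
	}
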
Thanks,
Bjorn
> > chunk_size = SZ_8K;
> > - left_size -= chunk_size;
> > - req.msg.cmd = cpu_to_le16(GLINK_CMD_TX_DATA_CONT);
> > + req.msg.cmd = cpu_to_le16(offset == 0 ? GLINK_CMD_TX_DATA : GLINK_CMD_TX_DATA_CONT);
> > req.msg.param1 = cpu_to_le16(channel->lcid);
> > req.msg.param2 = cpu_to_le32(iid);
> > req.chunk_size = cpu_to_le32(chunk_size);
> > - req.left_size = cpu_to_le32(left_size);
> > + req.left_size = cpu_to_le32(len - offset - chunk_size);
> > - ret = qcom_glink_tx(glink, &req, sizeof(req), data,
> > - chunk_size, wait);
> > -
> > - /* Mark intent available if we failed */
> > + ret = qcom_glink_tx(glink, &req, sizeof(req), data + offset, chunk_size, wait);
> > if (ret) {
> > + /* Mark intent available if we failed */
> > if (intent)
> > intent->in_use = false;
> > - break;
> > + return ret;
> > }
> > +
> > + offset += chunk_size;
> > }
> > - return ret;
> > +
> > + return 0;
> > }
> > static int qcom_glink_send(struct rpmsg_endpoint *ept, void *data, int len)