Message-ID: <67a2b4db-fabb-9787-6813-7bd001814bfc@codeaurora.org>
Date: Tue, 22 Aug 2017 19:46:00 +0530
From: Sricharan R <sricharan@...eaurora.org>
To: Arun Kumar Neelakantam <aneela@...eaurora.org>, ohad@...ery.com,
bjorn.andersson@...aro.org, linux-remoteproc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-arm-msm@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH 13/18] rpmsg: glink: Add rx done command
Hi,
>> + /* Take it off the tree of receive intents */
>> + if (!intent->reuse) {
>> + spin_lock(&channel->intent_lock);
>> + idr_remove(&channel->liids, intent->id);
>> + spin_unlock(&channel->intent_lock);
>> + }
>> +
>> + /* Schedule the sending of a rx_done indication */
>> + spin_lock(&channel->intent_lock);
>> + list_add_tail(&intent->node, &channel->done_intents);
>> + spin_unlock(&channel->intent_lock);
>> +
>> + schedule_work(&channel->intent_work);
>
> Adding one more parallel path will hurt performance if this worker cannot get
> CPU cycles or is blocked by other RT or HIGH_PRIO workers on the global worker pool.
The idea is, by design, to have parallel non-blocking paths for rx and tx (the tx
here being the rx_done command sent as part of rx). Trying to send the rx_done
command in the rx isr context is a problem, since the tx can block waiting for
FIFO space and, in the worst case, can even lead to a deadlock if both the local
and remote sides try the same thing. Having said that, instead of queuing this
work on the global workqueue, it could be put on a workqueue owned by the local
glink edge, or done from a threaded isr? Downstream does the rx_done in a
client-specific worker.
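
For illustration, something like the below is what I mean by an edge-owned
queue — only a sketch, where the qcom_glink layout and the rx_done_wq /
qcom_glink_init_rx_done_wq names are made up for this example, not the actual
driver structures:

	/*
	 * Sketch: an ordered workqueue owned by this edge, so the rx_done
	 * work is not starved by unrelated RT or HIGH_PRIO items on the
	 * system-wide pool. Names here are illustrative assumptions.
	 */
	struct qcom_glink {
		/* ... existing edge state ... */
		struct workqueue_struct *rx_done_wq;
	};

	static int qcom_glink_init_rx_done_wq(struct qcom_glink *glink)
	{
		glink->rx_done_wq = alloc_ordered_workqueue("glink_rx_done", 0);
		if (!glink->rx_done_wq)
			return -ENOMEM;
		return 0;
	}

	/* In the rx path, instead of schedule_work(&channel->intent_work): */
	queue_work(glink->rx_done_wq, &channel->intent_work);

That keeps the rx_done tx out of the isr context while giving it a per-edge
worker that does not compete with the global pool.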
Regards,
Sricharan
--
"QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation