Message-ID: <6b36382b-8889-0a10-d276-fe6d5bd1874e@codeaurora.org>
Date:   Wed, 23 Aug 2017 10:14:44 +0530
From:   Arun Kumar Neelakantam <aneela@...eaurora.org>
To:     Sricharan R <sricharan@...eaurora.org>, ohad@...ery.com,
        bjorn.andersson@...aro.org, linux-remoteproc@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-arm-msm@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH 13/18] rpmsg: glink: Add rx done command



On 8/22/2017 7:46 PM, Sricharan R wrote:
> Hi,
>>> +    /* Take it off the tree of receive intents */
>>> +    if (!intent->reuse) {
>>> +        spin_lock(&channel->intent_lock);
>>> +        idr_remove(&channel->liids, intent->id);
>>> +        spin_unlock(&channel->intent_lock);
>>> +    }
>>> +
>>> +    /* Schedule the sending of a rx_done indication */
>>> +    spin_lock(&channel->intent_lock);
>>> +    list_add_tail(&intent->node, &channel->done_intents);
>>> +    spin_unlock(&channel->intent_lock);
>>> +
>>> +    schedule_work(&channel->intent_work);
>> Adding one more parallel path will hurt performance if this worker cannot get CPU cycles
>> or is blocked by other RT or HIGH_PRIO workers on the global worker pool.
>   The idea is, by design, to have parallel non-blocking paths for rx and tx (the rx_done
>   command is sent as part of rx). Otherwise, trying to send the rx_done command in the
>   rx ISR context is a problem, since tx can wait for FIFO space and, in the worst case,
>   can even lead to a potential deadlock if both the local and remote sides try the same.
>   Having said that, instead of queuing this work on the global queue, it could be put on
>   a local glink-edge-owned queue, or handled in a threaded ISR? Downstream does the
>   rx_done in a client-specific worker.

Yes, mixing the RX and TX paths will cause a deadlock. I am okay with using a
dedicated queue with HIGH_PRIO or a threaded ISR.
Downstream uses both a client-specific worker and the client RX callback [this mixes
the TX and RX paths], which we want to avoid.
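
A minimal sketch of the edge-owned queue idea (the glink->wq member, the
"glink_rx_done" name and the WQ_HIGHPRI flag are assumptions here, not part of the
posted series):

    /* at edge probe time: one ordered, high-priority workqueue per edge */
    glink->wq = alloc_ordered_workqueue("glink_rx_done", WQ_HIGHPRI);
    if (!glink->wq)
        return -ENOMEM;

    /* in the rx_done path, instead of schedule_work() on the global pool */
    spin_lock(&channel->intent_lock);
    list_add_tail(&intent->node, &channel->done_intents);
    spin_unlock(&channel->intent_lock);
    queue_work(glink->wq, &channel->intent_work);

    /* at edge remove time */
    destroy_workqueue(glink->wq);

This keeps the rx_done tx off the global worker pool, so RT or HIGH_PRIO work elsewhere
cannot starve it, while the actual FIFO write still happens outside ISR context.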
>
> Regards,
>   Sricharan
>
