Message-ID: <e9b63796-4af2-452c-53de-aab2e7c85866@linux.intel.com>
Date: Wed, 30 Oct 2019 10:18:34 -0500
From: Pierre-Louis Bossart <pierre-louis.bossart@...ux.intel.com>
To: Srinivas Kandagatla <srinivas.kandagatla@...aro.org>,
Vinod Koul <vkoul@...nel.org>
Cc: robh@...nel.org, alsa-devel@...a-project.org,
bgoswami@...eaurora.org, devicetree@...r.kernel.org,
linux-kernel@...r.kernel.org, spapothi@...eaurora.org,
lgirdwood@...il.com, broonie@...nel.org
Subject: Re: [alsa-devel] [PATCH v3 2/2] soundwire: qcom: add support for
SoundWire controller
On 10/30/19 9:56 AM, Srinivas Kandagatla wrote:
>
>
> On 21/10/2019 05:44, Vinod Koul wrote:
>> On 11-10-19, 16:44, Srinivas Kandagatla wrote:
>>
>>> +static irqreturn_t qcom_swrm_irq_handler(int irq, void *dev_id)
>>> +{
>>> + struct qcom_swrm_ctrl *ctrl = dev_id;
>>> + u32 sts, value;
>>> + unsigned long flags;
>>> +
>>> + ctrl->reg_read(ctrl, SWRM_INTERRUPT_STATUS, &sts);
>>> +
>>> + if (sts & SWRM_INTERRUPT_STATUS_CMD_ERROR) {
>>> + ctrl->reg_read(ctrl, SWRM_CMD_FIFO_STATUS, &value);
>>> + dev_err_ratelimited(ctrl->dev,
>>> + "CMD error, fifo status 0x%x\n",
>>> + value);
>>> + ctrl->reg_write(ctrl, SWRM_CMD_FIFO_CMD, 0x1);
>>> + }
>>> +
>>> + if ((sts & SWRM_INTERRUPT_STATUS_NEW_SLAVE_ATTACHED) ||
>>> + sts & SWRM_INTERRUPT_STATUS_CHANGE_ENUM_SLAVE_STATUS)
>>> + schedule_work(&ctrl->slave_work);
>>
>> We are in the irq thread, so why not do the work here rather than
>> schedule it?
>
> The reason is that in sdw_handle_slave_status() we will read the device
> ID registers, which are FIFO-based in this controller, and each read
> triggers an interrupt. So all such reads would time out waiting for an
> interrupt if we did not do this in a separate thread.
Yes, it's similar for Intel: we don't read the device IDs in the handler
either, or the reads would time out. And in the latest patches we also
use a work queue for the slave status handling (due to MSI handling
issues).

Even if this timeout problem did not exist, updates to the slave status
will typically result in additional reads/writes, which are throttled by
the command bandwidth (frame rate), so this status update should really
not be done in a handler. It has to be done in a thread or a work queue.
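
To make the split concrete, something along these lines (a very rough
sketch, not the actual qcom code; the structure and helper names are
made up for illustration, with INIT_WORK() done at probe time):

#include <linux/interrupt.h>
#include <linux/workqueue.h>
#include <linux/soundwire/sdw.h>

/* hypothetical controller structure, for illustration only */
struct my_sdw_ctrl {
	struct sdw_bus bus;
	struct work_struct slave_work;
	enum sdw_slave_status status[SDW_MAX_DEVICES + 1];
};

static void my_sdw_slave_work(struct work_struct *work)
{
	struct my_sdw_ctrl *ctrl =
		container_of(work, struct my_sdw_ctrl, slave_work);

	/*
	 * Process context: a real driver would first refresh
	 * ctrl->status from the hardware; the device ID reads
	 * triggered by sdw_handle_slave_status() can then safely
	 * sleep waiting for the command FIFO interrupts here.
	 */
	sdw_handle_slave_status(&ctrl->bus, ctrl->status);
}

static irqreturn_t my_sdw_irq(int irq, void *dev_id)
{
	struct my_sdw_ctrl *ctrl = dev_id;

	/* ack/decode the interrupt source, then defer the slow part */
	schedule_work(&ctrl->slave_work);

	return IRQ_HANDLED;
}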
>
>
>
>>
>>> +static int qcom_swrm_compute_params(struct sdw_bus *bus)
>>> +{
>>> + struct qcom_swrm_ctrl *ctrl = to_qcom_sdw(bus);
>>> + struct sdw_master_runtime *m_rt;
>>> + struct sdw_slave_runtime *s_rt;
>>> + struct sdw_port_runtime *p_rt;
>>> + struct qcom_swrm_port_config *pcfg;
>>> + int i = 0;
>>> +
>>> + list_for_each_entry(m_rt, &bus->m_rt_list, bus_node) {
>>> + list_for_each_entry(p_rt, &m_rt->port_list, port_node) {
>>> + pcfg = &ctrl->pconfig[p_rt->num - 1];
>>> + p_rt->transport_params.port_num = p_rt->num;
>>> + p_rt->transport_params.sample_interval = pcfg->si + 1;
>>> + p_rt->transport_params.offset1 = pcfg->off1;
>>> + p_rt->transport_params.offset2 = pcfg->off2;
>>> + }
>>> +
>>> + list_for_each_entry(s_rt, &m_rt->slave_rt_list, m_rt_node) {
>>> + list_for_each_entry(p_rt, &s_rt->port_list, port_node) {
>>> + pcfg = &ctrl->pconfig[i];
>>> + p_rt->transport_params.port_num = p_rt->num;
>>> + p_rt->transport_params.sample_interval =
>>> + pcfg->si + 1;
>>> + p_rt->transport_params.offset1 = pcfg->off1;
>>> + p_rt->transport_params.offset2 = pcfg->off2;
>>> + i++;
>>> + }
>>
>> Can you explain this one? I am not sure I understood it. This fn is
>> supposed to compute and fill up the params; all I can see is filling!
>>
> Bandwidth parameters currently come from the board-specific Device
> Tree; they are programmed here.
'compute' does not mean 'dynamic on-demand bandwidth allocation'; it is
perfectly legal to use fixed tables, as done here.
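
FWIW nothing in the bus core requires the callback to be dynamic; a
minimal fixed-table compute_params could look like this (hypothetical
table values, and it uses the drivers/soundwire private "bus.h" for the
runtime structures, as this driver does):

#include <linux/soundwire/sdw.h>
#include "bus.h"

/* hypothetical fixed table, indexed by port number - 1 */
static const struct sdw_transport_params fixed_tparams[] = {
	{ .port_num = 1, .sample_interval = 16, .offset1 = 1 },
	{ .port_num = 2, .sample_interval = 16, .offset1 = 3 },
};

static int my_compute_params(struct sdw_bus *bus)
{
	struct sdw_master_runtime *m_rt;
	struct sdw_port_runtime *p_rt;

	/* apply the same fixed entry to every port runtime */
	list_for_each_entry(m_rt, &bus->m_rt_list, bus_node)
		list_for_each_entry(p_rt, &m_rt->port_list, port_node)
			p_rt->transport_params = fixed_tparams[p_rt->num - 1];

	return 0;
}

with bus->compute_params pointing at it. Reading the table from DT, as
done here, is just one way to fill it.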
>
>>> +static const struct snd_soc_dai_ops qcom_swrm_pdm_dai_ops = {
>>> + .hw_params = qcom_swrm_hw_params,
>>> + .prepare = qcom_swrm_prepare,
>>> + .hw_free = qcom_swrm_hw_free,
>>> + .startup = qcom_swrm_startup,
>>> + .shutdown = qcom_swrm_shutdown,
>>> + .set_sdw_stream = qcom_swrm_set_sdw_stream,
>>
>> why does the indent look off to me?
>>
> Yep, fixed in the next version.
>
> --srini