Message-ID: <01e03c55-1fcf-1e33-78e8-398a50b622ce@linux.intel.com>
Date: Fri, 26 Aug 2022 10:06:46 +0200
From: Pierre-Louis Bossart <pierre-louis.bossart@...ux.intel.com>
To: Richard Fitzgerald <rf@...nsource.cirrus.com>, vkoul@...nel.org,
yung-chuan.liao@...ux.intel.com, sanyog.r.kale@...el.com
Cc: patches@...nsource.cirrus.com, alsa-devel@...a-project.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] soundwire: bus: Fix lost UNATTACH when re-enumerating
>> On 8/25/22 14:22, Richard Fitzgerald wrote:
>>> Rearrange sdw_handle_slave_status() so that any peripherals
>>> on device #0 that are given a device ID are reported as
>>> unattached. This ensures that UNATTACH status is not lost.
>>>
>>> Handle unenumerated devices first and update the
>>> sdw_slave_status array to indicate IDs that must have become
>>> UNATTACHED.
>>>
>>> Look for UNATTACHED devices after this so we can pick up
>>> peripherals that were UNATTACHED in the original PING status
>>> and those that were still ATTACHED at the time of the PING but
>>> then reverted to unenumerated and were found by
>>> sdw_program_device_num().
>>
>> Are those two cases really lost completely? It's a bit surprising; I
>> do recall that we added a recheck on the status, see the
>> 'update_status' label in cdns_update_slave_status_work().
>>
>
> Yes, they are. We see this happen extremely frequently (almost every
> time) when we reset our peripherals after a firmware change.
>
> I saw that "try again" code in cdns_update_slave_status_work(), but
> it doesn't fix the problem. That's possibly because it only looks for
> devices still on #0, and that isn't the failure mode here.
>
> The cdns_update_slave_status_work() function runs in one workqueue
> thread, while the child drivers run in other threads. So for example:
>
> 1. Child driver #1 resets #1
> 2. PING: #1 has reverted to #0, #2 still ATTACHED
> 3. cdns_update_slave_status() snapshots the status: #2 is ATTACHED
> 4. #1 has gone, so mark it UNATTACHED
> 5. Child driver #2 gets some CPU time and resets #2
> 6. PING: #2 has reset, both are now on #0, but we are handling the
>    previous PING
> 7. sdw_handle_slave_status() - the snapshot PING (from step 3) says
>    #2 is attached
> 8. Device on #0, so call sdw_program_device_num()
> 9. sdw_program_device_num() loops until no devices are left on #0;
>    #1 and #2 are both reprogrammed; sdw_handle_slave_status() returns
> 10. PING: #1 and #2 both attached
> 11. cdns_update_slave_status() -> sdw_handle_slave_status()
> 12. #1 has changed UNATTACHED->ATTACHED, but we never got a PING with
>     #2 unattached, so its slave->status == ATTACHED, "it hasn't
>     changed" (wrong!)
>
> Now, at step 10 the Cadence IP may have accumulated both UNATTACH and
> ATTACH flags, and perhaps it should be smarter about deciding what to
> report when there are multiple states. However, that's the behaviour
> of the Cadence IP; other IP may be different, so it's probably unwise
> to assume that the IP has "remembered" the UNATTACH state from before
> it was reprogrammed.
>
> If we reprogrammed it, it was definitely UNATTACHED so let's say that.
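>
> In rough code terms, the reordering amounts to this (simplified
> sketch, not the literal diff; sdw_device_was_reprogrammed() stands
> in for however the real code tracks it):
>
>         /* deal with device #0 first */
>         if (status[0] == SDW_SLAVE_ATTACHED) {
>                 sdw_program_device_num(bus);
>
>                 /* Anything that was just given a device ID must have
>                  * reverted to #0 after the PING snapshot was taken,
>                  * so it was definitely UNATTACHED at some point:
>                  * update the snapshot to say so. */
>                 for (i = 1; i <= SDW_MAX_DEVICES; i++)
>                         if (sdw_device_was_reprogrammed(bus, i))
>                                 status[i] = SDW_SLAVE_UNATTACHED;
>         }
>
>         /* the normal UNATTACHED pass now also reports the devices
>          * that reverted between the PING and this handler running */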
Thanks for the detailed answer; this sequence of events will certainly
defeat the Cadence IP and the way the sticky bits are handled.

The UNATTACHED case was assumed to be a really rare case of losing
sync, i.e. a SOFT_RESET in SoundWire parlance. An explicit device
reset is a new scenario that was not considered before on any of the
existing commercial SoundWire devices. It's however something we need
to support, and your work here is much appreciated.

I still think we should re-check the actual status from a PING frame, in
order to work with more current data than the sticky bits taken at an
earlier time, but that would only be a minor improvement.
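
Something along these lines in the Cadence update work, as a sketch
only - cdns_fill_ping_status() is a made-up helper here, and I'm going
from memory on the register name:

        u32 ping;

        /* read the live PING status instead of trusting the sticky
         * bits latched earlier by the interrupt handler */
        ping = cdns_readl(cdns, CDNS_MCP_SLAVE_STAT);
        cdns_fill_ping_status(cdns, ping); /* decode into cdns->status[] */
        sdw_handle_slave_status(&cdns->bus, cdns->status);
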
I also have a vague feeling that additional work is needed to make
sure the DAIs are not used before that second enumeration and all
firmware downloads are complete. I did a couple of tests last year
where I used the debugfs interface to issue a device reset command
while streaming audio, and the detach/reattach was not handled at the
ASoC level.
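
For example, something like this on the codec driver side (pure
hand-waving, all names invented):

        static int xxx_dai_startup(struct snd_pcm_substream *substream,
                                   struct snd_soc_dai *dai)
        {
                struct xxx_priv *xxx =
                        snd_soc_component_get_drvdata(dai->component);

                /* don't start streaming until the post-reset
                 * enumeration and firmware download have finished */
                if (!wait_for_completion_timeout(&xxx->fw_ready,
                                                 msecs_to_jiffies(5000)))
                        return -ETIMEDOUT;

                return 0;
        }
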
I really don't see any logical flaws in your patch as is, so
Reviewed-by: Pierre-Louis Bossart <pierre-louis.bossart@...ux.intel.com>