Message-ID: <99c0a747-aa76-95ff-ad03-723ff092b85e@opensource.cirrus.com>
Date:   Fri, 26 Aug 2022 11:38:41 +0100
From:   Richard Fitzgerald <rf@...nsource.cirrus.com>
To:     Pierre-Louis Bossart <pierre-louis.bossart@...ux.intel.com>,
        <vkoul@...nel.org>, <yung-chuan.liao@...ux.intel.com>,
        <sanyog.r.kale@...el.com>
CC:     <patches@...nsource.cirrus.com>, <alsa-devel@...a-project.org>,
        <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 3/3] soundwire: bus: Fix lost UNATTACH when re-enumerating

On 25/08/2022 16:25, Richard Fitzgerald wrote:
> On 25/08/2022 15:24, Pierre-Louis Bossart wrote:
>> Humm, I am struggling a bit more on this patch.
>>
>> On 8/25/22 14:22, Richard Fitzgerald wrote:
>>> Rearrange sdw_handle_slave_status() so that any peripherals
>>> on device #0 that are given a device ID are reported as
>>> unattached. This ensures that the UNATTACH status is not lost.
>>>
>>> Handle unenumerated devices first and update the
>>> sdw_slave_status array to indicate IDs that must have become
>>> UNATTACHED.
>>>
>>> Look for UNATTACHED devices after this so we can pick up
>>> peripherals that were UNATTACHED in the original PING status
>>> and those that were still ATTACHED at the time of the PING but
>>> then reverted to unenumerated and were found by
>>> sdw_program_device_num().
>>
>> Are those two cases really lost completely? It's a bit surprising; I do
>> recall that we added a recheck on the status, see the 'update_status'
>> label in cdns_update_slave_status_work()
>>
> 
> Yes, they are. We see this happen extremely frequently (almost
> every time) when we reset our peripherals after a firmware change.
> 
> I saw the "try again" handling in cdns_update_slave_status_work(), but
> it doesn't fix the problem, perhaps because it's only looking for
> devices still on #0, and that isn't what is going wrong here.
> 
> cdns_update_slave_status_work() runs in one workqueue thread, while the
> child drivers run in other threads. So, for example:
> 
> 1. Child driver #1 resets #1
> 2. PING: #1 has reverted to #0, #2 still ATTACHED
> 3. cdns_update_slave_status() snapshots the status. #2 is ATTACHED
> 4. #1 has gone, so mark it UNATTACHED
> 5. Child driver #2 gets some CPU time and resets #2
> 6. PING: #2 has reset, both are now on #0, but we are handling the
>    previous PING
> 7. sdw_handle_slave_status() - the snapshot PING (from step 3) says #2
>    is attached
> 8. Device on #0, so call sdw_program_device_num()
> 9. sdw_program_device_num() loops until no devices are on #0; #1 and #2
>    are both reprogrammed; return from sdw_handle_slave_status()
> 10. PING: #1 and #2 both attached
> 11. cdns_update_slave_status() -> sdw_handle_slave_status()
> 12. #1 has changed UNATTACHED->ATTACHED, but we never got a PING with
>     #2 unattached, so its slave->status==ATTACHED, "it hasn't changed"
>     (wrong!)
> 
> Now, at step 10 the Cadence IP may have accumulated both UNATTACH and
> ATTACH flags, and perhaps it should be smarter about deciding what to
> report when there are multiple states. However, that's the behaviour of
> the Cadence IP; other IP may be different, so it's probably unwise to
> assume that the IP has "remembered" the UNATTACH state from before it
> was reprogrammed.
> 

After I wrote that, I remembered why I rejected that solution. We don't
know in what order multiple events happened, so it's not valid to report
a backlogged UNATTACH just because it's more "important". It's not
necessarily accurate.

I would worry about this:

Real-world order:

PING: UNATTACH
See device on #0 and program new device ID
PING: ATTACHED

But because of the delay in handling PINGs the software sees:

See device on #0 and program new device ID
PING: UNATTACH
PING: ATTACHED

That gives a false UNATTACH. We know the device is unattached if we
found it on #0, so setting its state to UNATTACHED ensures our recorded
state is accurate.
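
Roughly the idea, as a sketch (not the literal patch; was_on_dev0() is a
made-up stand-in for however the re-enumeration pass records which
peripherals it just reprogrammed):

#include <linux/soundwire/sdw.h>

static bool was_on_dev0(struct sdw_slave *slave);      /* hypothetical helper */

/*
 * Fold the re-enumerations into the PING snapshot so that the normal
 * status loop reports the detach instead of deciding
 * "ATTACHED -> ATTACHED, nothing changed".
 */
static void mark_reenumerated_unattached(struct sdw_bus *bus,
                                         enum sdw_slave_status status[])
{
        struct sdw_slave *slave;

        list_for_each_entry(slave, &bus->slaves, node) {
                if (was_on_dev0(slave))
                        status[slave->dev_num] = SDW_SLAVE_UNATTACHED;
        }
}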

>> The idea of detecting the devices that become unattached first - and
>> dealing with device0 later when they re-attach - was based on the fact
>> that synchronization takes time. The absolute minimum is 16 frames per
>> the SoundWire spec.
>>

My expectation was that it was there to ensure that the slave->dev was
marked UNATTACHED before trying to re-enumerate it. Either way, I think
it doesn't take into account that we don't know when the workqueue
function will run or how long it will take. There are two chained
workqueue functions to get to the point of handling a PING, so we can't
be sure we'll handle a PING showing the device unattaching before we see
it on #0.
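
To illustrate the deferral (purely illustrative, hypothetical names, not
the actual Cadence driver code): the status is snapshotted when the
interrupt fires but only consumed later from a workqueue, so by the time
sdw_handle_slave_status() runs the snapshot can already be out of date.

#include <linux/workqueue.h>
#include <linux/soundwire/sdw.h>

struct example_ctrl {                           /* hypothetical */
        struct sdw_bus bus;
        struct work_struct status_work;
        enum sdw_slave_status snapshot[SDW_MAX_DEVICES + 1];
};

static void example_status_work(struct work_struct *work)
{
        struct example_ctrl *ctrl =
                container_of(work, struct example_ctrl, status_work);

        /*
         * The bus may already have changed again (devices back on #0)
         * by the time this runs; we are processing an old snapshot.
         */
        sdw_handle_slave_status(&ctrl->bus, ctrl->snapshot);
}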

>> I don't see how testing for status[0] first in
>> sdw_handle_slave_status() helps; the value is taken at the same time as
>> status[1..11]. If you really want to take the latest information, we
>> should re-read the status from a new PING frame.
>>
>>
> 
> The point is to deal with unattached devices second, not first.
> If we do it first, we might afterwards find some more that have become
> unattached since the PING. Moving the unattach check second means we
> don't have to do it twice.
> 

To clarify: the point was that if we check for unattaches first, then
when sdw_program_device_num() marks other slaves as UNATTACHED, we would
have to run the UNATTACHED loop again to deal with those. If we check
for UNATTACHED second, one loop can pick up all of the new UNATTACHEDs.
There's no point in checking for UNATTACH first, since we can't rely on
the old PING showing the unattach before we see that device on #0.
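
So the rearranged flow looks roughly like this (a sketch of the shape
only, not the exact patch; it assumes the existing bus.c internals
sdw_program_device_num() and sdw_modify_slave_status()):

static int handle_status_sketch(struct sdw_bus *bus,
                                enum sdw_slave_status status[])
{
        struct sdw_slave *slave;

        /* 1) Enumerate anything sitting on device #0 first ... */
        if (status[0] == SDW_SLAVE_ATTACHED)
                sdw_program_device_num(bus);

        /*
         * ... and fold the resulting unattaches into status[]
         * (as in the earlier sketch).
         */

        /*
         * 2) A single UNATTACHED pass now catches both the unattaches
         *    reported in the original PING and those introduced by the
         *    re-enumeration above.
         */
        list_for_each_entry(slave, &bus->slaves, node) {
                if (slave->dev_num == 0)        /* never enumerated */
                        continue;
                if (status[slave->dev_num] == SDW_SLAVE_UNATTACHED &&
                    slave->status != SDW_SLAVE_UNATTACHED)
                        sdw_modify_slave_status(slave, SDW_SLAVE_UNATTACHED);
        }

        /* 3) Then handle ATTACHED/ALERT as before. */

        return 0;
}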

There is another possible implementation, where we only reprogram a
device on #0 if its slave->status == UNATTACHED. I didn't really like
that, partly because we'd be leaving devices on #0 instead of
enumerating them, but also because I worried that it might carry a risk
of race conditions. But if you prefer that option I can try it.
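
For reference, that alternative would be something like the following
inside the device #0 enumeration path (a sketch only, nothing I have
tried; sdw_assign_device_num() is assumed to be the existing bus.c
helper):

static int maybe_assign_device_num(struct sdw_slave *slave)
{
        /*
         * Only enumerate once the UNATTACH has already been processed;
         * otherwise leave the peripheral on #0 and pick it up on a
         * later PING.
         */
        if (slave->status != SDW_SLAVE_UNATTACHED)
                return -EAGAIN;

        return sdw_assign_device_num(slave);
}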
