Date:   Tue, 15 Jan 2019 21:58:55 +0100
From:   Martin Sperl <kernel@...tin.sperl.org>
To:     Mark Brown <broonie@...nel.org>
Cc:     Jon Hunter <jonathanh@...dia.com>,
        linux-tegra <linux-tegra@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        linux-spi@...r.kernel.org
Subject: Re: Regression: spi: core: avoid waking pump thread from spi_sync instead run teardown delayed


> On 15.01.2019, at 20:26, Mark Brown <broonie@...nel.org> wrote:
> 
>> On Tue, Jan 15, 2019 at 06:39:27PM +0100, kernel@...tin.sperl.org wrote:
>> 
>> Is it possible that the specific flash is not using the “normal”
>> spi_pump_messages, but the spi_controller_mem_ops operations?
> 
> Right, that's my best guess at the minute as well.
> 
>> Maybe we are missing the teardown in that driver or something is
>> changing flags there.
> 
>> grepping a bit:
> 
>> I see spi_mem_access_start calling spi_flush_queue, which calls
>> __spi_pump_messages, which - if there is no transfer queued - will
>> trigger a delayed teardown, while it maybe should not be doing that.
> 
> If nothing else it's inefficient.
> 
>> Maybe spi_mem_access_end needs a call to spi_flush_queue as well?
> 
> Hrm, or needs to schedule the queue at any rate (though this will only
> have an impact in the fairly unusual case where there's something
> sharing the bus with a flash).
> 
>> Unfortunately this is theoretical work and quite a lot of guesswork
>> without an actual device available for testing...
> 
> Indeed.
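
Concretely, the spi_mem_access_end() change I was suggesting
above would look roughly like this (untested and written from
memory of spi-mem.c, so take the details with a grain of salt):

static void spi_mem_access_end(struct spi_mem *mem)
{
        struct spi_controller *ctlr = mem->spi->controller;

        mutex_unlock(&ctlr->io_mutex);

        if (ctlr->auto_runtime_pm)
                pm_runtime_put(&ctlr->dev);

        /* kick the queue so that messages submitted while the
         * mem-ops path owned the bus get processed - and so the
         * teardown accounting runs through the normal path
         */
        spi_flush_queue(ctlr);
}

Or, per Mark's point, just schedule the pump here instead of
flushing synchronously, so we do not pay for the flush in the
common case where nothing shares the bus with the flash.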

Maybe a bigger change that reduces the complexity of
the state machine would solve that problem and
simplify the code as well...

I may find some time over the weekend if no solution
has been found by then.

The way I envision it, there would be a “state”
expressed as a level (0=shutdown, 1=hw enabled,
2=in pump, 3=in transfer, 4=in hw-mode, ...) plus
a completion to allow waking the shutdown thread
(thereby avoiding the busy-wait loop we have now).
This would replace the current idling, busy, and
running flags.
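
A rough sketch of what I mean (all names are invented for
illustration, not actual spi.h fields):

enum spi_ctlr_state {
        SPI_CTLR_SHUTDOWN = 0,  /* hw off, nothing running */
        SPI_CTLR_HW_ENABLED,    /* hw prepared, pump idle */
        SPI_CTLR_IN_PUMP,       /* message pump executing */
        SPI_CTLR_IN_TRANSFER,   /* transfer in flight */
        SPI_CTLR_IN_HW_MODE,    /* mem-ops/direct hw access */
};

/* in struct spi_controller, replacing idling/busy/running: */
        enum spi_ctlr_state     state;
        struct completion       idle;   /* completed whenever
                                         * state drops back to
                                         * SPI_CTLR_HW_ENABLED */

/* the delayed teardown then sleeps instead of busy-waiting: */
static void spi_teardown(struct spi_controller *ctlr)
{
        while (READ_ONCE(ctlr->state) > SPI_CTLR_HW_ENABLED)
                wait_for_completion(&ctlr->idle);

        /* now safe to unprepare/disable the hardware */
}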

Drawback: it is invasive, but let us see what it
really looks like...

Martin

