Message-ID: <8c22cb70b7c0acb6769e0c68540ab523@walle.cc>
Date:   Mon, 16 Mar 2020 18:15:48 +0100
From:   Michael Walle <michael@...le.cc>
To:     Vladimir Oltean <olteanv@...il.com>
Cc:     Mark Brown <broonie@...nel.org>, linux-spi@...r.kernel.org,
        lkml <linux-kernel@...r.kernel.org>,
        Shawn Guo <shawnguo@...nel.org>,
        Rob Herring <robh+dt@...nel.org>,
        Mark Rutland <mark.rutland@....com>,
        devicetree@...r.kernel.org, Esben Haabendal <eha@...f.com>,
        angelo@...am.it, andrew.smirnov@...il.com,
        "Gustavo A. R. Silva" <gustavo@...eddedor.com>,
        Wei Chen <weic@...dia.com>, Mohamed Hosny <mhosny@...dia.com>,
        peng.ma@....com
Subject: Re: [PATCH v3 06/12] spi: spi-fsl-dspi: Replace interruptible wait
 queue with a simple completion

On 2020-03-16 17:23, Vladimir Oltean wrote:
> On Mon, 16 Mar 2020 at 15:25, Michael Walle <michael@...le.cc> wrote:
>> 
>> On 2020-03-16 14:00, Vladimir Oltean wrote:
>> > On Mon, 16 Mar 2020 at 14:49, Mark Brown <broonie@...nel.org> wrote:
>> >>
>> >> On Mon, Mar 16, 2020 at 02:29:09PM +0200, Vladimir Oltean wrote:
>> >>
>> >> > Correct, the real problem is that I forgot to add a Fixes: tag for
>> >> > patch 5. I'll do that now.
>> >>
>> >> OK.  The series otherwise looked fine but I'll wait for testing.
>> >> Michael, if there are issues remaining it might be good to get some
>> >> Tested-bys for the patches prior to whatever's broken so we can get
>> >> those fixes in (but obviously verifying that is work so only if you
>> >> have time).
>> 
>> I'm just about to test it. While my former "cat /dev/mtdN > /dev/null"
>> test is working, I had the impression that it was slower, so I tried
>> to test it with dd and a known chunk size now... only to find out that
>> it is still not working:
>> 
>> # dmesg|grep spi
>> [    1.894891] spi-nor spi1.0: w25q128fw (16384 Kbytes)
>> ..
>> # time cat /dev/mtd0 > /dev/null
>> real    0m 30.73s
>> user    0m 0.00s
>> sys     0m 1.02s
>> # dd if=/dev/mtd0 of=/dev/null bs=64
>> 262144+0 records in
>> 262144+0 records out
>> # dd if=/dev/mtd0 of=/dev/null bs=64
>> 262144+0 records in
>> 262144+0 records out
>> # dd if=/dev/mtd0 of=/dev/null bs=64
>> dd: /dev/mtd0: Input/output error
> 
> I don't really have a SPI flash connected to DSPI on any LS1028A board.

I'm already debugging it again.

> Is this DMA or XSPI mode?

XSPI mode. DMA mode looked good so far.

>> 
>> I also wanted to test how it behaves if there are multiple processes
>> accessing the /dev/mtdN device. I haven't found the time to dig into
>> the call chain to see if there is any locking. What happens if
>> transfer_one_message() is called twice at the same time from two
>> different processes?
>> 
> 
> There is a mutex on the SPI bus, and therefore all variants of the
> .transfer() call are operating under this lock protection, which
> simplifies things quite a bit.

Ok, thanks.
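
For reference, a simplified sketch of what that serialization looks like
in the SPI core, paraphrasing spi_sync() from drivers/spi/spi.c (exact
names and structure vary across kernel versions):

int spi_sync(struct spi_device *spi, struct spi_message *message)
{
	int ret;

	/* One mutex per controller: a second caller (e.g. another
	 * process reading /dev/mtd0) blocks here, so the driver's
	 * transfer_one_message() is never entered twice at once. */
	mutex_lock(&spi->controller->bus_lock_mutex);
	ret = __spi_sync(spi, message);
	mutex_unlock(&spi->controller->bus_lock_mutex);

	return ret;
}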

-michael

>> >
>> > This time I verified with a protocol analyzer all transfer lengths
>> > from 1 all the way to 256, with this script:
>> >
>> > #!/bin/bash
>> >
>> > buf=''
>> >
>> > for i in $(seq 0 255); do
>> >         buf="${buf}\x$(printf '%02x' ${i})"
>> >         spidev_test --device /dev/spidev2.0 --bpw 8 --cpha --speed 5000000 -p "${buf}"
>> > done
>> >
>> > It looked fine as far as I could tell, and also the problems
>> > surrounding Ctrl-C are no longer present. Nonetheless it would be good
>> > if Michael could confirm, but I know that he's very busy too so it's
>> > understandable if he can no longer spend time on this.
>> 
>> I'm working on it ;)
>> 
>> -michael
> 
> Thanks,
> -Vladimir
