Message-ID: <bce48dba-c163-4fe7-50c4-984de41488c2@alliedtelesis.co.nz>
Date:   Tue, 7 Dec 2021 22:24:13 +0000
From:   Chris Packham <Chris.Packham@...iedtelesis.co.nz>
To:     "mbizon@...ebox.fr" <mbizon@...ebox.fr>,
        "wsa@...nel.org" <wsa@...nel.org>
CC:     "linux-i2c@...r.kernel.org" <linux-i2c@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] i2c: mpc: Use atomic read and fix break condition


On 7/12/21 11:13 pm, Maxime Bizon wrote:
> On Tue, 2021-12-07 at 17:21 +1300, Chris Packham wrote:
>
>> Can you give this a test on your setup. I've tried it on the setup
>> where I had the original problem that led to 4a8ac5e45cda and it
>> seems OK so far (I'll leave my test running overnight).
> Tested-by: Maxime Bizon <mbizon@...ebox.fr>
My testing overnight also looks good.
> Small reservation though, it does not seem to be understood why this
> polling is needed.
>
> Reading the driver history, the theory is that the controller will
> trigger an interrupt at the end of the transfer, just after the last
> SCL cycle, but irrespective of whether SCL actually goes high, which
> it won't if a slave "stretches" the clock until it's ready to answer.
>
> Supposedly, when that happens, the CSR_MCF bit would be 0 at
> interrupt time, meaning the bus is busy, and we have to poll until it
> goes to 1, meaning the slave has released SCL.

I share your reservation. The original re-read pre-dates git (I do 
recall looking in the historical repo as well and finding nothing 
enlightening). All I can say is that the original code thought I2C_SR 
needed some time to "stabilize".

Both MIF and MCF should be set at the falling edge of the ninth clock. 
In theory we could end up with MIF=1, MCF=0 if MAL is set (in which 
case we'd hit the 100us timeout in the poll). But I see no evidence of 
that actually happening (and I have no idea what "arbitration lost" 
means w.r.t. i2c).
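
To put numbers on that, here's how I read the relevant I2C_SR bits 
(bit values from memory of the reference manual, and mpc_xfer_done() 
is just a made-up helper to show the check, so treat this as an 
illustrative sketch rather than a quote of the driver):

#include <linux/types.h>

/* I2C_SR status bits (sketch; names follow the driver's CSR_* defines) */
#define CSR_MCF  0x80	/* transfer complete, SCL released */
#define CSR_MAL  0x10	/* arbitration lost */
#define CSR_MIF  0x02	/* interrupt pending */

/*
 * Normal end of transfer: MIF=1 and MCF=1 together at the falling
 * edge of the ninth clock. The theoretical bad case is MIF=1, MCF=0
 * with MAL set, which the MCF poll can only resolve by hitting its
 * 100us timeout.
 */
static bool mpc_xfer_done(u8 status)
{
	return (status & CSR_MIF) && (status & CSR_MCF) &&
	       !(status & CSR_MAL);
}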

The root interrupts for I2C1 and I2C2 are shared, so it may be possible 
for MIF to be in the process of being set for I2C1 while the actual 
MPIC interrupt is raised for a different transfer on I2C2. The ISR will 
look at both I2C adapters and attempt to handle the interrupt if MIF is 
set. I'd expect a spurious interrupt to be counted in this case, 
because by the time I2C1 raises the interrupt with the MPIC we'd 
already have serviced it (but maybe the fiddling with MEIN prevents 
that).
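
Something like the following is what I have in mind (a stripped-down 
sketch, not the actual driver ISR; the struct and field names are made 
up for illustration, and the register offset/bit value are from 
memory):

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/wait.h>

#define MPC_I2C_SR 0x0c	/* status register offset */
#define CSR_MIF    0x02	/* interrupt pending */

struct mpc_i2c_sketch {		/* stand-in for the per-adapter state */
	void __iomem *base;
	wait_queue_head_t waitq;
	u8 status;
};

static irqreturn_t mpc_i2c_isr_sketch(int irq, void *dev_id)
{
	struct mpc_i2c_sketch *i2c = dev_id;
	u8 sr = readb(i2c->base + MPC_I2C_SR);

	/*
	 * Not ours: the shared root interrupt fired for the other
	 * adapter, or this MIF was already serviced on an earlier
	 * pass. Returning IRQ_NONE is what gets the interrupt counted
	 * as spurious if nobody else claims it either.
	 */
	if (!(sr & CSR_MIF))
		return IRQ_NONE;

	writeb(0, i2c->base + MPC_I2C_SR);	/* ack/clear MIF */
	i2c->status = sr;
	wake_up(&i2c->waitq);
	return IRQ_HANDLED;
}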

My best guess is that even if the host adapter has sent the ninth 
clock, it doesn't mean that the remote device will release SCL (e.g. in 
the case of clock stretching, or my slightly dodgy hardware). So I 
think the act of polling for MCF (or, prior to this, what was 
effectively a udelay(100)) allows the remote device a bit of time to 
release SCL.
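
In code terms the poll boils down to something like this (a sketch 
using readb_poll_timeout_atomic() from linux/iopoll.h; the register 
offset and bit value are from memory and wait_for_mcf() is a made-up 
name, so don't read it as the exact patch):

#include <linux/iopoll.h>

#define MPC_I2C_SR 0x0c	/* status register offset */
#define CSR_MCF    0x80	/* transfer complete, SCL released */

/*
 * Give the remote device up to 100us to release SCL after the ninth
 * clock: re-read I2C_SR every microsecond until MCF is set. Returns 0
 * on success or -ETIMEDOUT if the slave never let go of SCL.
 */
static int wait_for_mcf(void __iomem *base)
{
	u8 status;

	return readb_poll_timeout_atomic(base + MPC_I2C_SR, status,
					 status & CSR_MCF, 1, 100);
}

Unlike an unconditional udelay(100), this returns as soon as MCF is 
set, so the common case doesn't pay the full delay.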

> I have no slave that does clock stretching on my board, so I cannot
> test the theory. On my mpc8347 device, with the i2c clock speed set
> to 90kHz, I've never seen a case where MCF was 0 at interrupt time.
>
> For the i2c experts here, is 100us enough in that case? I could not
> find any maximum stretch time in the i2c specification.

I don't know that there is a maximum clock stretch time (we certainly 
know there are misbehaving devices that hold SCL low forever). The 
SMBus protocol adds some timeouts (the clock-low timeout tTIMEOUT is 
25-35ms), but as far as I know i2c says nothing about how long a remote 
device can hold SCL.

> My CPU user manual is IMO vague on this precise topic, hopefully an NXP
> knowledgeable employee will read this and enlighten us.

That would be nice (but there is a reason I've ended up being listed as 
the maintainer for this driver).

>
> Thanks,
>
