Message-ID: <3e8b18e6-673c-4ee6-a56b-08641c605efc@os.amperecomputing.com>
Date: Fri, 1 Dec 2023 13:31:22 +0700
From: Quan Nguyen <quan@...amperecomputing.com>
To: Jeremy Kerr <jk@...econstruct.com.au>,
Matt Johnston <matt@...econstruct.com.au>,
"David S . Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, openbmc@...ts.ozlabs.org,
Open Source Submission <patches@...erecomputing.com>
Cc: Phong Vo <phong@...amperecomputing.com>,
Thang Nguyen <thang@...amperecomputing.com>,
Dung Cao <dung@...amperecomputing.com>
Subject: Re: [PATCH] mctp i2c: Requeue the packet when arbitration is lost
On 30/11/2023 16:40, Jeremy Kerr wrote:
> Hi Quan,
>
>> With this commit, we see the packets go through cleanly just by
>> requeueing them at the MCTP layer, eliminating the need to retry at
>> the PLDM layer, which would take more time.
>
> It's certainly possible that this tries harder to send MCTP packets,
> but it's just duplicating the retry mechanism already present in the
> i2c core.
>
> Why do we need another retry mechanism here? How is the i2c core retry
> mechanism not sufficient?
>
> It would seem that you can get the same behaviour by adjusting the
> retries/timeout limits in the core code, which has the advantage of
> applying to all arbitration loss events on the i2c bus, not just for
> MCTP tx.
>
Hi Jeremy,
As per [1], __i2c_transfer() retries the transfer up to adap->retries
times back to back (with no delay between attempts), bounded by the
time specified in adap->timeout.
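For reference, the loop in [1] looks roughly like this (paraphrased,
not copied verbatim):

	orig_jiffies = jiffies;
	for (ret = 0, try = 0; try <= adap->retries; try++) {
		ret = adap->algo->master_xfer(adap, msgs, num);
		if (ret != -EAGAIN)	/* only arbitration loss is retried */
			break;
		if (time_after(jiffies, orig_jiffies + adap->timeout))
			break;
	}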
From my observation, once arbitration is lost, an immediate retry is
most likely to lose again, because the controller that won the
arbitration is usually still using the bus. This is especially true
for upper-layer protocol messages such as PLDM or SPDM, which are far
larger than a single SMBus transfer and usually end up queueing
several MCTP packets at a time. If the packet is instead requeued at
the MCTP layer, the transmit path has to wait to re-acquire the bus
lock before sending it again, which gives a much better chance of
winning arbitration than retrying immediately inside the i2c core.
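To make the idea concrete, this is roughly what the patch does in the
tx path (a loose sketch; the function and field names here are
approximate, not the exact code):

	static void mctp_i2c_tx_one(struct mctp_i2c_dev *midev,
				    struct sk_buff *skb, struct i2c_msg *msg)
	{
		int rc = __i2c_transfer(midev->adapter, msg, 1);

		if (rc == -EAGAIN) {
			/* Lost arbitration: put the packet back at the head
			 * of the tx queue, so the tx thread has to re-acquire
			 * the bus lock before the next attempt. */
			skb_queue_head(&midev->tx_queue, skb);
			return;
		}

		consume_skb(skb);	/* sent, or failed for another reason */
	}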
Another reason is that, since i2c is widely used by many other
applications, changing the retry algorithm inside the i2c core seems
impractical.
Another point is that the default values of these two parameters vary
from controller to controller; I'm not sure why yet:
+ i2c-aspeed: retries=0 timeout=1 sec [2]
+ i2c-cadence: retries=3 timeout=1 sec [3]
+ i2c-designware: retries=3 timeout=1 sec [4], [5]
+ i2c-emev2: retries=5 timeout=1 sec [6]
+ ...
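For reference, these defaults come from fields the bus driver sets on
its struct i2c_adapter at probe time, e.g. (illustrative values only,
not taken from any particular driver):

	adap->retries = 3;	/* extra attempts __i2c_transfer() makes on -EAGAIN */
	adap->timeout = HZ;	/* time bound for that retry loop, in jiffies */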
Unfortunately, our case uses i2c-aspeed, which sets retries=0 (see
[2]), so we get only a single shot. I'm not sure why i2c-aspeed chose
retries=0 by default, but I assume there is a reason behind it.
And yes, I agree that, as per [7], these two parameters can be
adjusted from userspace via ioctl() if the user wishes. But honestly,
forcing users to tune these parameters does not look like a good
approach to me.
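For completeness, this is the kind of adjustment [7] allows (the bus
number and values below are just examples):

	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/i2c-dev.h>

	int main(void)
	{
		int fd = open("/dev/i2c-1", O_RDWR);	/* example bus */

		if (fd < 0)
			return 1;

		ioctl(fd, I2C_RETRIES, 3);	/* adapter->retries */
		ioctl(fd, I2C_TIMEOUT, 100);	/* adapter->timeout, in 10 ms units (1 sec) */
		close(fd);
		return 0;
	}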
Rather than pushing that tuning onto users, requeueing the packet in
the MCTP layer looks like the better choice, and that is what we
confirmed in our case.
Thanks,
- Quan
[1]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/i2c/i2c-core-base.c#n2223
[2]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/i2c/busses/i2c-aspeed.c#n1030
[3]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/i2c/busses/i2c-cadence.c#n1322
[4]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/i2c/busses/i2c-designware-master.c#n1030
[5]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/i2c/busses/i2c-designware-slave.c#n258
[6]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/i2c/busses/i2c-emev2.c#n385
[7]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/i2c/i2c-dev.c#n478