Message-ID: <3423ea96-859d-4c4b-a9a7-e0d9c3c00727@norik.com>
Date: Mon, 25 Mar 2024 09:32:35 +0100
From: Andrej Picej <andrej.picej@...ik.com>
To: Jonathan Cameron <jic23@...nel.org>
Cc: haibo.chen@....com, linux-iio@...r.kernel.org,
devicetree@...r.kernel.org, lars@...afoo.de, shawnguo@...nel.org,
s.hauer@...gutronix.de, kernel@...gutronix.de, festevam@...il.com,
imx@...ts.linux.dev, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, robh@...nel.org,
krzysztof.kozlowski+dt@...aro.org, conor+dt@...nel.org,
upstream@...ts.phytec.de
Subject: Re: [PATCH 0/2] i.MX93 ADC calibration settings
Hi Jonathan,
On 24. 03. 24 14:55, Jonathan Cameron wrote:
> On Wed, 20 Mar 2024 11:04:04 +0100
> Andrej Picej <andrej.picej@...ik.com> wrote:
>
>> Hi all,
>>
>> we had some problems with failing ADC calibration on the i.MX93 boards.
>> Changing the default calibration settings fixed this. The board where
>> these patches are useful is not yet upstream but will be soon (hopefully).
>
> Tell us more. My initial instinct is that this shouldn't be board specific.
> What's the trade off we are making here? Time vs precision of calibration or
> something else? If these are set to a level by default that doesn't work
> for our board, maybe we should just change them for all devices?
>
So we have two different boards with the same SoC. On one, calibration
works with the default values; on the second, calibration fails, which
makes the ADC unusable. What the ADC lines measure does differ between
the boards, but the implementation is nothing out of the ordinary.
We tried different things, but the only thing that helped was using
different calibration properties. We also tried deferring the probe and
calibration until later in boot, and even to after boot, but it did not
help.
In the Reference Manual [1] (chapter 72.5.1) it is written:
> 4. Configure desired calibration settings (default values kept for highest accuracy maximum time).
So your assumption is correct: longer calibration time (more averaging
samples) -> higher precision. The default values aim for high accuracy.
Since we use an NRSMPL (Number of Averaging Samples) of 32 instead of
the default 512, we reduce the accuracy so the calibration values pass
the internally defined limits.
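For illustration, the kind of per-board override we have in mind would
look roughly like the fragment below. This is only a sketch: the node
label, the regulator, and the property name "nxp,calib-avg-samples" are
placeholders, not the actual binding names from this series.

```dts
/* Sketch only: "nxp,calib-avg-samples" is a hypothetical property name
 * standing in for whatever binding the series defines. 32 averaging
 * samples instead of the default 512 trades calibration accuracy for a
 * result that passes the internally defined limits on this board.
 */
&adc1 {
	vref-supply = <&reg_vref_1v8>;	/* board-specific reference */
	nxp,calib-avg-samples = <32>;
	status = "okay";
};
```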
I'm not sure that changing the default values is the right solution
here. We saw the default values work on one of the boards, and since
NXP kept these values adjustable, I think there is a reason behind it.
Note: when I say one of the boards, I mean one board variant: same
board version, but different HW.
Best regards,
Andrej
[1] i.MX 93 Applications Processor Reference Manual, Rev. 4, 12/2023