Message-ID: <CAHp75Ve=CU8ecXk5sgkHPJbYA_K+sa+Lyys+cdpCm=QHOw2ytg@mail.gmail.com>
Date: Wed, 19 Nov 2025 11:27:45 +0200
From: Andy Shevchenko <andy.shevchenko@...il.com>
To: Daniel Lezcano <daniel.lezcano@...aro.org>
Cc: jic23@...nel.org, dlechner@...libre.com, nuno.sa@...log.com,
andy@...nel.org, robh@...nel.org, conor+dt@...nel.org, krzk+dt@...nel.org,
linux-iio@...r.kernel.org, s32@....com, linux-kernel@...r.kernel.org,
devicetree@...r.kernel.org, chester62515@...il.com, mbrugger@...e.com,
ghennadi.procopciuc@....nxp.com, vkoul@...nel.org
Subject: Re: [PATCH v6 2/2] iio: adc: Add the NXP SAR ADC support for the
s32g2/3 platforms
On Tue, Nov 18, 2025 at 10:34 PM Daniel Lezcano
<daniel.lezcano@...aro.org> wrote:
Thanks for the update; my comments are below (but we are almost done).
> The NXP S32G2 and S32G3 platforms integrate a successive approximation
> register (SAR) ADC. Two instances are available, each providing 8
> multiplexed input channels with 12-bit resolution. The conversion rate
> is up to 1 Msps depending on the configuration and sampling window.
>
> The SAR ADC supports raw, buffer, and trigger modes. It can operate
> in both single-shot and continuous conversion modes, with optional
> hardware triggering through the cross-trigger unit (CTU) or external
> events. An internal prescaler allows adjusting the sampling clock,
> while per-channel programmable sampling times provide fine-grained
> trade-offs between accuracy and latency. Automatic calibration is
> performed at probe time to minimize offset and gain errors.
>
> The driver is derived from the BSP implementation and has been partly
> rewritten to comply with upstream requirements. For this reason, all
> contributors are listed as co-developers, while the author refers to
> the initial BSP driver file creator.
I would move this paragraph closer to the tag block. FWIW, we also
have an Originally-by: tag in case it suits better (usually used when
the original code has been rewritten by more than ~67%).
> All modes have been validated on the S32G274-RDB2 platform using an
> externally generated square wave captured by the ADC. Tests covered
> buffered streaming via IIO, trigger synchronization, and accuracy
> verification against a precision laboratory signal source.
>
> One potential scenario, not detected during testing, is that in some
> corner cases the DMA may already have been armed for the next
> transfer, which can lead dmaengine_tx_status() to return an incorrect
> residue. The callback_result() operation—intended to supply the
> residue directly and eliminate the need to call
> dmaengine_tx_status()—also does not work. Attempting to use
> dmaengine_pause() and dmaengine_resume() to prevent the residue from
> being updated does not work either.
>
> This potential scenario should apply to any driver using cyclic DMA.
> However, no current driver actually handles this case, and they all rely
> on the same acquisition routine (e.g., the STM32 implementation).
> The NXP SAR acquisition routine has been used in production for several
> years, which is a good indication of its robustness.
>
> As IIO is implementing the cyclic DMA support API, it is not worth
> doing more spins on the current routine, as it will go away when the
> new API becomes available.
...
> +#define NXP_SAR_ADC_EOC_CH(c) BIT((c) % 32)
Do you expect "c" to be bigger than 31? In which circumstances?
...
> +#define NXP_SAR_ADC_CTR_INPSAMP_MIN 0x08
> +#define NXP_SAR_ADC_CTR_INPSAMP_MAX 0xFF
Keep the style consistent (I think you want lowercase hexadecimal
letters in the value).
...
> +/* Other field define */
> +#define NXP_SAR_ADC_CONV_TIMEOUT_MS 100
> +#define NXP_SAR_ADC_CONV_TIMEOUT_JF (msecs_to_jiffies(NXP_SAR_ADC_CONV_TIMEOUT_MS))
I don't see any other use of the first definition. Dropping _JF (we
usually assume timeouts without a unit suffix are in "ticks") and
passing 100 directly to msecs_to_jiffies() will make it clear enough.
...
> +static bool nxp_sar_adc_set_enabled(struct nxp_sar_adc *info, bool enable)
> +{
> + u32 mcr;
> + bool pwdn;
> +
> + mcr = readl(NXP_SAR_ADC_MCR(info->regs));
> + /* Return the current state. */
> + pwdn = FIELD_GET(NXP_SAR_ADC_MCR_PWDN, mcr);
The comment is a bit odd. I think you want to say "Save the current
state and return it later" or something like this.
> + /* When the enabled flag is not set, we set the power down bit */
> + FIELD_MODIFY(NXP_SAR_ADC_MCR_PWDN, &mcr, !enable);
> +
> + writel(mcr, NXP_SAR_ADC_MCR(info->regs));
> +
> + /*
> + * Ensure there are at least three cycles between the
> + * configuration of NCMR and the setting of NSTART.
> + */
> + if (enable)
> + ndelay(div64_u64(NSEC_PER_SEC, clk_get_rate(info->clk) * 3));
I'm wondering how low the clock rate can be. With a low enough clock
rate this becomes a 100% CPU busy-loop, possibly in atomic context (is
that the case here?) without even the possibility to schedule.
> + return pwdn;
> +}
...
> +static int nxp_sar_adc_calibration(struct nxp_sar_adc *info)
> +{
> + int ret;
> +
> + /* Calibration works only if the adc is powered up. */
ADC
> + nxp_sar_adc_enable(info);
> +
> + /* The calibration operation starts. */
> + nxp_sar_adc_calibration_start(info->regs);
> +
> + ret = nxp_sar_adc_calibration_wait(info->regs);
> +
> + /*
> + * Calibration works only if the adc is powered up. However
ADC
> + * the calibration is called from the probe function where the
> + * iio is not enabled, so we disable after the calibration.
> + */
> + nxp_sar_adc_disable(info);
> +
> + return ret;
> +}
...
> +static int nxp_sar_adc_read_data(struct nxp_sar_adc *info, unsigned int chan)
> +{
> + u32 ceocfr, cdr;
> +
> + ceocfr = readl(NXP_SAR_ADC_CEOCFR0(info->regs));
> + /* FIELD_GET() can not be used here because EOC_CH is not constant */
> + if (!(NXP_SAR_ADC_EOC_CH(chan) & ceocfr))
> + return -EIO;
A local [nxp_sar_adc_]field_get() may be defined and used. There is a
series pending to bring field_get() to bitfield.h in the next release.
> + cdr = readl(NXP_SAR_ADC_CDR(info->regs, chan));
> + if (!(FIELD_GET(NXP_SAR_ADC_CDR_VALID, cdr)))
> + return -EIO;
> +
> + return FIELD_GET(NXP_SAR_ADC_CDR_CDATA_MASK, cdr);
> +}
...
> +static irqreturn_t nxp_sar_adc_isr(int irq, void *dev_id)
> +{
> + struct iio_dev *indio_dev = (struct iio_dev *)dev_id;
Unneeded explicit casting.
> + struct nxp_sar_adc *info = iio_priv(indio_dev);
> + int isr;
> +
> + isr = readl(NXP_SAR_ADC_ISR(info->regs));
> + if (!(FIELD_GET(NXP_SAR_ADC_ISR_ECH, isr)))
> + return IRQ_NONE;
> +
> + if (iio_buffer_enabled(indio_dev))
> + nxp_sar_adc_isr_buffer(indio_dev);
> + else
> + nxp_sar_adc_isr_read_raw(indio_dev);
> +
> + writel(NXP_SAR_ADC_ISR_ECH, NXP_SAR_ADC_ISR(info->regs));
> +
> + return IRQ_HANDLED;
> +}
> +static void nxp_sar_adc_channels_disable(struct nxp_sar_adc *info, u32 mask)
> +{
> + u32 ncmr, cimr;
> +
> + ncmr = readl(NXP_SAR_ADC_NCMR0(info->regs));
> + cimr = readl(NXP_SAR_ADC_CIMR0(info->regs));
> +
> + /* FIELD_MODIFY() can not be used because the mask is not constant */
> + ncmr &= ~mask;
> + cimr &= ~mask;
> +
> + writel(ncmr, NXP_SAR_ADC_NCMR0(info->regs));
> + writel(cimr, NXP_SAR_ADC_CIMR0(info->regs));
> +}
...
> +static void nxp_sar_adc_stop_conversion(struct nxp_sar_adc *info)
> +{
> + u32 mcr;
> +
> + mcr = readl(NXP_SAR_ADC_MCR(info->regs));
> +
> + FIELD_MODIFY(NXP_SAR_ADC_MCR_NSTART, &mcr, 0x0);
> +
> + writel(mcr, NXP_SAR_ADC_MCR(info->regs));
> +
> + /*
> + * On disable, we have to wait for the transaction to finish.
> + * ADC does not abort the transaction if a chain conversion
> + * is in progress.
> + * Wait for the worst case scenario - 80 ADC clk cycles.
> + */
> + ndelay(div64_u64(NSEC_PER_SEC, clk_get_rate(info->clk)) * 80);
Same comment / question about the possible too long delays.
> +}
> +
> +static int nxp_sar_adc_start_conversion(struct nxp_sar_adc *info, bool raw)
> +{
> + u32 mcr;
> +
> + mcr = readl(NXP_SAR_ADC_MCR(info->regs));
> +
> + FIELD_MODIFY(NXP_SAR_ADC_MCR_NSTART, &mcr, 0x1);
> + FIELD_MODIFY(NXP_SAR_ADC_MCR_MODE, &mcr, !raw);
	raw ? 0 : 1
is easier to understand (it will be optimised by the compiler anyway;
no branches will be added).
> +
> + writel(mcr, NXP_SAR_ADC_MCR(info->regs));
> +
> + return 0;
> +}
> +
> +static int nxp_sar_adc_read_channel(struct nxp_sar_adc *info, int channel)
> +{
> + int ret;
> +
> + info->current_channel = channel;
> + nxp_sar_adc_channels_enable(info, BIT(channel));
> + nxp_sar_adc_irq_cfg(info, true);
> + nxp_sar_adc_enable(info);
> +
> + reinit_completion(&info->completion);
> + ret = nxp_sar_adc_start_conversion(info, true);
> + if (ret < 0)
> + goto out_disable;
> + ret = wait_for_completion_interruptible_timeout(&info->completion,
> + NXP_SAR_ADC_CONV_TIMEOUT_JF);
> + if (ret == 0)
> + ret = -ETIMEDOUT;
> + if (ret > 0)
> + ret = 0;
Since semantically it's not the same ret, I would write the above as

	if (!wait_for_completion...(...))
		ret = -ETIMEDOUT;

Note that no "else" branch is needed in this case.
> + nxp_sar_adc_stop_conversion(info);
> +
> +out_disable:
> + nxp_sar_adc_channels_disable(info, BIT(channel));
> + nxp_sar_adc_irq_cfg(info, false);
> + nxp_sar_adc_disable(info);
> +
> + return ret;
> +}
> + /*
> + * Configures the sample period duration in terms of the SAR
> + * controller clock. The minimum acceptable value is 8.
> + * Configuring it to a value lower than 8 sets the sample period
> + * to 8 cycles. We read the clock value and divide by the
> + * sampling timing which gives us the number of cycles expected.
> + * The value is 8 bits wide, consequently the max value is 0xFF.
8-bit wide
> + */
...
> + /*
> + * DMA in some corner cases might have already be charged for
> + * the next transfer. Potentially there can be a race where
> + * the residue changes while the dma engine updates the
> + * buffer. That could be handled by using the
> + * callback_result() instead of callback() because the residue
> + * will be passed as parameter to the function. However this
as a parameter
> + * new callback is pretty new and the backend does not update
> + * the residue. So let's stick to the version other drivers do
> + * which has proven running well in production since several
> + * years.
> + */
...
> + /*
> + * iio_push_to_buffers_with_ts should not be
iio_push_to_buffers_with_ts()
> + * called with dma_samples as parameter. The samples
> + * will be smashed if timestamp is enabled.
> + */
...
> + nxp_sar_adc_channels_disable(info, *indio_dev->active_scan_mask);
Wondering why this can't take a pointer to a mask.
...
> + info = iio_priv(indio_dev);
> +
Unneeded blank line.
> + info->vref_mV = data->vref_mV;
...
> + ret = devm_request_irq(dev, irq, nxp_sar_adc_isr, 0,
> + dev_name(dev), indio_dev);
> + if (ret < 0)
> + return dev_err_probe(dev, ret, "failed requesting irq, irq = %d\n", irq);
No error code duplication in the message, please.
...
> + spin_lock_init(&info->lock);
Shouldn't this be _before_ the IRQ registration? Theoretically the
IRQ may fire right after registration (yes, it might be spurious, but
the handler and the data it touches should be ready for that).
...
> + ret = nxp_sar_adc_calibration(info);
> + if (ret)
> + dev_err_probe(dev, ret, "Calibration failed: %d\n", ret);
No error code duplication in the message, please.
...
> +static int nxp_sar_adc_suspend(struct device *dev)
> +{
> + struct iio_dev *indio_dev = dev_get_drvdata(dev);
> + struct nxp_sar_adc *info = iio_priv(indio_dev);
Can be one-lined
struct nxp_sar_adc *info = iio_priv(dev_get_drvdata(dev));
> + info->pwdn = nxp_sar_adc_disable(info);
> + info->inpsamp = nxp_sar_adc_conversion_timing_get(info);
> +
> + clk_disable_unprepare(info->clk);
> +
> + return 0;
> +}
> +
> +static int nxp_sar_adc_resume(struct device *dev)
> +{
> + struct iio_dev *indio_dev = dev_get_drvdata(dev);
> + struct nxp_sar_adc *info = iio_priv(indio_dev);
Ditto.
> + int ret;
> +
> + ret = clk_prepare_enable(info->clk);
> + if (ret)
> + return ret;
> +
> + nxp_sar_adc_conversion_timing_set(info, info->inpsamp);
> +
> + if (!info->pwdn)
> + nxp_sar_adc_enable(info);
> +
> + return 0;
> +}
...
> +static const struct nxp_sar_adc_data s32g2_sar_adc_data = {
> + .vref_mV = 1800,
> + .model = "s32g2-sar-adc"
Leave a trailing comma; it's not a terminator.
> +};
--
With Best Regards,
Andy Shevchenko