Message-ID: <jhjtuzj2mn1.mognet@arm.com>
Date: Wed, 10 Jun 2020 12:20:02 +0100
From: Valentin Schneider <valentin.schneider@....com>
To: Benjamin Gaignard <benjamin.gaignard@...com>
Cc: hugues.fruchet@...com, mchehab@...nel.org,
mcoquelin.stm32@...il.com, alexandre.torgue@...com,
linux-media@...r.kernel.org,
linux-stm32@...md-mailman.stormreply.com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
vincent.guittot@...aro.org, rjw@...ysocki.net
Subject: Re: [PATCH v5 2/3] media: stm32-dcmi: Set minimum cpufreq requirement
Hi Benjamin,
On 09/06/20 12:58, Benjamin Gaignard wrote:
> +static void dcmi_set_min_frequency(struct stm32_dcmi *dcmi, s32 freq)
> +{
> +	struct irq_affinity_notify *notify = &dcmi->notify;
> +	struct cpumask clear;
> +
> +	mutex_lock(&dcmi->freq_lock);
> +	dcmi->targeted_frequency = freq;
> +	mutex_unlock(&dcmi->freq_lock);
> +
> +	if (freq) {
> +		dcmi_irq_notifier_notify(notify,
> +					 irq_get_affinity_mask(dcmi->irq));
> +	} else {
> +		cpumask_clear(&clear);
> +		dcmi_irq_notifier_notify(notify, &clear);
> +	}
> +}
> +
> +
IIUC the changes in this version, you would now need a call to
freq_qos_update_request() in the notifier: you can now go through the
notifier callback with targeted_frequency == FREQ_QOS_MIN_DEFAULT_VALUE
yet still add CPUs to the boosted mask (there's a rough sketch of what I
mean below the diff). I think you were pretty close to a decent solution
in your previous version, with some notifier registration movement. This
is what I had in mind (the diff is against v4; ofc absolutely untested!):
---
diff --git a/drivers/media/platform/stm32/stm32-dcmi.c b/drivers/media/platform/stm32/stm32-dcmi.c
index c2389776a958..cc147de6ea70 100644
--- a/drivers/media/platform/stm32/stm32-dcmi.c
+++ b/drivers/media/platform/stm32/stm32-dcmi.c
@@ -801,15 +801,22 @@ static void dcmi_set_min_frequency(struct stm32_dcmi *dcmi, s32 freq)
 	struct irq_affinity_notify *notify = &dcmi->notify;

 	if (freq) {
+		/*
+		 * Register the notifier before doing any change, so the
+		 * callback can be queued if an affinity change happens *while*
+		 * we are requesting the boosts.
+		 */
+		irq_set_affinity_notifier(dcmi->irq, notify);
 		dcmi_irq_notifier_notify(notify,
 					 irq_get_affinity_mask(dcmi->irq));
-
-		notify->notify = dcmi_irq_notifier_notify;
-		notify->release = dcmi_irq_notifier_release;
-		irq_set_affinity_notifier(dcmi->irq, notify);
 	} else {
 		struct cpumask clear;

+		/*
+		 * Unregister the notifier before clearing the boost requests,
+		 * as we don't want to boost again if an affinity change
+		 * happens *while* we are clearing the requests.
+		 */
 		irq_set_affinity_notifier(dcmi->irq, NULL);
 		cpumask_clear(&clear);
 		dcmi_irq_notifier_notify(notify, &clear);
@@ -2032,6 +2039,9 @@ static int dcmi_probe(struct platform_device *pdev)
 	if (!alloc_cpumask_var(&dcmi->boosted, GFP_KERNEL))
 		return -ENODEV;

+	dcmi->notify.notify = dcmi_irq_notifier_notify;
+	dcmi->notify.release = dcmi_irq_notifier_release;
+
 	q = &dcmi->queue;

 	dcmi->v4l2_dev.mdev = &dcmi->mdev;
---
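
To make the freq_qos_update_request() point more concrete, I'd expect the
notifier callback to end up looking roughly like the below. This is only a
sketch and just as untested as the diff; I'm guessing at the per-CPU request
variable name (dcmi_qos_req), at recovering the dcmi pointer via
container_of(), and at the requests having been freq_qos_add_request()'d
against each CPU's policy at probe time, since none of that shows up in the
hunks above:

static DEFINE_PER_CPU(struct freq_qos_request, dcmi_qos_req); /* hypothetical name */

static void dcmi_irq_notifier_notify(struct irq_affinity_notify *notify,
				     const struct cpumask *mask)
{
	struct stm32_dcmi *dcmi = container_of(notify, struct stm32_dcmi, notify);
	int cpu;

	mutex_lock(&dcmi->freq_lock);

	/* CPUs that left the affinity mask go back to the default minimum */
	for_each_cpu(cpu, dcmi->boosted) {
		if (cpumask_test_cpu(cpu, mask))
			continue;
		freq_qos_update_request(&per_cpu(dcmi_qos_req, cpu),
					FREQ_QOS_MIN_DEFAULT_VALUE);
	}

	/*
	 * CPUs in the new mask get the current target, which may itself be
	 * FREQ_QOS_MIN_DEFAULT_VALUE when no boost is wanted, hence the
	 * unconditional update rather than only touching newcomers.
	 */
	for_each_cpu(cpu, mask)
		freq_qos_update_request(&per_cpu(dcmi_qos_req, cpu),
					dcmi->targeted_frequency);

	cpumask_copy(dcmi->boosted, mask);

	mutex_unlock(&dcmi->freq_lock);
}

Again, the per-CPU request handling is an assumption on my side; adjust it
to however v5 actually stores and registers those requests.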
Does that make sense to you?