Message-ID: <20221026035839.GB21523@ranerica-svr.sc.intel.com>
Date: Tue, 25 Oct 2022 20:58:39 -0700
From: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Ricardo Neri <ricardo.neri@...el.com>,
"Ravi V. Shankar" <ravi.v.shankar@...el.com>,
Ben Segall <bsegall@...gle.com>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Len Brown <len.brown@...el.com>, Mel Gorman <mgorman@...e.de>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
Steven Rostedt <rostedt@...dmis.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Valentin Schneider <vschneid@...hat.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, "Tim C . Chen" <tim.c.chen@...el.com>
Subject: Re: [RFC PATCH 12/23] thermal: intel: hfi: Convert table_lock to use
flags-handling variants
On Tue, Sep 27, 2022 at 01:34:07PM +0200, Peter Zijlstra wrote:
> On Fri, Sep 09, 2022 at 04:11:54PM -0700, Ricardo Neri wrote:
>
> > --- a/drivers/thermal/intel/intel_hfi.c
> > +++ b/drivers/thermal/intel/intel_hfi.c
> > @@ -175,9 +175,10 @@ static struct workqueue_struct *hfi_updates_wq;
> > static void get_hfi_caps(struct hfi_instance *hfi_instance,
> > struct thermal_genl_cpu_caps *cpu_caps)
> > {
> > + unsigned long flags;
> > int cpu, i = 0;
> >
> > - raw_spin_lock_irq(&hfi_instance->table_lock);
> > + raw_spin_lock_irqsave(&hfi_instance->table_lock, flags);
> > for_each_cpu(cpu, hfi_instance->cpus) {
> > struct hfi_cpu_data *caps;
> > s16 index;
(Another email I thought I had sent but did not. Sorry!)
>
> ^^^^ Anti-pattern alert!
>
> Now your IRQ latency depends on nr_cpus -- which is a fair fail. The
> existing code is already pretty crap in that it has the preemption
> latency depend on nr_cpus.
I see.
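
One way the IRQ-off section could be bounded would be to take and drop the lock per CPU instead of across the whole loop, so the worst-case IRQs-disabled time no longer scales with nr_cpus. This is only an untested sketch against the names visible in the quoted patch; whether per-CPU consistency of the table is acceptable here (rather than a single consistent snapshot) is an assumption:

```
/* Untested sketch: bound IRQ-off time by locking per iteration.
 * Each critical section now covers one CPU's entry instead of
 * the full nr_cpus walk. Assumes readers do not need a single
 * consistent snapshot of the whole table.
 */
static void get_hfi_caps(struct hfi_instance *hfi_instance,
			 struct thermal_genl_cpu_caps *cpu_caps)
{
	int cpu, i = 0;

	for_each_cpu(cpu, hfi_instance->cpus) {
		unsigned long flags;

		raw_spin_lock_irqsave(&hfi_instance->table_lock, flags);
		/* ... read this CPU's entry from hfi_instance->local_table ... */
		raw_spin_unlock_irqrestore(&hfi_instance->table_lock, flags);
		i++;
	}
}
```

Alternatively, the table could be memcpy'd to a local buffer under the lock (a bounded cost) and parsed with IRQs enabled.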
>
> While I'm here looking at the HFI stuff, did they fix that HFI interrupt
> broadcast mess already? Sending an interrupt to *all* CPUs is quite
> insane.
This issue has been raised with the hardware teams and they are looking into
fixes. The issue, however, may persist on several models while a fix
propagates.
Thanks and BR,
Ricardo