Message-ID: <275e34c2e67a85c087ff983354bf74b5257b2fc4.camel@linux.intel.com>
Date: Tue, 12 Sep 2023 12:44:10 -0700
From: srinivas pandruvada <srinivas.pandruvada@...ux.intel.com>
To: "Rafael J. Wysocki" <rafael@...nel.org>
Cc: daniel.lezcano@...aro.org, rui.zhang@...el.com,
linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 0/7] thermal: processor_thermal: Support workload hint
Hi Rafael,
On Tue, 2023-09-12 at 16:09 +0200, Rafael J. Wysocki wrote:
> On Tue, Aug 29, 2023 at 2:23 AM Srinivas Pandruvada
> <srinivas.pandruvada@...ux.intel.com> wrote:
> >
> >
[...]
> > --
>
> There is a slight issue with the patch ordering in this series,
> because the interface to enable the interrupt should only be provided
> after implementing the interrupt handlers. I don't think that anyone
> will apply the series partially and try to enable the feature,
> though.
Thanks!
>
> Also, I'm not actually sure if proc_thermal_wt_intr_callback() can
> run
> safely against the work item scheduled in proc_thermal_irq_handler()
> in case the workload hint one triggers along with a thermal threshold
> one. I think that the access to MMIO is cached, so what if they both
> try to update the same cache line at the same time? Or are they
> guaranteed to be different cache lines?
These two status registers are 90 cache lines apart. Looking at all the
status register offsets in this BAR, they are each several cache lines
apart. Also, this BAR is non-prefetchable, so contiguous data can't be
fetched ahead of a read.
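To illustrate (the offsets, bit masks and helper names below are made
up for the example, not the real register map): each path reads and
clears only its own 64-bit status register in the mapped BAR, and with
64-byte lines the two registers end up 90 * 64 = 0x1680 bytes apart,
so they can never share a cache line.

	#include <linux/io.h>
	#include <linux/types.h>

	/* Hypothetical offsets for illustration: 90 * 64-byte lines = 0x1680 */
	#define WLT_INT_STATUS_OFFSET		0x0000
	#define THERM_INT_STATUS_OFFSET	0x1680
	#define WLT_INT_STS_BIT			0x1ULL
	#define THERM_INT_STS_BIT		0x1ULL

	/* Workload-hint callback path: touches only its own status register */
	static void clear_wlt_status(void __iomem *base)
	{
		u64 status = readq(base + WLT_INT_STATUS_OFFSET);

		writeq(status & ~WLT_INT_STS_BIT, base + WLT_INT_STATUS_OFFSET);
	}

	/* Threshold work item path: a separate register, many cache lines away */
	static void clear_therm_status(void __iomem *base)
	{
		u64 status = readq(base + THERM_INT_STATUS_OFFSET);

		writeq(status & ~THERM_INT_STS_BIT, base + THERM_INT_STATUS_OFFSET);
	}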
>
> Anyway, tentatively applied as 6.7 material, but I've changed the
> second patch somewhat, because I couldn't convince myself that the
> implicit type conversions in
> processor_thermal_mbox_interrupt_config()
> would always do the right thing regardless of the numbers involved,
> so
> please check the result in my bleeding-edge branch.
>
Diffing against your bleeding-edge branch, there is only one change, in
processor_thermal_mbox.c. I tested that change and it works fine.
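For the record, the kind of pattern that avoids the implicit-conversion
concern looks roughly like this (field names and bit positions here are
illustrative only, not the actual mailbox layout): each narrower value
is masked to its field width and cast to u64 before shifting, so
integer promotion or sign extension can't change the composed data
word.

	#include <linux/types.h>

	/* Hypothetical field layout, for illustration only */
	static u64 build_intr_config(int enable, int offset, int index)
	{
		u64 data = 0;

		/* Mask each field to its width, then widen before shifting */
		data |= (u64)(enable & 0x1);
		data |= (u64)(offset & 0x7f) << 8;
		data |= (u64)(index & 0xf) << 16;

		return data;
	}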
Thanks,
Srinivas
> Thanks!