Message-ID: <a4581b79-c2b2-c2bd-b96c-98389ea15a1e@linaro.org>
Date: Mon, 10 Jan 2022 14:33:39 +0100
From: Daniel Lezcano <daniel.lezcano@...aro.org>
To: Ulf Hansson <ulf.hansson@...aro.org>
Cc: rjw@...ysocki.net, lukasz.luba@....com, robh@...nel.org,
heiko@...ech.de, arnd@...aro.org, linux-kernel@...r.kernel.org,
linux-pm@...r.kernel.org, Arnd Bergmann <arnd@...db.de>,
"open list:GENERIC INCLUDE/ASM HEADER FILES"
<linux-arch@...r.kernel.org>
Subject: Re: [PATCH v5 1/6] powercap/drivers/dtpm: Move dtpm table from init
to data section
On 07/01/2022 15:49, Ulf Hansson wrote:
> On Fri, 7 Jan 2022 at 14:15, Daniel Lezcano <daniel.lezcano@...aro.org> wrote:
>>
>> On 31/12/2021 14:33, Ulf Hansson wrote:
>>> On Sat, 18 Dec 2021 at 14:00, Daniel Lezcano <daniel.lezcano@...aro.org> wrote:
>>>>
>>>> The dtpm table is used to let the different dtpm backends register
>>>> their setup callbacks in a single place, preventing the export of
>>>> multiple functions all around the kernel. That keeps the dtpm code
>>>> self-encapsulated.
>>>
>>> Well, that's not entirely true. The dtpm code and its backends (or
>>> ops, whatever we call them) are already maintained from a single
>>> place, the /drivers/powercap/* directory. I assume we intend to keep
>>> it like this going forward too, right?
>>>
>>> That is also what patch4 with the devfreq backend continues to conform to.
>>>
>>>>
>>>> The dtpm hierarchy will be passed as a parameter by platform-specific
>>>> code, and that will lead to the creation of the different dtpm nodes.
>>>>
>>>> The function creating the hierarchy could be called from a module at
>>>> init time or when it is loaded. However, by that time the table has
>>>> already been freed, as it belongs to the init section, and creating
>>>> the hierarchy will lead to an invalid memory access.
>>>>
>>>> Fix this by moving the table to the data section.
>>>
>>> With the above said, I find it a bit odd to put a table in the data
>>> section like this. Especially since the only remaining argument for it
>>> is to avoid exporting functions, which isn't needed anyway.
>>>
>>> I mean, it would be silly if we should continue to put subsystem
>>> specific tables in here, to just let them contain a set of subsystem
>>> specific callbacks.
>>
>> So I tried to change the approach, but so far I have not been able to
>> find an alternative that keeps the code self-encapsulated without
>> introducing cyclic dependencies.
>>
>> I suggest keeping the patch as it is and double-checking whether it
>> makes sense to change it after more dtpm backends are added.
>>
>> Alternatively I can copy the table to a dynamically allocated table.
>
> I am not sure I understand the problem. You don't need a "table of
> callbacks" at all, at least to start with.
>
> Instead, what you need is to make a call to a function, or actually
> one call per supported dtpm type from dtpm_setup_dt() (introduced in
> patch2).
>
> For CPUs, you would simply call dtpm_cpu_setup() (introduced in
> patch3) from dtpm_setup_dt(), rather than walking the dtpm table and
> invoking the ->setup() callback.
>
> Did that make sense to you?
Yeah, I already got the point ;)
I'll convert it to something else, and we will see in the future whether
it needs to be converted back to a table.
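
Something like the sketch below is what I have in mind. It is only an
illustration: dtpm_setup_dt() comes from patch2 and dtpm_cpu_setup() from
patch3, but the (empty) argument lists, the return type and the error
handling here are assumptions, not the final code.

int dtpm_setup_dt(void)
{
	int ret;

	/*
	 * One direct call per supported dtpm type, instead of walking
	 * the dtpm table and invoking each entry's ->setup() callback.
	 */
	ret = dtpm_cpu_setup();
	if (ret)
		return ret;

	/* A future devfreq backend would get its own direct call here. */

	return 0;
}
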
> Going forward, when we decide to introduce the option to add/remove
> support for dtpm types dynamically, you can then convert to a
> dynamically allocated table.
>
> [...]
>
> Kind regards
> Uffe
>
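
For reference, the init-section table being discussed follows the usual
linker-section pattern, roughly along the lines below. This is a
simplified sketch, not the actual include/linux/dtpm.h contents; the
struct fields, macro names and the setup() prototype are illustrative
only.

struct dtpm_descr {
	const char *name;
	int (*setup)(void);
};

/* Start/end markers emitted by the linker script for "__dtpm_table". */
extern struct dtpm_descr __dtpm_table[];
extern struct dtpm_descr __dtpm_table_end[];

/*
 * Each backend drops one entry into the "__dtpm_table" section. If that
 * section is placed in init memory, the entries are freed once boot
 * finishes, so walking the table from a module loaded later touches
 * freed memory; keeping the section in regular data avoids that.
 */
#define DTPM_DECLARE(_name, _setup)				\
	static struct dtpm_descr __dtpm_table_entry_##_name	\
	__used __section("__dtpm_table") = {			\
		.name = #_name,					\
		.setup = _setup,				\
	}

#define for_each_dtpm_table(descr)				\
	for (descr = __dtpm_table;				\
	     descr < __dtpm_table_end;				\
	     descr++)
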
--
<http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs
Follow Linaro: <http://www.facebook.com/pages/Linaro> Facebook |
<http://twitter.com/#!/linaroorg> Twitter |
<http://www.linaro.org/linaro-blog/> Blog