Message-ID: <20240501011629.GB29652@ranerica-svr.sc.intel.com>
Date: Tue, 30 Apr 2024 18:16:29 -0700
From: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
To: "Zhang, Rui" <rui.zhang@...el.com>
Cc: "Wysocki, Rafael J" <rafael.j.wysocki@...el.com>,
"srinivas.pandruvada@...ux.intel.com" <srinivas.pandruvada@...ux.intel.com>,
"Brown, Len" <len.brown@...el.com>,
"stanislaw.gruszka@...ux.intel.com" <stanislaw.gruszka@...ux.intel.com>,
"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Neri, Ricardo" <ricardo.neri@...el.com>
Subject: Re: [PATCH 4/4] thermal: intel: hfi: Tune the number of CPU
capabilities per netlink event
On Tue, Apr 30, 2024 at 05:07:54AM +0000, Zhang, Rui wrote:
> On Mon, 2024-04-29 at 16:41 -0700, Ricardo Neri wrote:
> > The number of updated CPU capabilities per netlink event is
> > hard-coded to 16. On systems with more than 16 CPUs it takes more
> > than one thermal netlink event to relay all the new capabilities
> > when processing an HFI interrupt. This adds unnecessary overhead.
> >
> > Make the number of updated capabilities per event tuneable via
> > debugfs. Users can then experiment with different values.
> >
> Is there a limit on the number of CPUs supported in one netlink
> event?
IIUC, the only limit is the size of the buffer for the message. intel_hfi
allocates the message buffer based on the number of CPUs in the HFI instance.
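
To put rough numbers on it, here is a back-of-the-envelope sketch
(standalone userspace C; the three-u32-per-CPU attribute layout and the
netlink sizing constants are my assumptions, not the actual intel_hfi
allocation code):

/*
 * Sizing sketch: estimate the attribute payload needed to carry one
 * capability entry per CPU, assuming each entry is a nested netlink
 * attribute holding three u32 attributes (id, performance, efficiency).
 */
#include <stdio.h>

#define NLA_HDRLEN	4			/* nlattr header, 4-byte aligned */
#define NLA_ALIGN(len)	(((len) + 3) & ~3U)

static unsigned int per_cpu_entry_size(void)
{
	/* nested header + 3 x (attribute header + u32 payload) */
	return NLA_HDRLEN + 3 * (NLA_HDRLEN + NLA_ALIGN(sizeof(unsigned int)));
}

int main(void)
{
	unsigned int ncpus = 64;	/* example HFI instance size */

	printf("~%u bytes of attribute payload for %u CPUs\n",
	       ncpus * per_cpu_entry_size(), ncpus);
	return 0;
}

The point being that the allocation scales with the CPU count of the
instance; the 16 is only about how the payload is chunked into events.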
>
> IMO, we still have to use a fixed number here because debugfs can be
> changed by someone else, and userspace applications like intel-lpmd
> cannot assume that the netlink message follows what they set.
But you don't know how many messages with 16-CPU payloads you will receive
for a single update, no? Yes, you can infer it from the number of online
CPUs, but still.

But yes, now lpmd would receive an unknown number of messages with payloads
of unknown size.
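
Just to illustrate what that bookkeeping would look like, a toy sketch
(the event sizes and the online CPU count below are made-up numbers; a
real consumer such as intel-lpmd would get them from the netlink events
and sysfs):

#include <stdio.h>

int main(void)
{
	unsigned int online = 40;                    /* assumed online CPU count */
	unsigned int event_sizes[] = { 16, 16, 8 };  /* CPUs carried per event */
	unsigned int seen = 0, i = 0;

	while (seen < online && i < sizeof(event_sizes) / sizeof(event_sizes[0])) {
		seen += event_sizes[i++];
		printf("event %u: %u/%u CPUs updated so far\n", i, seen, online);
	}

	/*
	 * Fragile: this only works if every HFI update is guaranteed to
	 * cover all online CPUs, and if no CPU goes on/offline in between.
	 */
	return 0;
}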
>
> Or can we append one magic item at the end of each update?
> Userspace can just check for the magic item regardless of the number of
> CPUs per netlink event.
AFAIK, only HFI and lpmd use the CPU capabilities thermal netlink message.
I guess it could be done.
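
Something along these lines on the consumer side, I think (the terminator
entry and its handling below are made up for illustration; there is no
such thermal netlink attribute today):

/*
 * Sketch of the "magic item" idea from the consumer's point of view:
 * keep collecting per-CPU capability entries until a terminator entry
 * shows up, regardless of how many CPUs each event carries.
 */
#include <stdbool.h>
#include <stdio.h>

enum cap_entry_type { CAP_ENTRY_CPU, CAP_ENTRY_END };

struct cap_entry {
	enum cap_entry_type type;
	unsigned int cpu, perf, eff;
};

static bool consume_entry(const struct cap_entry *e)
{
	if (e->type == CAP_ENTRY_END)
		return true;		/* update complete */
	printf("cpu %u: perf %u eff %u\n", e->cpu, e->perf, e->eff);
	return false;
}

int main(void)
{
	/* Fake stream standing in for entries parsed out of netlink events. */
	struct cap_entry stream[] = {
		{ CAP_ENTRY_CPU, 0, 80, 60 },
		{ CAP_ENTRY_CPU, 1, 75, 65 },
		{ CAP_ENTRY_END, 0, 0, 0 },
	};

	for (unsigned int i = 0; i < sizeof(stream) / sizeof(stream[0]); i++)
		if (consume_entry(&stream[i]))
			break;
	return 0;
}

Then the consumer would not care how the producer splits an update across
events.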