Date: Tue, 30 Apr 2024 05:07:54 +0000
From: "Zhang, Rui" <rui.zhang@...el.com>
To: "ricardo.neri-calderon@...ux.intel.com"
	<ricardo.neri-calderon@...ux.intel.com>, "Wysocki, Rafael J"
	<rafael.j.wysocki@...el.com>
CC: "srinivas.pandruvada@...ux.intel.com"
	<srinivas.pandruvada@...ux.intel.com>, "Brown, Len" <len.brown@...el.com>,
	"stanislaw.gruszka@...ux.intel.com" <stanislaw.gruszka@...ux.intel.com>,
	"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "Neri,
 Ricardo" <ricardo.neri@...el.com>
Subject: Re: [PATCH 4/4] thermal: intel: hfi: Tune the number of CPU
 capabilities per netlink event

On Mon, 2024-04-29 at 16:41 -0700, Ricardo Neri wrote:
> The number of updated CPU capabilities per netlink event is hard-coded
> to 16. On systems with more than 16 CPUs, it takes more than one
> thermal netlink event to relay all the new capabilities when processing
> an HFI interrupt. This adds unnecessary overhead.
> 
> Make the number of updated capabilities per event tunable via debugfs.
> Users can then experiment with different values.
> 
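[With a knob like the one this patch adds, the experiment would look roughly like the commands below. The debugfs path is an assumption for illustration; the actual directory depends on where hfi_debugfs_register() places it.]

```shell
# Assumed location of the new knob; exact path depends on the driver's
# debugfs directory. Requires CONFIG_DEBUG_FS and a mounted debugfs.
cat /sys/kernel/debug/intel_hfi/thermnl_caps_per_event          # read current chunk size
echo 32 | sudo tee /sys/kernel/debug/intel_hfi/thermnl_caps_per_event  # try 32 caps/event
```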
Is there a limit on the number of CPUs that can be described in one
netlink event?

IMO, we still have to use a fixed number here, because debugfs can be
changed by someone else, and a userspace application like intel-lpmd
cannot assume that the netlink messages follow the value it set.

Or could we append one magic item at the end of each update? Userspace
could then just check for the magic item, regardless of the number of
CPUs per netlink event.
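[A terminator record along those lines could look like the sketch below. This is purely illustrative: the struct layout, the CAP_END value, and caps_in_update() are made-up names for this sketch, not part of the current thermal netlink ABI.]

```c
#include <assert.h>

/* Hypothetical record layout; not the real thermal netlink attribute
 * format. CAP_END is an assumed magic CPU id marking end-of-update. */
#define CAP_END 0xffffu

struct cpu_cap {
	unsigned int cpu;	/* CPU id, or CAP_END for the terminator */
	unsigned int perf;	/* performance capability */
	unsigned int eff;	/* efficiency capability */
};

/*
 * Userspace side: count the capability records up to the terminator,
 * without knowing how many capabilities the kernel packs per event.
 */
static int caps_in_update(const struct cpu_cap *caps, int max)
{
	int n;

	for (n = 0; n < max; n++)
		if (caps[n].cpu == CAP_END)
			break;
	return n;
}
```

With this scheme, changing the kernel-side chunk size would not break a consumer: it always scans until it sees the terminator (or exhausts the message).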

thanks,
rui

> We already take the hfi_instance_lock when submitting thermal netlink
> updates. Use it to serialize debugfs accesses to
> hfi_thermnl_caps_per_event.
> 
> Suggested-by: Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>
> Signed-off-by: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
> ---
> Cc: Len Brown <len.brown@...el.com>
> Cc: Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>
> Cc: Stanislaw Gruszka <stanislaw.gruszka@...ux.intel.com>
> Cc: Zhang Rui <rui.zhang@...el.com>
> Cc: linux-pm@...r.kernel.org
> Cc: linux-kernel@...r.kernel.org
> ---
>  drivers/thermal/intel/intel_hfi.c | 34 ++++++++++++++++++++++++++-----
>  1 file changed, 29 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/thermal/intel/intel_hfi.c b/drivers/thermal/intel/intel_hfi.c
> index d6d3544509fc..d5163b9766c0 100644
> --- a/drivers/thermal/intel/intel_hfi.c
> +++ b/drivers/thermal/intel/intel_hfi.c
> @@ -175,6 +175,7 @@ static struct workqueue_struct *hfi_updates_wq;
>  
>  /* Keep this variable 8-byte aligned to get atomic accesses. */
>  static unsigned long hfi_update_delay = HFI_UPDATE_DELAY;
> +static int hfi_thermnl_caps_per_event = HFI_THERMNL_CAPS_PER_EVENT;
>  
>  #ifdef CONFIG_DEBUG_FS
>  static int hfi_update_delay_get(void *data, u64 *val)
> @@ -205,6 +206,25 @@ static int hfi_update_delay_set(void *data, u64 val)
>  DEFINE_DEBUGFS_ATTRIBUTE(hfi_update_delay_fops, hfi_update_delay_get,
>                          hfi_update_delay_set, "%llu\n");
>  
> +static int hfi_thermnl_caps_per_event_get(void *data, u64 *val)
> +{
> +       mutex_lock(&hfi_instance_lock);
> +       *val = hfi_thermnl_caps_per_event;
> +       mutex_unlock(&hfi_instance_lock);
> +       return 0;
> +}
> +
> +static int hfi_thermnl_caps_per_event_set(void *data, u64 val)
> +{
> +       mutex_lock(&hfi_instance_lock);
> +       hfi_thermnl_caps_per_event = val;
> +       mutex_unlock(&hfi_instance_lock);
> +       return 0;
> +}
> +
> +DEFINE_DEBUGFS_ATTRIBUTE(hfi_thermnl_caps_per_event_fops,
> +                        hfi_thermnl_caps_per_event_get,
> +                        hfi_thermnl_caps_per_event_set, "%llu\n");
>  static struct dentry *hfi_debugfs_dir;
>  
>  static void hfi_debugfs_unregister(void)
> @@ -226,6 +246,11 @@ static void hfi_debugfs_register(void)
>         if (!f)
>                 goto err;
>  
> +       f = debugfs_create_file("thermnl_caps_per_event", 0644, hfi_debugfs_dir,
> +                               NULL, &hfi_thermnl_caps_per_event_fops);
> +       if (!f)
> +               goto err;
> +
>         return;
>  err:
>         hfi_debugfs_unregister();
> @@ -286,16 +311,15 @@ static void update_capabilities(struct hfi_instance *hfi_instance)
>  
>         get_hfi_caps(hfi_instance, cpu_caps);
>  
> -       if (cpu_count < HFI_THERMNL_CAPS_PER_EVENT)
> +       if (cpu_count < hfi_thermnl_caps_per_event)
>                 goto last_cmd;
>  
>         /* Process complete chunks of HFI_THERMNL_CAPS_PER_EVENT capabilities. */
>         for (i = 0;
> -            (i + HFI_THERMNL_CAPS_PER_EVENT) <= cpu_count;
> -            i += HFI_THERMNL_CAPS_PER_EVENT)
> -
> -               thermal_genl_cpu_capability_event(HFI_THERMNL_CAPS_PER_EVENT,
> +            (i + hfi_thermnl_caps_per_event) <= cpu_count;
> +            i += hfi_thermnl_caps_per_event)
> +               thermal_genl_cpu_capability_event(hfi_thermnl_caps_per_event,
>                                                   &cpu_caps[i]);
> -
>         cpu_count = cpu_count - i;
>  
>  last_cmd:
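[For clarity, the chunking behavior that update_capabilities() implements, and that this patch parametrizes, reduces to the standalone sketch below: full chunks of caps_per_event capabilities each get one netlink event, and any remainder gets a final event. events_for() and caps_per_event are illustrative names, not symbols from the driver.]

```c
#include <assert.h>

/*
 * Mirror of the loop in update_capabilities(): one netlink event per
 * complete chunk of caps_per_event capabilities, then a final event
 * carrying the remainder. Returns how many events a given CPU count
 * would generate.
 */
static int events_for(int cpu_count, int caps_per_event)
{
	int i = 0, events = 0;

	if (cpu_count < caps_per_event)
		goto last_cmd;

	/* Complete chunks of caps_per_event capabilities. */
	for (i = 0; (i + caps_per_event) <= cpu_count; i += caps_per_event)
		events++;	/* stands in for thermal_genl_cpu_capability_event() */

	cpu_count = cpu_count - i;

last_cmd:
	if (cpu_count)
		events++;	/* trailing partial (or sole) event */

	return events;
}
```

With the default of 16, a 40-CPU system needs three events per HFI interrupt; raising the knob to 40 or more collapses that to one, which is the overhead reduction the commit message describes.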
