Date: Tue, 26 Mar 2024 21:57:01 +0100
From: Stephan Gerhold <stephan@...hold.net>
To: Konrad Dybcio <konrad.dybcio@...aro.org>
Cc: Bjorn Andersson <andersson@...nel.org>,
	Georgi Djakov <djakov@...nel.org>, Shawn Guo <shawn.guo@...aro.org>,
	Marijn Suijten <marijn.suijten@...ainline.org>,
	linux-arm-msm@...r.kernel.org, linux-pm@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/4] interconnect: qcom: icc-rpm: Remodel how QoS
 settings are stored

On Tue, Mar 26, 2024 at 08:42:35PM +0100, Konrad Dybcio wrote:
> Currently, the QoS settings are stored in the node data, even though
> they're a property of the bus/provider instead. Moreover, they are only
> needed during the probe step, so they can be easily moved into struct
> qcom_icc_desc.
> 
> Reshuffle things around to make the definitions readable and easy to
> compare against a reference. As a nice bonus, this shaves off a lot of
> bytes and a few milliseconds here and there.
> 
> As an example, bloat-o-meter reports this on sm6115.o:
> Total: Before=14799, After=13263, chg -10.38%
> 
> Signed-off-by: Konrad Dybcio <konrad.dybcio@...aro.org>
> ---
>  drivers/interconnect/qcom/icc-rpm.c | 123 +++++----
>  drivers/interconnect/qcom/icc-rpm.h |  13 +-
>  drivers/interconnect/qcom/msm8909.c | 268 ++++++++++---------
>  drivers/interconnect/qcom/msm8916.c | 153 ++++++-----
>  drivers/interconnect/qcom/msm8939.c | 157 ++++++-----
>  drivers/interconnect/qcom/msm8996.c | 517 +++++++++++++++++-------------------
>  drivers/interconnect/qcom/qcm2290.c | 416 +++++++++++++++++------------
>  drivers/interconnect/qcom/sdm660.c  | 393 +++++++++++++--------------
>  drivers/interconnect/qcom/sm6115.c  | 239 ++++++++++++-----
>  9 files changed, 1224 insertions(+), 1055 deletions(-)
> 
> [...]
> @@ -70,20 +68,18 @@ struct qcom_icc_provider {
>  };
>  
>  /**
> - * struct qcom_icc_qos - Qualcomm specific interconnect QoS parameters
> + * struct qcom_icc_qos_data - Qualcomm specific interconnect QoS parameters
>   * @areq_prio: node requests priority
>   * @prio_level: priority level for bus communication
>   * @limit_commands: activate/deactivate limiter mode during runtime
> - * @ap_owned: indicates if the node is owned by the AP or by the RPM
>   * @qos_mode: default qos mode for this node
>   * @qos_port: qos port number for finding qos registers of this node
>   * @urg_fwd_en: enable urgent forwarding
>   */
> -struct qcom_icc_qos {
> +struct qcom_icc_qos_data {
>  	u32 areq_prio;
>  	u32 prio_level;
>  	bool limit_commands;
> -	bool ap_owned;
>  	int qos_mode;
>  	int qos_port;
>  	bool urg_fwd_en;

Side note: there is potential for more micro-optimization here: you
could save 4 bytes of padding by moving all the bools together at the
end of the struct (see the sketch below). :D
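
A minimal sketch of the reordering, using the member set from the hunk
above (offsets assume the usual 4-byte u32/int alignment and 1-byte
bool):

	struct qcom_icc_qos_data {
		u32 areq_prio;		/* offset  0 */
		u32 prio_level;		/* offset  4 */
		int qos_mode;		/* offset  8 */
		int qos_port;		/* offset 12 */
		bool limit_commands;	/* offset 16 */
		bool urg_fwd_en;	/* offset 17, then 2 bytes tail padding */
	};				/* 20 bytes instead of 24 */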

> [...]
> @@ -134,6 +131,8 @@ struct qcom_icc_desc {
>  	bool keep_alive;
>  	enum qcom_icc_type type;
>  	const struct regmap_config *regmap_cfg;
> +	const struct qcom_icc_qos_data * const qos_data;
> +	const u16 qos_data_num;
>  	unsigned int qos_offset;

Nitpick: Why is the u16 const when the other (non-pointer) members are
not? The u16 also feels a bit like overkill here. The struct would have
exactly the same size with a full unsigned int because of padding.

Alternatively, you could consider using an empty last entry as a
sentinel instead of adding the count (i.e. with NOC_QOS_MODE_INVALID =
0), roughly as sketched below. Not sure which is cleaner here.
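
Rough sketch with hypothetical example data (this relies on
NOC_QOS_MODE_INVALID being 0, so the zero-initialized last entry acts
as the terminator):

	static const struct qcom_icc_qos_data example_qos_data[] = {
		{ .qos_port = 0, .qos_mode = NOC_QOS_MODE_FIXED, .prio_level = 1 },
		{ .qos_port = 1, .qos_mode = NOC_QOS_MODE_BYPASS },
		{ /* sentinel: .qos_mode == NOC_QOS_MODE_INVALID */ },
	};

and in the probe path, something like:

	const struct qcom_icc_qos_data *qd;

	for (qd = desc->qos_data; qd->qos_mode != NOC_QOS_MODE_INVALID; qd++) {
		/* program the QoS registers for qd->qos_port */
	}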

I haven't looked closely at the actual conversion of the definitions in
the drivers. What is the chance that you made an accidental mistake in
there? Or was it scripted? :D

Thanks,
Stephan
