Message-ID: <20240508014530.GB25316@hu-mdtipton-lv.qualcomm.com>
Date: Tue, 7 May 2024 18:45:30 -0700
From: Mike Tipton <quic_mdtipton@...cinc.com>
To: Konrad Dybcio <konrad.dybcio@...aro.org>
CC: Bjorn Andersson <andersson@...nel.org>, Georgi Djakov <djakov@...nel.org>,
        Shawn Guo <shawn.guo@...aro.org>,
        Marijn Suijten <marijn.suijten@...ainline.org>,
        <linux-arm-msm@...r.kernel.org>, <linux-pm@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, Stephan Gerhold <stephan@...hold.net>,
        <quic_okukatla@...cinc.com>
Subject: Re: [PATCH 4/4] interconnect: qcom: icc-rpm: Remodel how QoS
 settings are stored

Hi Konrad,

On Tue, Mar 26, 2024 at 08:42:35PM +0100, Konrad Dybcio wrote:
> Currently, the QoS settings are stored in the node data, even though
> they're a property of the bus/provider instead. Moreover, they are only
> needed during the probe step, so they can be easily moved into struct
> qcom_icc_desc.

The QoS settings *are* fundamentally a property of the node. The nodes
are 1:1 with the NOC ports. And the QoS settings tune the priority of
the data coming out of those ports. So, logically speaking, the QoS data
does belong in the node structs along with the rest of the data for that
node and port.

Only a subset of NOC ports supports configurable QoS, but for those that
do, it's a property of the port itself. Those settings impact just that
specific port and nothing else.

The current method of directly embedding the qcom_icc_qos_data struct
into qcom_icc_node isn't optimal, since that data is irrelevant for
ports that don't support it. So, the struct size could be reduced by
converting qcom_icc_node::qos into a pointer instead. But I don't think
we should separate the QoS settings from the node struct entirely. That makes
it very difficult to understand which QoS settings are impacting which
port.
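
A pointer would give you that size saving while still keeping each port's
QoS settings next to the node that owns them. Roughly (untested sketch,
and "mas_pcie_0_qos" is just an illustrative name):

  static const struct qcom_icc_qos_data mas_pcie_0_qos = {
  	.qos_port = 0,
  	.qos_mode = NOC_QOS_MODE_FIXED,
  	.areq_prio = 1,
  	.prio_level = 1,
  };

  static struct qcom_icc_node mas_pcie_0 = {
  	.buswidth = 8,
  	.mas_rpm_id = 65,
  	.slv_rpm_id = -1,
  	.ap_owned = true,
  	/* NULL for ports without configurable QoS */
  	.qos = &mas_pcie_0_qos,
  	.num_links = ARRAY_SIZE(mas_a0noc_common_links),
  	.links = mas_a0noc_common_links
  };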

For example...

>  
> diff --git a/drivers/interconnect/qcom/msm8996.c b/drivers/interconnect/qcom/msm8996.c
> index 788131400cd1..96c8ea8edd7a 100644
> --- a/drivers/interconnect/qcom/msm8996.c
> +++ b/drivers/interconnect/qcom/msm8996.c
> @@ -43,11 +43,7 @@ static struct qcom_icc_node mas_pcie_0 = {
>  	.buswidth = 8,
>  	.mas_rpm_id = 65,
>  	.slv_rpm_id = -1,
> -	.qos.ap_owned = true,
> -	.qos.qos_mode = NOC_QOS_MODE_FIXED,
> -	.qos.areq_prio = 1,
> -	.qos.prio_level = 1,
> -	.qos.qos_port = 0,
> +	.ap_owned = true,
>  	.num_links = ARRAY_SIZE(mas_a0noc_common_links),
>  	.links = mas_a0noc_common_links
>  };

[...]

>  
> +static const struct qcom_icc_qos_data a0noc_qos_data[] = {
> +	{
> +		.qos_port = 0,
> +		.qos_mode = NOC_QOS_MODE_FIXED,
> +		.areq_prio = 1,
> +		.prio_level = 1,
> +		.urg_fwd_en = false,
> +		.limit_commands = false,
> +	}, {

How can I tell that these a0noc_qos_data[0] settings are for the
mas_pcie_0 port? It's not possible from the code anymore. *We* could
figure it out internally by looking at the NOC SWI to determine the
qos_port index. But this should be obvious from the code itself.

> +		.qos_port = 1,
> +		.qos_mode = NOC_QOS_MODE_FIXED,
> +		.areq_prio = 1,
> +		.prio_level = 1,
> +		.urg_fwd_en = false,
> +		.limit_commands = false,
> +	}, {
> +		.qos_port = 2,
> +		.qos_mode = NOC_QOS_MODE_FIXED,
> +		.areq_prio = 1,
> +		.prio_level = 1,
> +		.urg_fwd_en = false,
> +		.limit_commands = false,
> +	},
> +};
> +
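
If the shared per-provider arrays do stay, something as simple as tagging
each entry with the port it configures would keep the mapping visible,
e.g. (illustrative only):

  static const struct qcom_icc_qos_data a0noc_qos_data[] = {
  	{
  		/* mas_pcie_0 */
  		.qos_port = 0,
  		.qos_mode = NOC_QOS_MODE_FIXED,
  		.areq_prio = 1,
  		.prio_level = 1,
  	},
  	/* ... */
  };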
