Message-ID: <20201118041228.GG8532@builder.lan>
Date:   Tue, 17 Nov 2020 22:12:28 -0600
From:   Bjorn Andersson <bjorn.andersson@...aro.org>
To:     Mike Tipton <mdtipton@...eaurora.org>
Cc:     Georgi Djakov <georgi.djakov@...aro.org>, linux-pm@...r.kernel.org,
        linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] interconnect: qcom: qcs404: Remove gpu and display nodes

On Tue 17 Nov 17:16 CST 2020, Mike Tipton wrote:

> On 11/11/2020 2:07 AM, Georgi Djakov wrote:
> > The following errors are noticed during boot on a QCS404 board:
> > [    2.926647] qcom_icc_rpm_smd_send mas 6 error -6
> > [    2.934573] qcom_icc_rpm_smd_send mas 8 error -6
> > 
> > These errors show when we try to configure the GPU and display nodes,
> > which are defined in the topology, but these hardware blocks actually
> > do not exist on QCS404. According to the datasheet, GPU and display
> > are only present on QCS405 and QCS407.
> 
> Even on QCS405/407 where GPU and display are present, you'd still get these
> errors since these particular nodes aren't supported on RPM and are purely
> local. Instead of removing these we should just change their mas_rpm_id to
> -1. It's harmless to leave them in for QCS404 since they're only used for
> path aggregation. The same code can support all variants of the QCS400
> series. We just wouldn't expect anyone to actually vote these paths on
> QCS404. Similar to how the gcc-qcs404 clock provider still registers the GPU
> and MDP clocks.
> 

That would definitely be preferable and would save us from having 4 (?)
copies of qcs40x...

Regards,
Bjorn

> > 
> > Signed-off-by: Georgi Djakov <georgi.djakov@...aro.org>
> > ---
> >   drivers/interconnect/qcom/qcs404.c | 9 +++------
> >   1 file changed, 3 insertions(+), 6 deletions(-)
> > 
> > diff --git a/drivers/interconnect/qcom/qcs404.c b/drivers/interconnect/qcom/qcs404.c
> > index 9f992422e92f..2ed544e23ff3 100644
> > --- a/drivers/interconnect/qcom/qcs404.c
> > +++ b/drivers/interconnect/qcom/qcs404.c
> > @@ -20,8 +20,6 @@
> >   enum {
> >   	QCS404_MASTER_AMPSS_M0 = 1,
> > -	QCS404_MASTER_GRAPHICS_3D,
> > -	QCS404_MASTER_MDP_PORT0,
> >   	QCS404_SNOC_BIMC_1_MAS,
> >   	QCS404_MASTER_TCU_0,
> >   	QCS404_MASTER_SPDM,
> > @@ -156,8 +154,6 @@ struct qcom_icc_desc {
> >   	}
> >   DEFINE_QNODE(mas_apps_proc, QCS404_MASTER_AMPSS_M0, 8, 0, -1, QCS404_SLAVE_EBI_CH0, QCS404_BIMC_SNOC_SLV);
> > -DEFINE_QNODE(mas_oxili, QCS404_MASTER_GRAPHICS_3D, 8, 6, -1, QCS404_SLAVE_EBI_CH0, QCS404_BIMC_SNOC_SLV);
> > -DEFINE_QNODE(mas_mdp, QCS404_MASTER_MDP_PORT0, 8, 8, -1, QCS404_SLAVE_EBI_CH0, QCS404_BIMC_SNOC_SLV);
> >   DEFINE_QNODE(mas_snoc_bimc_1, QCS404_SNOC_BIMC_1_MAS, 8, 76, -1, QCS404_SLAVE_EBI_CH0);
> >   DEFINE_QNODE(mas_tcu_0, QCS404_MASTER_TCU_0, 8, -1, -1, QCS404_SLAVE_EBI_CH0, QCS404_BIMC_SNOC_SLV);
> >   DEFINE_QNODE(mas_spdm, QCS404_MASTER_SPDM, 4, -1, -1, QCS404_PNOC_INT_3);
> > @@ -231,8 +227,6 @@ DEFINE_QNODE(slv_lpass, QCS404_SLAVE_LPASS, 4, -1, -1, 0);
> >   static struct qcom_icc_node *qcs404_bimc_nodes[] = {
> >   	[MASTER_AMPSS_M0] = &mas_apps_proc,
> > -	[MASTER_OXILI] = &mas_oxili,
> > -	[MASTER_MDP_PORT0] = &mas_mdp,
> >   	[MASTER_SNOC_BIMC_1] = &mas_snoc_bimc_1,
> >   	[MASTER_TCU_0] = &mas_tcu_0,
> >   	[SLAVE_EBI_CH0] = &slv_ebi,
> > @@ -460,6 +454,9 @@ static int qnoc_probe(struct platform_device *pdev)
> >   	for (i = 0; i < num_nodes; i++) {
> >   		size_t j;
> > +		if (!qnodes[i])
> > +			continue;
> > +
> >   		node = icc_node_create(qnodes[i]->id);
> >   		if (IS_ERR(node)) {
> >   			ret = PTR_ERR(node);
> > 
