Message-ID: <e11e19ce-18e0-1fe9-8eda-aa12f8c87a73@quicinc.com>
Date: Thu, 21 Nov 2024 11:36:29 +0530
From: Md Sadre Alam <quic_mdalam@...cinc.com>
To: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
CC: <miquel.raynal@...tlin.com>, <richard@....at>, <vigneshr@...com>,
        <linux-mtd@...ts.infradead.org>, <linux-arm-msm@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <quic_srichara@...cinc.com>,
        <quic_nainmeht@...cinc.com>, <quic_laksd@...cinc.com>,
        <quic_varada@...cinc.com>
Subject: Re: [PATCH 2/2] mtd: rawnand: qcom: Fix onfi param page read



On 11/20/2024 12:36 PM, Manivannan Sadhasivam wrote:
> On Tue, Nov 19, 2024 at 02:50:58PM +0530, Md Sadre Alam wrote:
>> For QPIC V2 onwards there is a separate register to read the
>> last code word, "QPIC_NAND_READ_LOCATION_LAST_CW_n".
>>
>> qcom_param_page_type_exec() is used to read only one code word.
>> If we configure the number of code words to 1 in the QPIC_NAND_DEV0_CFG0
> 
> No 'we' in commit message. Also use imperative tone.
Ok
> 
>> register, then the QPIC controller thinks it is reading the last code
>> word. Since we have a separate register to read the last code word,
>> we have to configure the "QPIC_NAND_READ_LOCATION_LAST_CW_n" register
>> to fetch data from the QPIC buffer to system memory.
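In code, this boils down to picking the read-location register set based
on the controller version, roughly like the sketch below (helper names
are the existing ones in qcom_nandc.c; this just mirrors the hunk in the
diff further down, shown here for clarity):

	if (nandc->props->qpic_v2)
		/*
		 * QPIC v2+ treats a single-codeword read as the last
		 * codeword, so it must go through the dedicated
		 * LAST_CW read-location registers.
		 */
		nandc_set_read_loc_last(chip, NAND_READ_LOCATION_LAST_CW_0,
					0, len, 1);
	else
		/* Older controllers use the normal location 0 register. */
		nandc_set_read_loc_first(chip, NAND_READ_LOCATION_0,
					 0, len, 1);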
>>
>> Also, the minimum size for fetching data from the device to the QPIC
>> buffer is 512 bytes. If the size is less than 512 bytes, the data will
>> not be protected by ECC as per the QPIC standard. So while reading the
>> onfi parameter page from the NAND device, set nandc->buf_count = 512.
>>
> 
> This is a separate fix and should be in a separate patch.
Ok
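To keep it reviewable, the split-out patch would essentially carry just
the buffer sizing (sketch; same qcom_param_page_type_exec() path as in
the diff below):

	/*
	 * The controller transfers at least 512 bytes (one codeword) from
	 * the device to the QPIC buffer; anything shorter is not protected
	 * by ECC, so size the bounce buffer for a full codeword.
	 */
	nandc->buf_count = 512;
	memset(nandc->data_buffer, 0xff, nandc->buf_count);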
> 
>> Fixes: 89550beb098e ("mtd: rawnand: qcom: Implement exec_op()")
> 
> Please describe the impact of the issue. Add relevant failure messages, affected
> SoC names etc...
Sure, will update in the next revision.
> 
> Finally, you should also CC stable list to backport the fixes.
Ok
> 
> - Mani
> 
>> Signed-off-by: Md Sadre Alam <quic_mdalam@...cinc.com>
>> ---
>>   drivers/mtd/nand/raw/qcom_nandc.c | 14 +++++++++++---
>>   1 file changed, 11 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
>> index 34ee8555fb8a..6487f2126833 100644
>> --- a/drivers/mtd/nand/raw/qcom_nandc.c
>> +++ b/drivers/mtd/nand/raw/qcom_nandc.c
>> @@ -2859,7 +2859,12 @@ static int qcom_param_page_type_exec(struct nand_chip *chip,  const struct nand_
>>   	const struct nand_op_instr *instr = NULL;
>>   	unsigned int op_id = 0;
>>   	unsigned int len = 0;
>> -	int ret;
>> +	int ret, reg_base;
>> +
>> +	reg_base = NAND_READ_LOCATION_0;
>> +
>> +	if (nandc->props->qpic_v2)
>> +		reg_base = NAND_READ_LOCATION_LAST_CW_0;
>>   
>>   	ret = qcom_parse_instructions(chip, subop, &q_op);
>>   	if (ret)
>> @@ -2911,14 +2916,17 @@ static int qcom_param_page_type_exec(struct nand_chip *chip,  const struct nand_
>>   	op_id = q_op.data_instr_idx;
>>   	len = nand_subop_get_data_len(subop, op_id);
>>   
>> -	nandc_set_read_loc(chip, 0, 0, 0, len, 1);
>> +	if (nandc->props->qpic_v2)
>> +		nandc_set_read_loc_last(chip, reg_base, 0, len, 1);
>> +	else
>> +		nandc_set_read_loc_first(chip, reg_base, 0, len, 1);
>>   
>>   	if (!nandc->props->qpic_v2) {
>>   		write_reg_dma(nandc, NAND_DEV_CMD_VLD, 1, 0);
>>   		write_reg_dma(nandc, NAND_DEV_CMD1, 1, NAND_BAM_NEXT_SGL);
>>   	}
>>   
>> -	nandc->buf_count = len;
>> +	nandc->buf_count = 512;
>>   	memset(nandc->data_buffer, 0xff, nandc->buf_count);
>>   
>>   	config_nand_single_cw_page_read(chip, false, 0);
>> -- 
>> 2.34.1
>>
> 
