Message-ID: <feff8e48-c974-447a-99bb-21d5beda1dd1@linaro.org>
Date: Wed, 7 Feb 2024 09:30:27 +0000
From: Srinivas Kandagatla <srinivas.kandagatla@...aro.org>
To: Joy Chakraborty <joychakr@...gle.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Rob Herring <robh@...nel.org>, Nicolas Saenz Julienne <nsaenz@...nel.org>,
linux-kernel@...r.kernel.org, manugautam@...gle.com, stable@...r.kernel.org
Subject: Re: [PATCH v2] nvmem: rmem: Fix return value of rmem_read()
On 07/02/2024 06:35, Joy Chakraborty wrote:
> On Wed, Feb 7, 2024 at 4:06 AM Srinivas Kandagatla
> <srinivas.kandagatla@...aro.org> wrote:
>>
>>
>>
>> On 06/02/2024 04:24, Joy Chakraborty wrote:
>>> The reg_read() callback registered with the nvmem core expects an integer
>>> error as a return value, but rmem_read() returns the number of bytes read.
>>> As a result, error checks in the nvmem core fail even when they shouldn't.
>>>
>>> Return 0 on success, where the number of bytes read matches the number of
>>> bytes requested, and a negative error -EINVAL in all other cases.
>>>
>>> Fixes: 5a3fa75a4d9c ("nvmem: Add driver to expose reserved memory as nvmem")
>>> Cc: stable@...r.kernel.org
>>> Signed-off-by: Joy Chakraborty <joychakr@...gle.com>
>>> ---
>>> drivers/nvmem/rmem.c | 7 ++++++-
>>> 1 file changed, 6 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/nvmem/rmem.c b/drivers/nvmem/rmem.c
>>> index 752d0bf4445e..a74dfa279ff4 100644
>>> --- a/drivers/nvmem/rmem.c
>>> +++ b/drivers/nvmem/rmem.c
>>> @@ -46,7 +46,12 @@ static int rmem_read(void *context, unsigned int offset,
>>>
>>> memunmap(addr);
>>>
>>> - return count;
>>> + if (count != bytes) {
>>
>> How can this fail unless the value set in priv->mem->size is incorrect?
>>
>
> That should be correct since it would be fetched from the reserved
> memory definition in the device tree.
>
>> The only case I see this failing with short reads is when the offset
>> crosses the boundary of priv->mem->size.
>>
>>
>> Can you provide more details on the failure use case, maybe with actual
>> values of offset, bytes and priv->mem->size?
>>
>
> This could very well happen if a fixed layout defined for the reserved
> memory has a cell whose offset and size exceed the actual size of the
> reserved memory.
No, that should just be blocked by the core layer; at least that is what
bin_attr_nvmem_read() checks. If checks are missing in other places, then
that needs fixing.
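As a side note, the clamp-style bounds check a core layer applies before calling a provider's read hook can be sketched as below. This is a simplified user-space model in the spirit of what bin_attr_nvmem_read() does, not the kernel code itself; the function name and parameters are illustrative:

```c
#include <assert.h>
#include <stddef.h>

/* Trim a read request (pos, count) so it never crosses the end of a
 * device of dev_size bytes. A request starting at or past the end
 * yields 0, i.e. the core errs out before calling the provider. */
static size_t clamp_read(size_t dev_size, size_t pos, size_t count)
{
	if (pos >= dev_size)
		return 0;		/* nothing readable at this offset */
	if (count > dev_size - pos)
		count = dev_size - pos;	/* trim to the device boundary */
	return count;
}
```

With the cell from the example below (offset 0x13ff, size 0x2, in a 0x400-byte device), this clamp returns 0, so such an out-of-range read would never reach rmem_read().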
> For e.g., if the device tree node is as follows:
>
> reserved-memory {
>         #address-cells = <1>;
>         #size-cells = <1>;
>         ranges;
>
>         nvmem@...0 {
>                 compatible = "nvmem-rmem";
>                 reg = <0x1000 0x400>;
>                 no-map;
>
>                 nvmem-layout {
>                         compatible = "fixed-layout";
>                         #address-cells = <1>;
>                         #size-cells = <1>;
>
>                         calibration@...f {
>                                 reg = <0x13ff 0x2>;

this is out of range, the core should just err out.

--srini

>                         };
>                 };
>         };
> };
> If we try to read the cell "calibration", which crosses the boundary of
> the reserved memory, it will lead to a short read.
> Though one might argue that protection against such a cell definition
> should exist in the core's fixed-layout parsing itself, that check is
> not there today, and adding it would not be a fix.
>
> What I am trying to fix here is not exactly short reads but how the
> return value of rmem_read() is treated by the nvmem core, which
> currently treats any non-zero return from read as an error. Hence
> returning the number of bytes read leads to false failures whenever we
> try to read a cell.
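The return-convention fix under discussion can be modeled in user space as follows. This is a sketch of the convention only, with illustrative names and a plain buffer standing in for the remapped reserved memory; it is not the kernel implementation:

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Model of the patched rmem_read() return convention: copy at most
 * mem_size - offset bytes, then return 0 only when the full request
 * was satisfied, -EINVAL otherwise. The pre-patch code returned
 * 'count' here, which the nvmem core interpreted as an error because
 * reg_read() is expected to return 0 on success. */
static int rmem_read_model(const unsigned char *mem, size_t mem_size,
			   size_t offset, unsigned char *val, size_t bytes)
{
	size_t count;

	if (offset >= mem_size)
		return -EINVAL;

	count = mem_size - offset;
	if (count > bytes)
		count = bytes;
	memcpy(val, mem + offset, count);

	if (count != bytes)
		return -EINVAL;	/* short read */
	return 0;
}
```

A full in-bounds read returns 0, while a read that would cross the end of the memory region returns -EINVAL instead of a byte count.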
>
>
>>
>>> + dev_err(priv->dev, "Failed to read memory (%d)\n", count);
>>> + return -EINVAL;
>>> + }
>>> +
>>
>>> + return 0;
>>
>> thanks,
>> srini
>>
>>> }
>>>
>>> static int rmem_probe(struct platform_device *pdev)