Message-ID: <CAOSNQF2_qy51Z01DKO1MB-d+K4EaXGDkof1T4pHNO10U_Hm0WQ@mail.gmail.com>
Date: Tue, 6 Feb 2024 17:22:15 +0530
From: Joy Chakraborty <joychakr@...gle.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: Srinivas Kandagatla <srinivas.kandagatla@...aro.org>, Rob Herring <robh@...nel.org>, 
	Nicolas Saenz Julienne <nsaenz@...nel.org>, linux-kernel@...r.kernel.org, manugautam@...gle.com, 
	stable@...r.kernel.org
Subject: Re: [PATCH v2] nvmem: rmem: Fix return value of rmem_read()

On Tue, Feb 6, 2024 at 4:27 PM Greg Kroah-Hartman
<gregkh@...uxfoundation.org> wrote:
>
> On Tue, Feb 06, 2024 at 04:01:02PM +0530, Joy Chakraborty wrote:
> > On Tue, Feb 6, 2024 at 3:00 PM Greg Kroah-Hartman
> > <gregkh@...uxfoundation.org> wrote:
> > >
> > > On Tue, Feb 06, 2024 at 04:24:08AM +0000, Joy Chakraborty wrote:
> > > > The reg_read() callback registered with the nvmem core is expected to
> > > > return an integer error code, but rmem_read() returns the number of
> > > > bytes read; as a result, error checks in the nvmem core fail even when
> > > > they shouldn't.
> > > >
> > > > Return 0 on success, where the number of bytes read matches the number
> > > > of bytes requested, and a negative error -EINVAL in all other cases.
> > > >
> > > > Fixes: 5a3fa75a4d9c ("nvmem: Add driver to expose reserved memory as nvmem")
> > > > Cc: stable@...r.kernel.org
> > > > Signed-off-by: Joy Chakraborty <joychakr@...gle.com>
> > > > ---
> > > >  drivers/nvmem/rmem.c | 7 ++++++-
> > > >  1 file changed, 6 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/drivers/nvmem/rmem.c b/drivers/nvmem/rmem.c
> > > > index 752d0bf4445e..a74dfa279ff4 100644
> > > > --- a/drivers/nvmem/rmem.c
> > > > +++ b/drivers/nvmem/rmem.c
> > > > @@ -46,7 +46,12 @@ static int rmem_read(void *context, unsigned int offset,
> > > >
> > > >       memunmap(addr);
> > > >
> > > > -     return count;
> > > > +     if (count != bytes) {
> > > > +             dev_err(priv->dev, "Failed read memory (%d)\n", count);
> > > > +             return -EINVAL;
> > >
> > > Why is a "short read" somehow illegal here?  What internal changes need
> > > to be made now that this has changed?
> >
> > In my opinion a "short read" should be illegal for cases where the
> > nvmem core is unable to read enough data to fill an nvmem cell,
> > since the data returned might then be truncated.
>
> But that's kind of against what a read() call normally expects.

That is fair; maybe the size check should live at the nvmem core
layer to catch any truncation, but the size actually read is not
passed from the provider to the core layer.
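
Purely as an illustration (this is not the current API), if reg_read()
were extended to report how many bytes it actually read, the core
could catch truncation itself with something like:
"
/* hypothetical sketch: assumes reg_read() gained an out-parameter
 * reporting the number of bytes actually read */
size_t read_bytes = 0;
int rc;

rc = nvmem->reg_read(nvmem->priv, offset, val, bytes, &read_bytes);
if (rc)
	return rc;
if (read_bytes != bytes)
	return -EIO;	/* truncated read caught in the core */
"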

>
> > No internal changes should be made, since the registered reg_read()
> > is called from __nvmem_reg_read(), which eventually passes the error
> > code on to nvmem_reg_read(), whose return code is already checked and
> > passed to nvmem consumers.
> > Currently the rmem driver incorrectly returns a positive value for
> > success.
>
> So this is an internal api issue and not a general issue?  Unwinding the
> read callbacks here is hard.

Yes, this is an internal API issue with how the return value of the
reg_read() function pointer passed to nvmem_register() is handled.
The function prototype in nvmem-provider.h does not specify how the
return value is interpreted by the nvmem core:
"
typedef int (*nvmem_reg_read_t)(void *priv, unsigned int offset,
				void *val, size_t bytes);
"
Currently the nvmem core (nvmem/core.c) always checks the return value
against 0 for success, and all nvmem providers adhere to that, i.e.
they return 0 on success. The size actually read from the provider is
assumed to be equal to the size requested by the core, since the
provider does not relay that information.
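
For reference, the core-side dispatch reduces to roughly the following
(paraphrased from drivers/nvmem/core.c), so any non-zero return value
is treated as an error:
"
static int __nvmem_reg_read(struct nvmem_device *nvmem, unsigned int offset,
			    void *val, size_t bytes)
{
	if (nvmem->reg_read)
		return nvmem->reg_read(nvmem->priv, offset, val, bytes);

	return -EINVAL;
}
"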

>
> Also, looking at the code, how can this ever be a short read?  You
> are using memory_read_from_buffer(), which, unless the values passed
> into it are incorrect, will always return the expected read amount.
>

Correct, we only get an error if memory_read_from_buffer() returns a
negative value.
So, to work with the current nvmem core implementation, I can return
the value as-is when negative and 0 otherwise.
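
Something along these lines (a sketch of the rmem_read() tail, with
local names taken from the existing driver):
"
count = memory_read_from_buffer(val, bytes, &off, addr, available);

memunmap(addr);

/* memory_read_from_buffer() only fails with a negative errno;
 * with valid arguments it reads the full requested length here */
if (count < 0) {
	dev_err(priv->dev, "Failed to read memory (%d)\n", count);
	return count;
}

return 0;
"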

> > > And what will userspace do with this error message in the kernel log?
> >
> > Userspace currently does not see this error for nvmem device/eeprom
> > reads, due to the following code in bin_attr_nvmem_read() at
> > nvmem/core.c:
> > "
> >     rc = nvmem_reg_read(nvmem, pos, buf, count);
> >
> >     if (rc)
> >         return rc;
> >
> >     return count;
> > "
> > since that path is expected to return the number of bytes read.
> >
> > Userspace will, however, see a false error on nvmem cell reads via
> > nvmem_cell_attr_read() in the current code, which would be fixed by
> > returning 0 on success.
>
> So maybe fix this all up to allow the read to return the actual amount
> read?  That feels more "correct" to me.
>

If I change the behavior of the nvmem_reg_read_t callback to return a
negative errno on error and the number of bytes actually read on
success, then, besides the core driver, I would also have to change
all the nvmem provider drivers.
Is it okay to do so?
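
As a rough illustration of that option, every provider's reg_read()
would start returning the byte count on success, and the core could
then translate and catch short reads centrally, e.g.:
"
/* illustrative only: if reg_read() returned bytes-read on success */
rc = nvmem->reg_read(nvmem->priv, offset, val, bytes);
if (rc < 0)
	return rc;		/* negative errno from the provider */
if ((size_t)rc != bytes)
	return -EIO;		/* short read caught centrally */

return 0;
"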

> thanks,
>
> greg k-h

Thanks
Joy
