Date:   Fri, 5 Mar 2021 16:07:41 +0000
From:   Srinivas Kandagatla <srinivas.kandagatla@...aro.org>
To:     Doug Anderson <dianders@...omium.org>
Cc:     Rob Clark <robdclark@...il.com>,
        Jordan Crouse <jcrouse@...eaurora.org>,
        Niklas Cassel <niklas.cassel@...aro.org>,
        Ulf Hansson <ulf.hansson@...aro.org>,
        Bjorn Andersson <bjorn.andersson@...aro.org>,
        Stephen Boyd <swboyd@...omium.org>,
        linux-arm-msm <linux-arm-msm@...r.kernel.org>,
        Akhil P Oommen <akhilpo@...eaurora.org>,
        Jorge Ramirez-Ortiz <jorge.ramirez-ortiz@...aro.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/3] nvmem: core: Allow nvmem_cell_read_u16/32/64 to read
 smaller cells



On 05/03/2021 14:58, Doug Anderson wrote:
> Hi,
> 
> On Fri, Mar 5, 2021 at 2:27 AM Srinivas Kandagatla
> <srinivas.kandagatla@...aro.org> wrote:
>>
>>
>>
>> On 27/02/2021 00:26, Douglas Anderson wrote:
>>> The current way that cell "length" is specified for nvmem cells is a
>>> little fuzzy. For instance, let's look at the gpu speed bin currently
>>> in sc7180.dtsi:
>>>
>>>     gpu_speed_bin: gpu_speed_bin@1d2 {
>>>       reg = <0x1d2 0x2>;
>>>       bits = <5 8>;
>>>     };
>>>
>>> This is an 8-bit value (as specified by the "bits" field). However,
>>> it has a "length" of 2 (bytes), presumably because the value spans
>>> across two bytes.
>>>
>>> When querying this value right now, it's hard for a client to know if
>>> they should be calling nvmem_cell_read_u16() or nvmem_cell_read_u8().
>>> Today they must call nvmem_cell_read_u16() because the "length" of the
>>> cell was 2 (bytes). However, if a later SoC ever came around and
>>> didn't span across 2 bytes it would be unclear.  If a later SoC
>>> specified, for instance:
>>>
>>>     gpu_speed_bin: gpu_speed_bin@100 {
>>>       reg = <0x100 0x1>;
>>>       bits = <0 8>;
>>>     };
>>>
>>> ...then the caller would need to change to try calling
>>> nvmem_cell_read_u8() because the u16 version would fail.
>>>
>>
>> If the consumer driver expects the cell size to vary from a single
>> byte to many bytes
> 
> I guess in my mind that's outside of the scope of what the consumer
> should need to know.  The consumer wants a number and they know it's
> stored in nvmem.  They shouldn't need to consider the bit packing
> within nvmem.  Imagine that you have a structure definition:
> 
> struct example {
>    int num1:6;
>    int num2:6;
>    int num3:6;
>    int num4:6;
> };
> struct example e;
> 
> What I think you're saying is that you should need a different syntax
> for accessing "e.num1" and "e.num4" (because they happen not to span
> bytes) compared to accessing "e.num2" and "e.num3". As it is, C
> abstracts this out and allows you not to care. You can just do:
> 
> e.num1 + e.num2 + e.num3 + e.num4
> 
> ...and it works fine even though some of those span bytes and some
> don't.  I want the same thing.
> 
> 
>> , then why not just call nvmem_cell_read(), which also
>> returns how many bytes it has read!
> 
> See my response to patch #1. This requires open-coding a small but
> still non-trivial bit of code for all consumers. It should be in the
> core.

I agree that this should be in the core!
But changing the existing behavior of the APIs is what I am against!
For example, if we are reading a fixed-size UUID or some cell like that,
we would want to validate its size; allowing flexible sizes would not
catch such errors.
Also, if the size is variable, which API should the consumer use, the
u32 or the u16 variant? This adds more confusion!
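
(For anyone reading along: the open-coding being discussed would look
roughly like the sketch below, against the existing nvmem_cell_get() /
nvmem_cell_read() consumer API. The helper name and the zero-extension
are purely illustrative, not an existing kernel function.)

    #include <linux/device.h>
    #include <linux/err.h>
    #include <linux/errno.h>
    #include <linux/nvmem-consumer.h>
    #include <linux/slab.h>
    #include <linux/string.h>
    #include <linux/types.h>

    /* Illustrative only: read a 1..4 byte cell and zero-extend it to u32. */
    static int example_read_cell_as_u32(struct device *dev, const char *id,
                                        u32 *val)
    {
            struct nvmem_cell *cell;
            size_t len;
            u8 *data;

            cell = nvmem_cell_get(dev, id);
            if (IS_ERR(cell))
                    return PTR_ERR(cell);

            data = nvmem_cell_read(cell, &len);
            nvmem_cell_put(cell);
            if (IS_ERR(data))
                    return PTR_ERR(data);

            if (len == 0 || len > sizeof(*val)) {
                    kfree(data);
                    return -EINVAL;
            }

            *val = 0;
            memcpy(val, data, len);   /* zero-extend; assumes LE cell data */
            kfree(data);

            return 0;
    }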

> 
> 
>>> Let's solve this by allowing clients to read a "larger" value. We'll
>>> just fill it in with 0.
>>
>> That is misleading the consumer! If the consumer is expecting a u16 or
>> a u32, the cell size should match that size!!
> 
> If you think it's confusing to change the behavior of the existing
> functions, would you be opposed to me adding a new function like
> nvmem_cell_read_le_u32_or_smaller() (or provide me a better name) that
> would be flexible like this?

This should be perfectly okay!
Maybe something like:

int nvmem_read_variable_cell(struct device *dev, const char *cell_id,
                             void *buf, size_t sz_min, size_t sz_max);

It should return the number of bytes it read and fail if the cell size
is less than sz_min!
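
Just to illustrate the proposed semantics (this function does not exist
yet, and the cell name here is only an example), a consumer call might
look like:

    u8 speedbin[2];
    int len;

    /* Accept a 1- or 2-byte cell; anything smaller than sz_min fails. */
    len = nvmem_read_variable_cell(dev, "speed_bin", speedbin,
                                   1, sizeof(speedbin));
    if (len < 0)
            return len;

    /* len is now 1 or 2 depending on how the cell is described in DT. */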

--srini
> 
> -Doug
> 
