Date: Sun, 2 Jun 2024 08:25:50 -0700
From: Guenter Roeck <linux@...ck-us.net>
To: Thomas Weißschuh <linux@...ssschuh.net>,
 Armin Wolf <W_Armin@....de>
Cc: linux-hwmon@...r.kernel.org, devicetree@...r.kernel.org,
 Rob Herring <robh@...nel.org>, Krzysztof Kozlowski <krzk+dt@...nel.org>,
 Conor Dooley <conor+dt@...nel.org>, linux-kernel@...r.kernel.org,
 René Rebe <rene@...ctcode.de>,
 Wolfram Sang <wsa+renesas@...g-engineering.com>
Subject: Re: [PATCH RFT v3 4/4] hwmon: (spd5118) Add support for reading SPD
 data

On 6/2/24 00:55, Thomas Weißschuh wrote:
> On 2024-06-01 21:23:24+0000, Armin Wolf wrote:
>> Am 01.06.24 um 16:08 schrieb Thomas Weißschuh:
>>
>>> On 2024-06-01 06:48:29+0000, Guenter Roeck wrote:
>>>
>>> <snip>
>>>
>>>> Makes sense. Another question:
>>>>
>>>> This:
>>>>
>>>> +        struct nvmem_config nvmem_config = {
>>>> +               .type = NVMEM_TYPE_EEPROM,
>>>> +               .name = dev_name(dev),
>>>> +               .id = NVMEM_DEVID_AUTO,
>>>>
>>>> results in:
>>>>
>>>> $ ls /sys/bus/nvmem/devices
>>>> 0-00501  0-00512  0-00523  0-00534  cmos_nvram0
>>>> ^^^^^^^  ^^^^^^^  ^^^^^^^  ^^^^^^^
>>>>
>>>> which really doesn't look good. My current plan is to go with NVMEM_DEVID_NONE,
>>>> which results in
>>>>
>>>> $ ls /sys/bus/nvmem/devices
>>>> 0-0050	0-0051	0-0052	0-0053	cmos_nvram0
>>>>
>>>> We could also use fixed strings, but "spd" results in "spd[1-4]" which
>>>> I think would be a bit misleading since the DDR3/4 SPD data format is
>>>> different, and "spd5118" would result in "spd5118[1-4]" which again would
>>>> look odd. Any suggestions?
>>> In descending order of personal preference:
>>>
>>> * spd-ddr5-[0-3] (.id = client->address - 0x50)
>>
>> Hi,
>>
>> This will break as soon as more than 8 DDR5 DIMMs are installed.
> 
> i2c_register_spd() only handles 8 DIMMs, too.
> JESD 300-5B.01 (section 2.6.5) also defines I2C addresses for 8 DIMMs only.
> 
> Outside of that range we could fall back to something else.
> 
>>> * spd-ddr5-[0-3] (NVMEM_DEVID_AUTO)
>>> * Same with only "ddr5-"
>>> * spd5118-[0-3]
>>> * Your proposal from above
>>> * nvmem[0-3] (default handling)
>>> * 0-0050-[0-3]
>>>
>>> Also can't a user of the eeprom already figure out which kind of module
>>> it is by looking at the eeprom contents?
>>> The first few bytes used for that seem to be compatible between at least
>>> DDR4 and DDR5.
>>>
>>> So using plain spd[1-4] could be enough.
>>
>> This could cause problems when DDR6 arrives.
>> Personally I would prefer the spd5118-X (NVMEM_DEVID_AUTO) format.
> 
> I have the impression that the eeprom layouts are designed to be
> forward and backward compatible.
> 
> If a non-DDR5-aware parser reads the contents of a DDR5 eeprom it will
> fail the CRC check, so there can be no accidental misinterpretation.
> (Because the CRC'ed area is larger and the CRC is at another location)
> 
> On the other hand, the first bytes of DDR4 and DDR5 are compatible, so
> even an unaware parser can recognize that an SPD eeprom is being read and
> which DIMM type and specification revision it is.
> 
> This seems intentional and therefore should also hold true for DDR5 to DDR6.
>

Looking into how this is handled by other drivers:

- at24 generates directories named {bus}-005{0-7}X, where X comes from NVMEM_DEVID_AUTO.
   Alternatively, it uses the 'label' devicetree property. In that case, the name
   will be <label>X, with X again determined by NVMEM_DEVID_AUTO.
   It does that to prevent duplicate file names due to duplicate labels
   (sketched below).

- ee1004 does not use the nvmem subsystem, and thus there will be no
   entries in /sys/bus/nvmem/devices/.
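
For reference, the at24 behavior described above boils down to roughly
the following (paraphrased sketch, not verbatim at24.c code; error
handling omitted):

	struct nvmem_config nvmem_config = { };

	if (device_property_present(dev, "label"))
		/* 'label' devicetree property; duplicates are possible */
		device_property_read_string(dev, "label",
					    &nvmem_config.name);
	else
		nvmem_config.name = dev_name(dev);	/* e.g. "0-0050" */
	/* append a global counter to keep names unique */
	nvmem_config.id = NVMEM_DEVID_AUTO;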

NVMEM_DEVID_AUTO counts up from 0 and affects every nvmem device. That means
the assigned ID is not fixed but simply reflects the n-th device using it,
in order of registration. Effectively, any fixed name plus NVMEM_DEVID_AUTO
cannot be associated with the originating device, and there is no guarantee
that it is static (meaning it could change from boot to boot). spd5118-X
would not mean the Xth DIMM, it would be the Xth device registering with
the nvmem subsystem.
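
The underlying naming logic in the nvmem core is roughly the following
(paraphrased sketch of nvmem_register() in drivers/nvmem/core.c, details
omitted):

	/* nvmem->id comes from a global IDA shared by all nvmem devices */
	switch (config->id) {
	case NVMEM_DEVID_NONE:
		dev_set_name(&nvmem->dev, "%s", config->name);
		break;
	case NVMEM_DEVID_AUTO:
		dev_set_name(&nvmem->dev, "%s%d", config->name, nvmem->id);
		break;
	default:	/* driver-provided fixed id */
		dev_set_name(&nvmem->dev, "%s%d", config->name, config->id);
		break;
	}

That is where the "0-00501" names above come from: "0-0050" plus the
global counter.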

At the same time, something like spd5118-X, with X derived only from the
i2c address, would not work on large systems because there could be DIMMs
on multiple I2C buses. X would have to be derived from the bus number plus
the I2C address, such as '(bus << 3) | (address & 7)'. Even that would
not be static, since the bus number could change from boot to boot
depending on the i2c bus instantiation order. It would also require extra
code, since the name would have to be generated in the driver.
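
In other words, something like this (hypothetical sketch, only to
illustrate the index derivation discussed above; not stable if adapter
numbers change across boots):

	/* hypothetical: fold i2c bus number and address into one index */
	nvmem_config.name = "spd5118-";	/* hypothetical fixed prefix */
	nvmem_config.id = (client->adapter->nr << 3) | (client->addr & 7);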

Given that at24.c doesn't really care and ee1004.c doesn't even use
the nvmem subsystem, I think it doesn't really matter how the nvmem device
subdirectories are named as long as they are unique. decode-dimms won't
use them, and I guess no one else really cares because, after all, ee1004
doesn't support /sys/bus/nvmem/ in the first place and at24 uses the
odd/unusual form of {bus}-005{0-7}X.

Given all that, I'll just stick with the simple dev_name(dev) and
NVMEM_DEVID_NONE. That creates a clear association with the I2C bus and
address without requiring extra code or a lengthy explanation of how the
index is generated, while at the same time guaranteeing uniqueness. I
_could_ add code to use the devicetree label if provided, but that would
require the same mechanism as used by at24, i.e., we'd end up with
{bus}-005{0-7}X default names (after all, someone could provide duplicate
labels or labels such as "0-0050"). If that is deemed useful, it could be
added later.
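
Concretely, that means something like this (sketch of the plan above):

	struct nvmem_config nvmem_config = {
		.type = NVMEM_TYPE_EEPROM,
		.name = dev_name(dev),	/* "<bus>-00<addr>", e.g. "0-0050" */
		.id = NVMEM_DEVID_NONE,	/* use the name as-is, no suffix */
	};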

Thanks,
Guenter

