Date:   Fri, 9 Sep 2022 16:36:59 -0700
From:   Julius Werner <jwerner@...omium.org>
To:     Krzysztof Kozlowski <krzysztof.kozlowski@...aro.org>
Cc:     Julius Werner <jwerner@...omium.org>,
        Rob Herring <robh+dt@...nel.org>,
        Dmitry Osipenko <digetx@...il.com>,
        Doug Anderson <dianders@...omium.org>,
        Jian-Jia Su <jjsu@...gle.com>,
        "open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS" 
        <devicetree@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 4/4] dt-bindings: memory: Add jedec,lpddrX-channel binding

> > Well, stacking in parallel just means you have more of them? In the
> > original example, you have a single LPDDR chip with two ranks, one
> > 4Gbit rank and one 2Gbit rank. That chip is directly hooked up to the
> > LPDDR controller and that's the only chip you have, so you have 4+2 =
> > 6Gbit total memory in the system.
> >
> > In your next example, the LPDDR controller has a 64 bit wide channel,
> > but you're still using that same 6Gbit LPDDR chip that only has 32 DQ
> > pins. The only way to fill out that 64 bit channel with this kind of
> > chip is to have two of them in parallel (one connected to DQ[0:31] and
> > one connected to DQ[32:63]). So we infer from the mismatch in io-width
> > that we have two chips. Each chip still has 6Gbit of memory, so the
> > total system would have 12Gbit.
>
> Two chips so more device nodes? Since there are no DTSes with it, please
> provide an additional example in the bindings.

No, there isn't a separate node for each chip in this case. There's
still only one node (per rank), but if the io-width of the rank node
is smaller than the io-width of the channel node, that implicitly
indicates that there are in fact multiple chips of the same type wired
in parallel on that channel. I tried to explain this in the
description for the channel's io-width property.

I chose to model it this way because having separate nodes for each
chip would be redundant since all their properties have to be equal
anyway, and because it more closely resembles the way this looks to
the firmware and the DDR controller. The DDR controller doesn't
actually "see" that there are multiple separate chips and cannot
enumerate them as individual entities, because only the DQ pins are
split among the different chips -- all other pins like chip select and
column address are shorted together between all the parallel chips,
and mode register values are only returned through the lowest DQ pins
(DQ[7:0]). So it's impossible for the DDR controller to read mode
register values from the other chips; it can only read them from the
first chip and must trust that all the other chips are the exact
same part number, because that's the only valid way to wire this (and
when the controller writes timing configuration to the mode registers,
the same value is written out to all chips at once via the shorted
column address pins).

My example does contain this case already, in lpddr-channel0, rank@0:
there's only one rank node with density 8Gbits, but since that node
has io-width 16 and the channel has io-width 32, it is implied that
there are actually two single-rank chips wired in parallel on this
channel, and since each of them has 8Gbits of memory the channel has
16Gbits in total.
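
To spell that out, the relevant part of the example looks roughly like
the sketch below (illustrative only: the compatible strings, the
density unit and the exact property spellings may not match the
binding example verbatim, so please refer to the example in the patch
itself):

    lpddr-channel0 {
        compatible = "jedec,lpddr3-channel";
        io-width = <32>;        /* channel is 32 DQ lines wide */
        #address-cells = <1>;
        #size-cells = <0>;

        rank@0 {
            compatible = "lpddr3-ff,0100", "jedec,lpddr3";
            reg = <0>;
            density = <8192>;   /* 8 Gbit per chip (in Mbit) */
            io-width = <16>;    /* each chip only drives 16 DQ lines */
        };
    };

Since the rank's io-width (16) is half the channel's io-width (32),
software infers 32 / 16 = 2 identical chips wired in parallel, so the
channel provides 2 x 8 Gbit = 16 Gbit in total.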
