Message-ID: <20221021193248.2he6amnj7knk4biu@core>
Date: Fri, 21 Oct 2022 21:32:48 +0200
From: Ondřej Jirman <megi@....cz>
To: Peter Geis <pgwipeout@...il.com>
Cc: Heiko Stuebner <heiko@...ech.de>,
linux-rockchip@...ts.infradead.org,
Rob Herring <robh+dt@...nel.org>,
Krzysztof Kozlowski <krzysztof.kozlowski+dt@...aro.org>,
Michael Riesch <michael.riesch@...fvision.net>,
Nicolas Frattaroli <frattaroli.nicolas@...il.com>,
Sascha Hauer <s.hauer@...gutronix.de>,
Frank Wunderlich <frank-w@...lic-files.de>,
Ezequiel Garcia <ezequiel@...guardiasur.com.ar>,
Yifeng Zhao <yifeng.zhao@...k-chips.com>,
Johan Jonker <jbx6244@...il.com>,
"open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS"
<devicetree@...r.kernel.org>,
"moderated list:ARM/Rockchip SoC support"
<linux-arm-kernel@...ts.infradead.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] arm64: dts: rockchip: rk356x: Fix PCIe register map
and ranges
On Fri, Oct 21, 2022 at 12:48:15PM -0400, Peter Geis wrote:
> On Fri, Oct 21, 2022 at 11:39 AM Ondřej Jirman <megi@....cz> wrote:
> >
> > On Fri, Oct 21, 2022 at 09:07:50AM -0400, Peter Geis wrote:
> > > Good Morning Heiko,
> > >
> > > Apologies for just getting to this, I'm still in the middle of moving
> > > and just got my lab set back up.
> > >
> > > I've tested this patch series and it leads to the same regression with
> > > NVMe drives. A loop of md5sum on two identical 4GB random files
> > > produces the following results:
> > > d11cf0caa541b72551ca22dc5bef2de0 test-rand.img
> > > fad97e91da8d4fd554c895cafa89809b test-rand2.img
> > > 2d56a7baa05c38535f4c19a2b371f90a test-rand.img
> > > 74e8e6f93d7c3dc3ad250e91176f5901 test-rand2.img
> > > 25cfcfecf4dd529e4e9fbbe2be482053 test-rand.img
> > > 74e8e6f93d7c3dc3ad250e91176f5901 test-rand2.img
> > > b9637505bf88ed725f6d03deb7065dab test-rand.img
> > > f7437e88d524ea92e097db51dce1c60d test-rand2.img
> > >
> > > Before this patch series:
> > > d11cf0caa541b72551ca22dc5bef2de0 test-rand.img
> > > d11cf0caa541b72551ca22dc5bef2de0 test-rand2.img
> > > d11cf0caa541b72551ca22dc5bef2de0 test-rand.img
> > > d11cf0caa541b72551ca22dc5bef2de0 test-rand2.img
> > > d11cf0caa541b72551ca22dc5bef2de0 test-rand.img
> > > d11cf0caa541b72551ca22dc5bef2de0 test-rand2.img
> > > d11cf0caa541b72551ca22dc5bef2de0 test-rand.img
> > > d11cf0caa541b72551ca22dc5bef2de0 test-rand2.img
> > >
> > > Though I do love where this patch is going and would like to see if it
> > > can be made to work, in its current form it does not.
> >
> > Thanks for the test. Can you please also test v1? Also please share lspci -vvv
> > of your nvme drive, so that we can see allocated address ranges, etc.
>
> Good catch, with your patch as is, the following issue crops up:
> Region 0: Memory at 300000000 (64-bit, non-prefetchable) [size=16K]
> Region 2: I/O ports at 1000 [disabled] [size=256]
>
> However, with a simple fix, we can get this:
> Region 0: Memory at 300000000 (64-bit, non-prefetchable) [virtual] [size=16K]
> Region 2: I/O ports at 1000 [virtual] [size=256]
>
> and with it a working NVMe drive.
>
> Change the following range:
> 0x02000000 0x0 0x40000000 0x3 0x00000000 0x0 0x40000000>;
> to
> 0x02000000 0x0 0x00000000 0x3 0x00000000 0x0 0x40000000>;
I've already tried this, but it unfortunately breaks the wifi cards (those only
use the I/O space). Maybe because the I/O and memory address spaces now overlap,
I don't know. That's why I used the 1 GiB offset for the memory space.
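
For illustration, here is roughly how the two windows would collide with that
change. Only the memory-window cells come from the ranges quoted above; the node
label and the I/O window's addresses/size are my assumptions, not values from
this thread:

```dts
/* Hypothetical sketch -- node label and I/O window values are assumed. */
&pcie3x2 {
        ranges = /* I/O window at PCI address 0x0 (assumed values) */
                 <0x01000000 0x0 0x00000000 0x3 0x3ef00000 0x0 0x00100000>,
                 /* Memory window moved to PCI address 0x0 as suggested:
                  * both windows now start at PCI address 0 and overlap. */
                 <0x02000000 0x0 0x00000000 0x3 0x00000000 0x0 0x40000000>;

        /* Keeping the 1 GiB offset places the memory window above the
         * I/O window in PCI address space, avoiding the overlap:
         *       <0x02000000 0x0 0x40000000 0x3 0x00000000 0x0 0x40000000>;
         */
};
```

The first cell of each entry encodes the space type (0x01000000 = I/O,
0x02000000 = 32-bit memory); the overlap is in the PCI (child) addresses in
cells two and three, not in the CPU addresses.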
kind regards,
o.
> I still haven't tested this with other cards yet, and another patch
> that does similar work I've tested successfully as well with NVMe
> drives. I'll have to get back to you on the results of greater
> testing.
>
> Very Respectfully,
> Peter Geis
>
> >
> > kind regards,
> > o.
> >
> > > Very Respectfully,
> > > Peter Geis