Message-ID: <498b8759-1a70-d80f-3a4d-39042b4f608e@gmail.com>
Date: Wed, 21 Apr 2021 08:17:28 -0700
From: Florian Fainelli <f.fainelli@...il.com>
To: Quentin Perret <qperret@...gle.com>
Cc: Ard Biesheuvel <ardb@...nel.org>, Rob Herring <robh+dt@...nel.org>,
Alexandre TORGUE <alexandre.torgue@...s.st.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Sasha Levin <sashal@...nel.org>,
stable <stable@...r.kernel.org>, Arnd Bergmann <arnd@...db.de>,
"open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS"
<devicetree@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Nicolas Boichat <drinkcat@...omium.org>,
Stephen Boyd <swboyd@...omium.org>,
KarimAllah Ahmed <karahmed@...zon.de>,
Android Kernel Team <kernel-team@...roid.com>,
Architecture Mailman List <boot-architecture@...ts.linaro.org>,
Frank Rowand <frowand.list@...il.com>,
linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [v5.4 stable] arm: stm32: Regression observed on "no-map"
reserved memory region
On 4/21/2021 7:33 AM, Florian Fainelli wrote:
>
>
> On 4/21/2021 1:31 AM, Quentin Perret wrote:
>> On Tuesday 20 Apr 2021 at 09:33:56 (-0700), Florian Fainelli wrote:
>>> I do wonder as well; we have a 32MB "no-map" reserved memory region on
>>> our platforms located at 0xfe000000. Without the offending commit,
>>> /proc/iomem looks like this:
>>>
>>> 40000000-fdffefff : System RAM
>>> 40008000-40ffffff : Kernel code
>>> 41e00000-41ef1d77 : Kernel data
>>> 100000000-13fffffff : System RAM
>>>
>>> and with the patch applied, we have this:
>>>
>>> 40000000-fdffefff : System RAM
>>> 40008000-40ffffff : Kernel code
>>> 41e00000-41ef3db7 : Kernel data
>>> fdfff000-ffffffff : System RAM
>>> 100000000-13fffffff : System RAM
>>>
>>> so we can now see that the region 0xfe000000 - 0xffffffff has been
>>> lumped in with the preceding region, which is a 4KB mailbox between
>>> Linux and the secure monitor at 0xfdfff000. It seems like there is
>>>
>>> The memblock=debug output is also different (first without, then
>>> with the patch applied):
>>>
>>> [ 0.000000] MEMBLOCK configuration:
>>> [ 0.000000] memory size = 0xfdfff000 reserved size = 0x7ce4d20d
>>> [ 0.000000] memory.cnt = 0x2
>>> [ 0.000000] memory[0x0] [0x00000040000000-0x000000fdffefff], 0xbdfff000 bytes flags: 0x0
>>> [ 0.000000] memory[0x1] [0x00000100000000-0x0000013fffffff], 0x40000000 bytes flags: 0x0
>>> [ 0.000000] reserved.cnt = 0x6
>>> [ 0.000000] reserved[0x0] [0x00000040003000-0x0000004000e494], 0xb495 bytes flags: 0x0
>>> [ 0.000000] reserved[0x1] [0x00000040200000-0x00000041ef1d77], 0x1cf1d78 bytes flags: 0x0
>>> [ 0.000000] reserved[0x2] [0x00000045000000-0x000000450fffff], 0x100000 bytes flags: 0x0
>>> [ 0.000000] reserved[0x3] [0x00000047000000-0x0000004704ffff], 0x50000 bytes flags: 0x0
>>> [ 0.000000] reserved[0x4] [0x000000c2c00000-0x000000fdbfffff], 0x3b000000 bytes flags: 0x0
>>> [ 0.000000] reserved[0x5] [0x00000100000000-0x0000013fffffff], 0x40000000 bytes flags: 0x0
>>>
>>> [ 0.000000] MEMBLOCK configuration:
>>> [ 0.000000] memory size = 0x100000000 reserved size = 0x7ca4f24d
>>> [ 0.000000] memory.cnt = 0x3
>>> [ 0.000000] memory[0x0] [0x00000040000000-0x000000fdffefff], 0xbdfff000 bytes flags: 0x0
>>> [ 0.000000] memory[0x1] [0x000000fdfff000-0x000000ffffffff], 0x2001000 bytes flags: 0x4
>>> [ 0.000000] memory[0x2] [0x00000100000000-0x0000013fffffff], 0x40000000 bytes flags: 0x0
>>> [ 0.000000] reserved.cnt = 0x6
>>> [ 0.000000] reserved[0x0] [0x00000040003000-0x0000004000e494], 0xb495 bytes flags: 0x0
>>> [ 0.000000] reserved[0x1] [0x00000040200000-0x00000041ef3db7], 0x1cf3db8 bytes flags: 0x0
>>> [ 0.000000] reserved[0x2] [0x00000045000000-0x000000450fffff], 0x100000 bytes flags: 0x0
>>> [ 0.000000] reserved[0x3] [0x00000047000000-0x0000004704ffff], 0x50000 bytes flags: 0x0
>>> [ 0.000000] reserved[0x4] [0x000000c3000000-0x000000fdbfffff], 0x3ac00000 bytes flags: 0x0
>>> [ 0.000000] reserved[0x5] [0x00000100000000-0x0000013fffffff], 0x40000000 bytes flags: 0x0
>>>
>>> In the second case we can clearly see that the 32MB no-map region is
>>> now treated as usable RAM.
>>>
>>> Hope this helps.
>>>
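For reference, the difference between the two MEMBLOCK dumps above comes down to how drivers/of/fdt.c hands a "no-map" range to memblock. The snippet below is only a rough sketch of the relevant helper, reconstructed from memory rather than copied from the v5.4 tree, so the exact function body may differ:

/*
 * Sketch only (from memory, not verbatim): the no-map handling in
 * drivers/of/fdt.c before and after the change being discussed.
 */
#include <linux/memblock.h>
#include <linux/of_fdt.h>

static int __init early_init_dt_reserve_memory_arch(phys_addr_t base,
                                                    phys_addr_t size,
                                                    bool nomap)
{
        if (nomap) {
                /*
                 * Old behaviour: drop the range from memblock.memory
                 * entirely, so it never shows up as System RAM:
                 *
                 *      return memblock_remove(base, size);
                 *
                 * New behaviour: keep the range in memblock.memory but
                 * flag it MEMBLOCK_NOMAP (0x4), which matches the new
                 * memory[0x1] entry with "flags: 0x4" above and the
                 * extra "System RAM" line in /proc/iomem.
                 */
                return memblock_mark_nomap(base, size);
        }
        return memblock_reserve(base, size);
}

The totals in the dumps are consistent with that: the memory size grows from 0xfdfff000 to 0x100000000, i.e. by exactly the 0x2001000 bytes of the new flags-0x4 entry.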
>>>>
>>>> In any case, the mere fact that this causes a regression should be
>>>> sufficient justification to revert/withdraw it from v5.4, as I don't
>>>> see a reason why it was merged there in the first place. (It has no
>>>> fixes tag or cc:stable)
>>>
>>> Agreed; however, that means we still need to find out whether a more
>>> recent kernel is also broken. I should be able to tell you that a
>>> little later.
>>
>> FWIW I did test this on Qemu before posting. With 5.12-rc8 and a 1MiB
>> no-map region at 0x80000000, I have the following:
>>
>> 40000000-7fffffff : System RAM
>> 40210000-417fffff : Kernel code
>> 41800000-41daffff : reserved
>> 41db0000-4210ffff : Kernel data
>> 48000000-48008fff : reserved
>> 80000000-800fffff : reserved
>> 80100000-13fffffff : System RAM
>> fa000000-ffffffff : reserved
>> 13b000000-13f5fffff : reserved
>> 13f6de000-13f77dfff : reserved
>> 13f77e000-13f77efff : reserved
>> 13f77f000-13f7dafff : reserved
>> 13f7dd000-13f7defff : reserved
>> 13f7df000-13f7dffff : reserved
>> 13f7e0000-13f7f3fff : reserved
>> 13f7f4000-13f7fdfff : reserved
>> 13f7fe000-13fffffff : reserved
>>
>> If I remove the 'no-map' qualifier from DT, I get this:
>>
>> 40000000-13fffffff : System RAM
>> 40210000-417fffff : Kernel code
>> 41800000-41daffff : reserved
>> 41db0000-4210ffff : Kernel data
>> 48000000-48008fff : reserved
>> 80000000-800fffff : reserved
>> fa000000-ffffffff : reserved
>> 13b000000-13f5fffff : reserved
>> 13f6de000-13f77dfff : reserved
>> 13f77e000-13f77efff : reserved
>> 13f77f000-13f7dafff : reserved
>> 13f7dd000-13f7defff : reserved
>> 13f7df000-13f7dffff : reserved
>> 13f7e0000-13f7f3fff : reserved
>> 13f7f4000-13f7fdfff : reserved
>> 13f7fe000-13fffffff : reserved
>>
>> So this does seem to be working fine on my setup. I'll try again with
>> 5.4 to see if I can repro.
>>
>> Also, 8a5a75e5e9e5 ("of/fdt: Make sure no-map does not remove already
>> reserved regions") looks more likely to cause the issue observed here,
>> but that shouldn't be silent. I get the following error message in dmesg
>> if I place the no-map region on top of the kernel image:
>>
>> OF: fdt: Reserved memory: failed to reserve memory for node 'foobar@...10000': base 0x0000000040210000, size 1 MiB
>>
>> Is that triggering on your end?
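Again only as a rough sketch rather than the verbatim diff, the understanding here is that 8a5a75e5e9e5 makes that same fdt helper refuse a no-map request which overlaps an already-reserved range, and the caller in drivers/of/fdt.c then prints the "failed to reserve memory" message quoted above:

/*
 * Approximate shape of the check added by 8a5a75e5e9e5 ("of/fdt: Make
 * sure no-map does not remove already reserved regions"); not verbatim.
 */
#include <linux/errno.h>
#include <linux/memblock.h>

static int __init early_init_dt_reserve_memory_arch(phys_addr_t base,
                                                    phys_addr_t size,
                                                    bool nomap)
{
        if (nomap) {
                /*
                 * Refuse to mark an already-reserved range (e.g. the
                 * kernel image) as no-map.
                 */
                if (memblock_is_region_reserved(base, size))
                        return -EBUSY;
                return memblock_mark_nomap(base, size);
        }
        return memblock_reserve(base, size);
}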
>
> It is not, otherwise I would have noticed earlier. Can you try the
> same thing that happens on my platform, with a reserved region
> (without 'no-map') adjacent to a reserved region with 'no-map'? I will
> test kernels newer than 5.4 today to find out whether this is still a
> problem upstream. I can confirm that v4.9.259 also has this problem
> now.
5.10.31 works correctly and shows the following for my platform:
40000000-fdffefff : System RAM
40200000-40eaffff : Kernel code
40eb0000-4237ffff : reserved
42380000-425affff : Kernel data
45000000-450fffff : reserved
47000000-4704ffff : reserved
4761e000-47624fff : reserved
f8c00000-fdbfffff : reserved
fdfff000-ffffffff : reserved
100000000-13fffffff : System RAM
13b000000-13effffff : reserved
13f114000-13f173fff : reserved
13f174000-13f774fff : reserved
13f775000-13f7e8fff : reserved
13f7eb000-13f7ecfff : reserved
13f7ed000-13f7effff : reserved
13f7f0000-13fffffff : reserved
--
Florian