Message-Id: <20200513094416.347394093@linuxfoundation.org>
Date: Wed, 13 May 2020 11:45:02 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Vincent Chen <vincent.chen@...ive.com>,
Anup Patel <anup@...infault.org>,
Yash Shah <yash.shah@...ive.com>,
Palmer Dabbelt <palmerdabbelt@...gle.com>
Subject: [PATCH 5.4 66/90] riscv: set max_pfn to the PFN of the last page
From: Vincent Chen <vincent.chen@...ive.com>
commit c749bb2d554825e007cbc43b791f54e124dadfce upstream.
The current max_pfn equals zero. I found that, because of new sanity
checks in the v5.6 kernel, this prevents users from reading some page
information through /proc, such as /proc/kpagecount. The following
messages are displayed by the stress-ng test suite when running
"stress-ng --verbose --physpage 1 -t 1" on a HiFive Unleashed board
(an illustrative sketch of the failing access follows the log).
# stress-ng --verbose --physpage 1 -t 1
stress-ng: debug: [109] 4 processors online, 4 processors configured
stress-ng: info: [109] dispatching hogs: 1 physpage
stress-ng: debug: [109] cache allocate: reducing cache level from L3 (too high) to L0
stress-ng: debug: [109] get_cpu_cache: invalid cache_level: 0
stress-ng: info: [109] cache allocate: using built-in defaults as no suitable cache found
stress-ng: debug: [109] cache allocate: default cache size: 2048K
stress-ng: debug: [109] starting stressors
stress-ng: debug: [109] 1 stressor spawned
stress-ng: debug: [110] stress-ng-physpage: started [110] (instance 0)
stress-ng: error: [110] stress-ng-physpage: cannot read page count for address 0x3fd34de000 in /proc/kpagecount, errno=0 (Success)
stress-ng: error: [110] stress-ng-physpage: cannot read page count for address 0x3fd32db078 in /proc/kpagecount, errno=0 (Success)
...
stress-ng: error: [110] stress-ng-physpage: cannot read page count for address 0x3fd32db078 in /proc/kpagecount, errno=0 (Success)
stress-ng: debug: [110] stress-ng-physpage: exited [110] (instance 0)
stress-ng: debug: [109] process [110] terminated
stress-ng: info: [109] successful run completed in 1.00s
#
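
For illustration only (not part of this patch): /proc/kpagecount
exposes one 64-bit mapcount per page frame, indexed by PFN, so a
reader seeks to pfn * 8 and reads 8 bytes. With max_pfn == 0, the
kernel's bounds check rejects every PFN and the read comes back
short; read(2) itself does not fail, which is why stress-ng prints
"errno=0 (Success)". A minimal userspace sketch, assuming the PFN
has already been obtained (e.g. via /proc/self/pagemap):

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Read the mapcount of one physical page from /proc/kpagecount.
 * Returns 0 on success, -1 on a short or failed read. */
static int read_kpagecount(uint64_t pfn, uint64_t *count)
{
	int fd = open("/proc/kpagecount", O_RDONLY);
	if (fd < 0)
		return -1;
	/* One u64 entry per page frame, indexed by PFN. */
	ssize_t n = pread(fd, count, sizeof(*count),
			  (off_t)(pfn * sizeof(*count)));
	close(fd);
	/* When max_pfn is 0, n comes back 0 here: a short read
	 * with errno untouched, exactly what the log above shows. */
	return n == (ssize_t)sizeof(*count) ? 0 : -1;
}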
After applying this patch, the kernel passes the test.
# stress-ng --verbose --physpage 1 -t 1
stress-ng: debug: [104] 4 processors online, 4 processors configured
stress-ng: info: [104] dispatching hogs: 1 physpage
stress-ng: info: [104] cache allocate: using defaults, can't determine cache details from sysfs
stress-ng: debug: [104] cache allocate: default cache size: 2048K
stress-ng: debug: [104] starting stressors
stress-ng: debug: [104] 1 stressor spawned
stress-ng: debug: [105] stress-ng-physpage: started [105] (instance 0)
stress-ng: debug: [105] stress-ng-physpage: exited [105] (instance 0)
stress-ng: debug: [104] process [105] terminated
stress-ng: info: [104] successful run completed in 1.01s
#
Cc: stable@...r.kernel.org
Signed-off-by: Vincent Chen <vincent.chen@...ive.com>
Reviewed-by: Anup Patel <anup@...infault.org>
Reviewed-by: Yash Shah <yash.shah@...ive.com>
Tested-by: Yash Shah <yash.shah@...ive.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@...gle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/riscv/mm/init.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -116,7 +116,8 @@ void __init setup_bootmem(void)
memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
set_max_mapnr(PFN_DOWN(mem_size));
- max_low_pfn = PFN_DOWN(memblock_end_of_DRAM());
+ max_pfn = PFN_DOWN(memblock_end_of_DRAM());
+ max_low_pfn = max_pfn;
#ifdef CONFIG_BLK_DEV_INITRD
setup_initrd();
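
For reference, a sketch of what the new assignment computes (PFN_DOWN
comes from include/linux/pfn.h; the example address is illustrative,
not taken from the patch):

/* PFN_DOWN(addr) shifts a physical address down to a page frame
 * number: ((addr) >> PAGE_SHIFT), with PAGE_SHIFT == 12 for the
 * 4 KiB pages used here. */
/* e.g. DRAM ending at 0x280000000 gives
 * max_pfn = 0x280000000 >> 12 = 0x280000,
 * and max_low_pfn follows it, as in the hunk above. */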