Message-Id: <20191122055544.3299-121-sashal@kernel.org>
Date: Fri, 22 Nov 2019 00:55:40 -0500
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: James Morse <james.morse@....com>, Borislav Petkov <bp@...e.de>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
Sasha Levin <sashal@...nel.org>, linux-acpi@...r.kernel.org
Subject: [PATCH AUTOSEL 4.14 123/127] ACPI / APEI: Switch estatus pool to use vmalloc memory
From: James Morse <james.morse@....com>
[ Upstream commit 0ac234be1a9497498e57d958f4251f5257b116b4 ]
The ghes code is careful to parse and round firmware's advertised
memory requirements for CPER records, up to a maximum of 64K.
However, when ghes_estatus_pool_expand() does its work, it splits
the requested size into PAGE_SIZE granules.
This means that if firmware generates 5K of CPER records, and correctly
describes this in the table, __process_error() will silently fail, as it
is unable to allocate more than PAGE_SIZE.
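
[ For context, not part of the patch: the limit is a property of gen_pool
itself. gen_pool_alloc() never spans chunk boundaries, so a pool fed one
page at a time can never hand out more than PAGE_SIZE in a single
allocation, regardless of how much total memory it holds. A minimal,
hypothetical kernel-C sketch of that behaviour; the demo function and its
sizes are illustrative only:

/*
 * Hypothetical demo: a gen_pool built from discontiguous single pages
 * cannot satisfy one allocation larger than PAGE_SIZE, because
 * gen_pool_alloc() never spans chunks.
 */
#include <linux/genalloc.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/printk.h>

static int pool_granularity_demo(void)
{
	struct gen_pool *pool;
	unsigned long page, big;
	int i, ret;

	pool = gen_pool_create(3, -1);	/* 8-byte min alloc order, like ghes */
	if (!pool)
		return -ENOMEM;

	/* Feed the pool 16K as four separate PAGE_SIZE chunks. */
	for (i = 0; i < 4; i++) {
		page = __get_free_page(GFP_KERNEL);
		if (!page)
			return -ENOMEM;	/* demo only: skips cleanup */
		ret = gen_pool_add(pool, page, PAGE_SIZE, -1);
		if (ret)
			return ret;
	}

	/* 5K of CPER records: fails despite 16K being free in total. */
	big = gen_pool_alloc(pool, 5 * 1024);
	pr_info("5K allocation from page-sized chunks: %s\n",
		big ? "succeeded" : "silently failed");
	if (big)
		gen_pool_free(pool, big, 5 * 1024);

	return 0;	/* demo only: pool and pages intentionally not freed */
}
]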
Switch the estatus pool to vmalloc() memory. On x86, vmalloc() memory
may fault and be fixed up by vmalloc_fault(). To prevent this, call
vmalloc_sync_all() before an NMI handler could discover the memory.
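
[ The ordering is the point here. A minimal sketch of the sync-before-publish
pattern the patch follows; the helper name is illustrative, not from the
patch:

/*
 * Illustrative helper (not from the patch): make a vmalloc()ed chunk
 * visible in every pgd before the NMI-context allocator can find it.
 */
#include <linux/genalloc.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

static int add_nmi_visible_chunk(struct gen_pool *pool, size_t len)
{
	void *chunk = vmalloc(PAGE_ALIGN(len));

	if (!chunk)
		return -ENOMEM;

	/* Sync the new mappings into all page tables first... */
	vmalloc_sync_all();

	/* ...and only then publish the chunk to NMI-context callers. */
	return gen_pool_add(pool, (unsigned long)chunk, PAGE_ALIGN(len), -1);
}
]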
Signed-off-by: James Morse <james.morse@....com>
Reviewed-by: Borislav Petkov <bp@...e.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
drivers/acpi/apei/ghes.c | 30 +++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 9c31c7cd5cb5e..cd6fae6ad4c2a 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -170,40 +170,40 @@ static int ghes_estatus_pool_init(void)
return 0;
}
-static void ghes_estatus_pool_free_chunk_page(struct gen_pool *pool,
+static void ghes_estatus_pool_free_chunk(struct gen_pool *pool,
struct gen_pool_chunk *chunk,
void *data)
{
- free_page(chunk->start_addr);
+ vfree((void *)chunk->start_addr);
}
static void ghes_estatus_pool_exit(void)
{
gen_pool_for_each_chunk(ghes_estatus_pool,
- ghes_estatus_pool_free_chunk_page, NULL);
+ ghes_estatus_pool_free_chunk, NULL);
gen_pool_destroy(ghes_estatus_pool);
}
static int ghes_estatus_pool_expand(unsigned long len)
{
- unsigned long i, pages, size, addr;
- int ret;
+ unsigned long size, addr;
ghes_estatus_pool_size_request += PAGE_ALIGN(len);
size = gen_pool_size(ghes_estatus_pool);
if (size >= ghes_estatus_pool_size_request)
return 0;
- pages = (ghes_estatus_pool_size_request - size) / PAGE_SIZE;
- for (i = 0; i < pages; i++) {
- addr = __get_free_page(GFP_KERNEL);
- if (!addr)
- return -ENOMEM;
- ret = gen_pool_add(ghes_estatus_pool, addr, PAGE_SIZE, -1);
- if (ret)
- return ret;
- }
- return 0;
+ addr = (unsigned long)vmalloc(PAGE_ALIGN(len));
+ if (!addr)
+ return -ENOMEM;
+
+ /*
+ * New allocation must be visible in all pgd before it can be found by
+ * an NMI allocating from the pool.
+ */
+ vmalloc_sync_all();
+
+ return gen_pool_add(ghes_estatus_pool, addr, PAGE_ALIGN(len), -1);
}
static int map_gen_v2(struct ghes *ghes)
--
2.20.1