Date:   Mon, 27 Feb 2017 21:57:45 +0000
From:   Matt Fleming <>
To:     Mel Gorman <>
Cc:     Dave Young <>,
        Nicolai Stange <>,
        Ard Biesheuvel <>,
        Thomas Gleixner <>,
        Ingo Molnar <>,
        "H. Peter Anvin" <>,,,,
        Mika Penttilä <>,
        Andrew Morton <>,
        Vlastimil Babka <>, Michal Hocko <>
Subject: Re: [PATCH v2 2/2] efi: efi_mem_reserve(): don't reserve through
 memblock after mm_init()

On Mon, 09 Jan, at 01:31:52PM, Mel Gorman wrote:
> Well, you could put in a __init global variable about availability into
> mm/memblock.c and then check it in memblock APIs like memblock_reserve()
> to BUG_ON? I know BUG_ON is frowned upon but this is not likely to be a
> situation that can be sensibly recovered.

What about something like this?

BUG_ON() shouldn't actually be necessary because I couldn't think of a
situation where A) memblock would be unavailable and B) returning an
error would prevent us from making progress.


From 1c1c06664d23c5d256016918c54e01802af4e891 Mon Sep 17 00:00:00 2001
From: Matt Fleming <>
Date: Mon, 27 Feb 2017 21:14:29 +0000
Subject: [PATCH] mm/memblock: Warn if used after slab is up and prevent memory
 corruption

Historically there have been many memory corruption bugs caused by
using the memblock API after its internal data structures have been
destroyed. The latest bug was fixed in commit,

  20b1e22d01a4 ("x86/efi: Don't allocate memmap through memblock after mm_init()")

Instead of modifying the memblock data structures that no longer exist
and silently corrupting memory, WARN and return with an error.

Cc: Nicolai Stange <>
Cc: Dave Young <>
Cc: Ard Biesheuvel <>
Cc: Mel Gorman <>
Cc: Andrew Morton <>
Cc: Vlastimil Babka <>
Cc: Michal Hocko <>
Signed-off-by: Matt Fleming <>
---
 mm/memblock.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/memblock.c b/mm/memblock.c
index 7608bc305936..4dbd86f2fddb 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -530,6 +530,9 @@ int __init_memblock memblock_add_range(struct memblock_type *type,
 	if (!size)
 		return 0;
 
+	if (WARN_ONCE(slab_is_available(), "memblock no longer available\n"))
+		return -EINVAL;
+
 	/* special case for empty array */
 	if (type->regions[0].size == 0) {
 		WARN_ON(type->cnt != 1 || type->total_size);
@@ -648,6 +651,9 @@ static int __init_memblock memblock_isolate_range(struct memblock_type *type,
 	if (!size)
 		return 0;
 
+	if (WARN_ONCE(slab_is_available(), "memblock no longer available\n"))
+		return -EINVAL;
+
 	/* we'll create at most two more regions */
 	while (type->cnt + 2 > type->max)
 		if (memblock_double_array(type, base, size) < 0)
