Message-Id: <20210326112650.307890-1-slyfox@gentoo.org>
Date:   Fri, 26 Mar 2021 11:26:50 +0000
From:   Sergei Trofimovich <slyfox@gentoo.org>
To:     Andrew Morton <akpm@linux-foundation.org>, linux-mm@kvack.org
Cc:     linux-kernel@vger.kernel.org,
        Sergei Trofimovich <slyfox@gentoo.org>
Subject: [PATCH] mm: page_alloc: ignore init_on_free=1 for page alloc

init_on_free=1 does not guarantee that free pages contain only zero bytes.

Some examples:
1. page_poison=on takes precedence over init_on_alloc=1 / init_on_free=1
2. free_pages_prepare() always poisons pages:

       if (want_init_on_free())
           kernel_init_free_pages(page, 1 << order);
       kernel_poison_pages(page, 1 << order);
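
   To make the ordering in example 2 explicit (the same two calls as
   quoted above with comments added, not the full function): with page
   poisoning enabled, the second call overwrites the zeros written by
   the first:

       if (want_init_on_free())
           kernel_init_free_pages(page, 1 << order); /* page is now zeroed   */
       kernel_poison_pages(page, 1 << order);        /* ...and filled with
                                                        poison bytes again   */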

I observed the resulting use of poisoned pages as a crash on ia64 booted
with init_on_free=1 init_on_alloc=1 (and CONFIG_PAGE_POISONING=y): there
a pmd page still contained 0xaaaaaaaa poison values, which led to an
early crash.

This change drops the assumption that init_on_free=1 guarantees that
free pages contain zeros.
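
Concretely, the allocation-side check becomes (a condensed, commented
view of the one-line hunk below):

       /* before: zeroing on alloc was skipped when init_on_free=1,
        * trusting the free path to have left zeros behind */
       if (!want_init_on_free() && want_init_on_alloc(gfp_flags))
           kernel_init_free_pages(page, 1 << order);

       /* after: zero on alloc whenever init_on_alloc asks for it,
        * regardless of init_on_free */
       if (want_init_on_alloc(gfp_flags))
           kernel_init_free_pages(page, 1 << order);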

An alternative would be to make the interaction between the runtime
poisoning and sanitizing options and build-time debug flags like
CONFIG_PAGE_POISONING more coherent. I took the simpler path.

Tested the fix on rx3600.

CC: Andrew Morton <akpm@linux-foundation.org>
CC: linux-mm@kvack.org
Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org>
---
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cfc72873961d..d57d9b4f7089 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2301,7 +2301,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	kernel_unpoison_pages(page, 1 << order);
 	set_page_owner(page, order, gfp_flags);
 
-	if (!want_init_on_free() && want_init_on_alloc(gfp_flags))
+	if (want_init_on_alloc(gfp_flags))
 		kernel_init_free_pages(page, 1 << order);
 }
 
-- 
2.31.0
