Date: Thu, 2 Nov 2017 12:10:39 -0400
From: Pavel Tatashin <pasha.tatashin@...cle.com>
To: Michal Hocko <mhocko@...nel.org>
Cc: steven.sistare@...cle.com, daniel.m.jordan@...cle.com,
	akpm@...ux-foundation.org, mgorman@...hsingularity.net,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 1/1] mm: buddy page accessed before initialized

>>
>> Yes, but as I said, unfortunately memset(1) with CONFIG_VM_DEBUG does not
>> catch this case. So, when CONFIG_VM_DEBUG is enabled, kexec reboots without
>> issues.
>
> Can we make the init pattern catch this?

Unfortunately, that is not easy: memset() gives us only one byte to play
with, and using anything more elaborate would make CONFIG_VM_DEBUG
unacceptably slow. One byte is not enough to produce a pattern that
satisfies the page_is_buddy() logic; I have tried it.

With kexec, however, it is more predictable: we use the same memory during
boot to allocate the vmemmap, and therefore the struct pages look more like
"valid" struct pages from the previous boot.

>
>>>>>> This is why we must initialize the computed buddy page beforehand.
>>>>>
>>>>> Ble, this is really ugly. I will think about it more.
>>>>>
>>>>
>>>> Another approach that I considered is to split the loop inside
>>>> deferred_init_range() into two loops: one where we initialize pages by
>>>> calling __init_single_page(), and another where we free them to the buddy
>>>> allocator by calling deferred_free_range().
>>>
>>> Yes, that would make much more sense to me.
>>>
>>
>> Ok, so should I submit a new patch with two loops? The logic within the
>> loops is going to be the same:
>
> Could you post it please?
>
>> if (!pfn_valid_within(pfn)) {
>> } else if (!(pfn & nr_pgmask) && !pfn_valid(pfn)) {
>> } else if (!meminit_pfn_in_nid(pfn, nid, &nid_init_state)) {
>> } else if (page && (pfn & nr_pgmask)) {
>>
>> This fix was already added into mm-tree as
>> mm-deferred_init_memmap-improvements-fix-2.patch
>
> I think Andrew can drop it and replace it with a different patch.
>

The new patch is coming; I will test it on the two machines where I observed
the problem.

Thank you,
Pasha
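
A rough sketch of the two-loop idea discussed in this thread, for context
only: a first pass initializes every valid struct page in the pfn range with
__init_single_page(), and a second pass hands the now-initialized contiguous
chunks to the buddy allocator with deferred_free_range(), so page_is_buddy()
never inspects an uninitialized buddy page. The helper names come from the
quoted snippet; the function name deferred_init_range_sketch(), the exact
helper signatures (notably deferred_free_range()) and the surrounding
bookkeeping are simplified assumptions, not the patch that was actually
posted or merged.

/*
 * Illustrative sketch only -- not the actual patch from this thread.
 * Pass 1 initializes every valid struct page in [start_pfn, end_pfn);
 * pass 2 walks the same range again and frees contiguous initialized
 * chunks to the buddy allocator. Helper signatures are assumed.
 */
static unsigned long __init
deferred_init_range_sketch(int nid, int zid, unsigned long start_pfn,
			   unsigned long end_pfn)
{
	struct mminit_pfnnid_cache nid_init_state = { };
	unsigned long nr_pgmask = pageblock_nr_pages - 1;
	unsigned long nr_pages = 0;
	unsigned long chunk_start = start_pfn;
	unsigned long chunk_len = 0;
	struct page *page = NULL;
	unsigned long pfn;

	/* Pass 1: initialize struct pages, do not free anything yet. */
	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		if (!pfn_valid_within(pfn)) {
			page = NULL;
		} else if (!(pfn & nr_pgmask) && !pfn_valid(pfn)) {
			page = NULL;
		} else if (!meminit_pfn_in_nid(pfn, nid, &nid_init_state)) {
			page = NULL;
		} else if (page && (pfn & nr_pgmask)) {
			page++;		/* same pageblock, just advance */
		} else {
			page = pfn_to_page(pfn);
		}
		if (page) {
			__init_single_page(page, pfn, zid, nid);
			nr_pages++;
		}
	}

	/*
	 * Pass 2: every buddy candidate is initialized now, so freeing
	 * contiguous chunks to the buddy allocator is safe.
	 */
	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		bool valid = pfn_valid_within(pfn) && pfn_valid(pfn) &&
			     meminit_pfn_in_nid(pfn, nid, &nid_init_state);

		if (valid) {
			if (!chunk_len)
				chunk_start = pfn;
			chunk_len++;
		} else if (chunk_len) {
			/* assumed (pfn, nr_pages) form of the helper */
			deferred_free_range(chunk_start, chunk_len);
			chunk_len = 0;
		}
	}
	if (chunk_len)
		deferred_free_range(chunk_start, chunk_len);

	return nr_pages;
}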