Message-ID: <87imakt6zt.fsf@mpe.ellerman.id.au>
Date:   Thu, 05 Nov 2020 21:47:50 +1100
From:   Michael Ellerman <mpe@...erman.id.au>
To:     David Hildenbrand <david@...hat.com>
Cc:     David Hildenbrand <david@...hat.com>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, linuxppc-dev@...ts.ozlabs.org,
        Michal Hocko <mhocko@...nel.org>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Paul Mackerras <paulus@...ba.org>,
        Rashmica Gupta <rashmica.g@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Mike Rapoport <rppt@...nel.org>,
        Michal Hocko <mhocko@...e.com>,
        Oscar Salvador <osalvador@...e.de>,
        Wei Yang <richard.weiyang@...ux.alibaba.com>
Subject: Re: [PATCH v1 4/4] powernv/memtrace: don't abuse memory hot(un)plug infrastructure for memory allocations

David Hildenbrand <david@...hat.com> writes:
>> On 05.11.2020 at 03:53, Michael Ellerman <mpe@...erman.id.au> wrote:
>> 
>> David Hildenbrand <david@...hat.com> writes:
>>> Let's use alloc_contig_pages() for allocating memory and remove the
>>> linear mapping manually via arch_remove_linear_mapping(). Mark all pages
>>> PG_offline, such that they will definitely not get touched - e.g.,
>>> when hibernating. When freeing memory, try to revert what we did.
>>> The original idea was discussed in:
>>> https://lkml.kernel.org/r/48340e96-7e6b-736f-9e23-d3111b915b6e@redhat.com
>>> This is similar to CONFIG_DEBUG_PAGEALLOC handling on other
>>> architectures, whereby only single pages are unmapped from the linear
>>> mapping. Let's mimic what memory hot(un)plug would do with the linear
>>> mapping.
>>> We now need MEMORY_HOTPLUG and CONTIG_ALLOC as dependencies.
>>> Simple test under QEMU TCG (10GB RAM, single NUMA node):
>>> sh-5.0# mount -t debugfs none /sys/kernel/debug/
>>> sh-5.0# cat /sys/devices/system/memory/block_size_bytes
>>> 40000000
>>> sh-5.0# echo 0x40000000 > /sys/kernel/debug/powerpc/memtrace/enable
>>> [   71.052836][  T356] memtrace: Allocated trace memory on node 0 at 0x0000000080000000
>>> sh-5.0# echo 0x80000000 > /sys/kernel/debug/powerpc/memtrace/enable
>>> [   75.424302][  T356] radix-mmu: Mapped 0x0000000080000000-0x00000000c0000000 with 64.0 KiB pages
>>> [   75.430549][  T356] memtrace: Freed trace memory back on node 0
>>> [   75.604520][  T356] memtrace: Allocated trace memory on node 0 at 0x0000000080000000
>>> sh-5.0# echo 0x100000000 > /sys/kernel/debug/powerpc/memtrace/enable
>>> [   80.418835][  T356] radix-mmu: Mapped 0x0000000080000000-0x0000000100000000 with 64.0 KiB pages
>>> [   80.430493][  T356] memtrace: Freed trace memory back on node 0
>>> [   80.433882][  T356] memtrace: Failed to allocate trace memory on node 0
>>> sh-5.0# echo 0x40000000 > /sys/kernel/debug/powerpc/memtrace/enable
>>> [   91.920158][  T356] memtrace: Allocated trace memory on node 0 at 0x0000000080000000
>> 
>> I gave this a quick spin on a real machine, seems to work OK.
>> 
>> I don't have the actual memtrace tools setup to do an actual trace, will
>> try and get someone to test that also.
>> 
>> One observation is that previously the memory was zeroed when enabling
>> the memtrace, whereas now it's not.
>> 
>> eg, before:
>> 
>> # hexdump -C /sys/kernel/debug/powerpc/memtrace/00000000/trace 
>> 00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>> *
>> 10000000
>> 
>> whereas after:
>> 
>> # hexdump -C /sys/kernel/debug/powerpc/memtrace/00000000/trace
>> 00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>> *
>> 00000080  e0 fd 43 00 00 00 00 00  e0 fd 43 00 00 00 00 00  |..C.......C.....|
>> 00000090  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>> *
>> 00000830  98 bf 39 00 00 00 00 00  98 bf 39 00 00 00 00 00  |..9.......9.....|
>> 00000840  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>> *
>> 000008a0  b0 c8 47 00 00 00 00 00  b0 c8 47 00 00 00 00 00  |..G.......G.....|
>> 000008b0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>> ...
>> 0fffff70  78 53 49 7d 00 00 29 2e  88 00 92 41 01 00 49 39  |xSI}..)....A..I9|
>> 0fffff80  b4 07 4a 7d 28 f8 00 7d  00 48 08 7c 0c 00 c2 40  |..J}(..}.H.|...@|
>> 0fffff90  2d f9 40 7d f0 ff c2 40  b4 07 0a 7d 00 48 8a 7f  |-.@....@...}.H..|
>> 0fffffa0  70 fe 9e 41 cc ff ff 4b  00 00 00 60 00 00 00 60  |p..A...K...`...`|
>> 0fffffb0  01 00 00 48 00 00 00 60  00 00 a3 2f 0c fd 9e 40  |...H...`.../...@|
>> 0fffffc0  00 00 a2 3c 00 00 a5 e8  00 00 62 3c 00 00 63 e8  |...<......b<..c.|
>> 0fffffd0  01 00 20 39 83 02 80 38  00 00 3c 99 01 00 00 48  |.. 9...8..<....H|
>> 0fffffe0  00 00 00 60 e4 fc ff 4b  00 00 80 38 78 fb e3 7f  |...`...K...8x...|
>> 0ffffff0  01 00 00 48 00 00 00 60  2c fe ff 4b 00 00 00 60  |...H...`,..K...`|
>> 10000000
>> 
>> 
>> That's a nice way for root to read kernel memory, so we should probably
>> add a __GFP_ZERO or memset in there somewhere.
>
> Thanks for catching that! Will have a look on Monday to see whether
> alloc_contig_pages() already properly handles __GFP_ZERO so we can use
> it; otherwise I'll fix that.

I had a quick look and didn't see it, but maybe it is in there somewhere.

> I don't recall that memory hotunplug does any zeroing - that's why I
> didn't add any explicit zeroing. Could be you were just lucky in your
> experiment - I assume we already leak kernel memory.

Hmm yeah good point. I did try it multiple times, and I never got
anything non-zero with the existing code.

I guess it's just that the new method is more likely to give us memory
that's already been used for something.

So I guess that's not actually a problem with your patch, it's just
exposing an existing issue.

cheers
