Message-ID: <CAJHvVcie+N+4j60m_Dxh7QzbZLmsjnq2-04peuqE8VkkMq984A@mail.gmail.com>
Date: Mon, 11 Mar 2024 11:59:59 -0700
From: Axel Rasmussen <axelrasmussen@...gle.com>
To: Peter Xu <peterx@...hat.com>
Cc: David Hildenbrand <david@...hat.com>, Mirsad Todorovac <mirsad.todorovac@....unizg.hr>, linux-mm@...ck.org, 
	Andrew Morton <akpm@...ux-foundation.org>, Shuah Khan <shuah@...nel.org>, 
	linux-kselftest@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [BUG selftests/mm]

On Mon, Mar 11, 2024 at 8:12 AM Peter Xu <peterx@...hat.com> wrote:
>
> On Mon, Mar 11, 2024 at 03:48:14PM +0100, David Hildenbrand wrote:
> > On 11.03.24 15:35, Peter Xu wrote:
> > > On Mon, Mar 11, 2024 at 10:31:41AM +0100, David Hildenbrand wrote:
> > > > On 09.03.24 20:12, Mirsad Todorovac wrote:
> > > > > Hi,
> > > > >
> > > > > A routine run of the tests in net-next also gave this mm unit error.
> > > > >
> > > > > root@...iant:tools/testing/selftests/mm# ./uffd-unit-tests
> > > > > Testing UFFDIO_API (with syscall)... done
> > > > > Testing UFFDIO_API (with /dev/userfaultfd)... done
> > > > > Testing register-ioctls on anon... done
> > > > > Testing register-ioctls on shmem... done
> > > > > Testing register-ioctls on shmem-private... done
> > > > > Testing register-ioctls on hugetlb... skipped [reason: memory allocation failed]
> > > > > Testing register-ioctls on hugetlb-private... skipped [reason: memory allocation failed]
> > > > > Testing zeropage on anon... done
> > > > > Testing zeropage on shmem... done
> > > > > Testing zeropage on shmem-private... done
> > > > > Testing zeropage on hugetlb... skipped [reason: memory allocation failed]
> > > > > Testing zeropage on hugetlb-private... skipped [reason: memory allocation failed]
> > > > > Testing move on anon... done
> > > > > Testing move-pmd on anon... done
> > > > > Testing move-pmd-split on anon... done
> > > > > Testing wp-fork on anon... done
> > > > > Testing wp-fork on shmem... done
> > > > > Testing wp-fork on shmem-private... done
> > > > > Testing wp-fork on hugetlb... skipped [reason: memory allocation failed]
> > > > > Testing wp-fork on hugetlb-private... skipped [reason: memory allocation failed]
> > > > > Testing wp-fork-with-event on anon... done
> > > > > Testing wp-fork-with-event on shmem... done
> > > > > Testing wp-fork-with-event on shmem-private... done
> > > > > Testing wp-fork-with-event on hugetlb... skipped [reason: memory allocation failed]
> > > > > Testing wp-fork-with-event on hugetlb-private... skipped [reason: memory allocation failed]
> > > > > Testing wp-fork-pin on anon... done
> > > > > Testing wp-fork-pin on shmem... done
> > > > > Testing wp-fork-pin on shmem-private... done
> > > > > Testing wp-fork-pin on hugetlb... skipped [reason: memory allocation failed]
> > > > > Testing wp-fork-pin on hugetlb-private... skipped [reason: memory allocation failed]
> > > > > Testing wp-fork-pin-with-event on anon... done
> > > > > Testing wp-fork-pin-with-event on shmem... done
> > > > > Testing wp-fork-pin-with-event on shmem-private... done
> > > > > Testing wp-fork-pin-with-event on hugetlb... skipped [reason: memory allocation failed]
> > > > > Testing wp-fork-pin-with-event on hugetlb-private... skipped [reason: memory allocation failed]
> > > > > Testing wp-unpopulated on anon... done
> > > > > Testing minor on shmem... done
> > > > > Testing minor on hugetlb... skipped [reason: memory allocation failed]
> > > > > Testing minor-wp on shmem... done
> > > > > Testing minor-wp on hugetlb... skipped [reason: memory allocation failed]
> > > > > Testing minor-collapse on shmem... done
> > > > > Testing sigbus on anon... done
> > > > > Testing sigbus on shmem... done
> > > > > Testing sigbus on shmem-private... done
> > > > > Testing sigbus on hugetlb... skipped [reason: memory allocation failed]
> > > > > Testing sigbus on hugetlb-private... skipped [reason: memory allocation failed]
> > > > > Testing sigbus-wp on anon... done
> > > > > Testing sigbus-wp on shmem... done
> > > > > Testing sigbus-wp on shmem-private... done
> > > > > Testing sigbus-wp on hugetlb... skipped [reason: memory allocation failed]
> > > > > Testing sigbus-wp on hugetlb-private... skipped [reason: memory allocation failed]
> > > > > Testing events on anon... done
> > > > > Testing events on shmem... done
> > > > > Testing events on shmem-private... done
> > > > > Testing events on hugetlb... skipped [reason: memory allocation failed]
> > > > > Testing events on hugetlb-private... skipped [reason: memory allocation failed]
> > > > > Testing events-wp on anon... done
> > > > > Testing events-wp on shmem... done
> > > > > Testing events-wp on shmem-private... done
> > > > > Testing events-wp on hugetlb... skipped [reason: memory allocation failed]
> > > > > Testing events-wp on hugetlb-private... skipped [reason: memory allocation failed]
> > > > > Testing poison on anon... done
> > > > > Testing poison on shmem... done
> > > > > Testing poison on shmem-private... done
> > > > > Testing poison on hugetlb... skipped [reason: memory allocation failed]
> > > > > Testing poison on hugetlb-private... skipped [reason: memory allocation failed]
> > > > > Userfaults unit tests: pass=42, skip=24, fail=0 (total=66)
> > > > > root@...iant:tools/testing/selftests/mm# grep -i huge /proc/meminfo
> > > > >
> > > > > It resulted in alarming errors in the syslog:
> > > > >
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055103] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 4631e000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055132] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 46320000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055160] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 46322000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055189] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 46324000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055218] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 46326000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055250] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 46328000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055278] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 4632a000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055307] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 4632c000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055336] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 4632e000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055366] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 46330000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055395] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 46332000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055423] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 46334000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055452] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 46336000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055480] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 46338000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055509] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 4633a000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055538] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 4633c000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055567] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 4633e000
> > > > > Mar  9 19:48:24 defiant kernel: [77187.055597] MCE: Killing uffd-unit-tests:1321817 due to hardware memory corruption fault at 46340000
> > > > >
> > > > > At this point, it could be a problem with my box's memory chips, or something with HUGETLB.
> > > > >
> > > > > However, since the "classic" allocations were successful, the problem might be in huge pages, or,
> > > > > if I understood correctly, in the deliberate poisoning of pages?
> > > > >
> > > >
> > > > Isn't that just the (expected) side effect of the UFFDIO_POISON tests?
> > > >
> > > > IOW, there is no problem here. We are poisoning virtual memory locations
> > > > (not actual memory) and expect a SIGBUS on the next access. While testing
> > > > that, we receive these messages.
> > >
> > > Correct.
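
For context, the operation under test is roughly the following. This is
a minimal, untested sketch (error handling mostly elided), not the
selftest itself; it needs a kernel with UFFD_FEATURE_POISON and,
depending on sysctl configuration, vm.unprivileged_userfaultfd or root:

#include <fcntl.h>
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

static sigjmp_buf env;

static void on_sigbus(int sig) { (void)sig; siglongjmp(env, 1); }

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	char *area = mmap(NULL, page, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	struct uffdio_api api = { .api = UFFD_API,
				  .features = UFFD_FEATURE_POISON };
	ioctl(uffd, UFFDIO_API, &api);

	struct uffdio_register reg = {
		.range = { .start = (unsigned long)area, .len = page },
		.mode = UFFDIO_REGISTER_MODE_MISSING,
	};
	ioctl(uffd, UFFDIO_REGISTER, &reg);

	/* Install a poison marker; no physical page is touched. */
	struct uffdio_poison poison = {
		.range = { .start = (unsigned long)area, .len = page },
	};
	ioctl(uffd, UFFDIO_POISON, &poison);

	/* The next access faults, the kernel sends SIGBUS, and this is
	 * the point where the "MCE: Killing ..." line hits dmesg. */
	signal(SIGBUS, on_sigbus);
	if (sigsetjmp(env, 1) == 0)
		*area = 1;
	else
		printf("got expected SIGBUS\n");
	return 0;
}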
> > >
> > > >
> > > > The "ugly" thing here seems to be that we can trigger repeated pr_err() from
> > > > user space. There is no rate-limiting in place. Maybe UFFDIO_POISON requires
> > > > root permissions so this cannot be exploited by unprivileged user space to
> > > > flood the system log?
> > > >
> > > > CCing Axel
> > >
> > > This is pretty unfortunate.
> > >
> > > I'm not too concerned about flooding whoever kicks off the selftests, but
> > > indeed this seems like something anyone could use to trigger such endless
> > > reports in dmesg.
> >
> > Right.
> >
> > >
> > > The issue with requiring a privilege is that any hypervisor that needs to
> > > use this to emulate memory errors will also require that privilege, which
> > > can be a problem.
> > >
> >
> > Yes, we don't want that.
> >
> > > Logically such "hwpoison errors" are not real, so they don't need to be
> > > reported in dmesg, but right now we're making them look exactly like a
> > > real hw error in order to share the code path, iiuc (e.g. with MCE injections).
> > >
> > > One option is to use a different marker reflecting that such a hwpoison
> > > error is internal, so we don't need to report it in dmesg. That would also
> > > require (besides another bit in pte markers) one extra VM_FAULT_* flag just
> > > for such reports.  Might be slight overkill, but I don't see a better way;
> > > not reporting HWPOISON would complicate at least the kvm use case even more.
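
For the record, the rough shape of that option might be something like
the below; every name here is hypothetical, nothing like it exists in
any tree yet:

/* A pte marker variant for poison injected via UFFDIO_POISON, as
 * opposed to a real hardware error (bit value is illustrative): */
#define PTE_MARKER_POISONED_SILENT	(1UL << 2)

/* A fault result that still raises SIGBUS with BUS_MCEERR_AR but
 * tells the arch fault path to skip the dmesg report: */
#define VM_FAULT_HWPOISON_SILENT	((__force vm_fault_t)0x800000)

/* and in the arch fault path, something like: */
	if (fault & (VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_SILENT)) {
		if (!(fault & VM_FAULT_HWPOISON_SILENT))
			pr_err("MCE: Killing %s:%d due to hardware memory corruption fault at %lx\n",
			       tsk->comm, tsk->pid, address);
		force_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, lsb);
	}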
> > >
> > > Or.. does syslog have its own general protection against such printk floods?
> > > It would be easier if flooding weren't a concern, but I'm not sure
> > > about that.
> >
> > From what I know, flooding is considered problematic and we fix it up using
> > "Fixes:" commits. See 1b0a151c10a6d823f033023b9fdd9af72a89591b as one
> > "recent" example.
> >
> >
> > Usually we switch to the _ratelimited() functions; maybe
> > pr_warn_ratelimited() is good enough? But we'd lose some details during a
> > "real" MCE storm.
>
> Yeah, I didn't consider that previously, because I thought leaking MCE
> addresses might be a problem.
>
> But thinking about it again, it'd be great if pr_err_ratelimited() works
> here (I think we'd still want to report them as "err", not "warning",
> btw).
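
Assuming the message in question is the VM_FAULT_HWPOISON pr_err() in
the x86 fault path (arch/x86/mm/fault.c; other architectures have
their own copies), a minimal version of that change would look
something like this (a sketch, not a tested patch):

-		pr_err(
-	"MCE: Killing %s:%d due to hardware memory corruption fault at %lx\n",
-			tsk->comm, tsk->pid, address);
+		pr_err_ratelimited(
+	"MCE: Killing %s:%d due to hardware memory corruption fault at %lx\n",
+			tsk->comm, tsk->pid, address);

With the default ratelimit parameters (a burst of 10 messages per 5
seconds), the log at the top of this thread would have been cut off
after the first 10 lines.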
>
> I don't worry too much about an MCE storm, since in that case explicit
> addresses may not be necessary if the whole system is at risk.  What I don't
> know, however, is whether the addresses still matter if e.g. two consecutive
> MCEs are reported in a small time window, and whether losing some of those
> addresses would be a concern in that case.
>
> My MCE experience is pretty limited, so I don't have an answer to that.
> Maybe it can be verified by proposing such a patch and seeing whether
> there are any objections to making it rate limited.  I'll leave it to
> Axel to decide how to move forward.

I'd prefer not to require root or CAP_SYS_ADMIN or similar for
UFFDIO_POISON, because those control access to lots more things
besides, which we don't necessarily want the process using UFFD to be
able to do. :/

Ratelimiting seems fairly reasonable to me. I do see the concern about
dropping some addresses, though. Perhaps we can mitigate that concern
by defining our own ratelimit interval/burst configuration, along the
lines of the sketch below? Another idea would be to only ratelimit it
if !CONFIG_DEBUG_VM or similar. Not sure if that's considered valid or
not. :)
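
Something like the below, using the generic helpers from
<linux/ratelimit.h>. The interval/burst numbers and the helper name
are placeholders for discussion, not a concrete proposal:

#include <linux/printk.h>
#include <linux/ratelimit.h>
#include <linux/sched.h>

/*
 * Allow a large burst so that a genuine MCE storm still logs every
 * address, while clamping sustained abuse from a UFFDIO_POISON loop.
 */
static DEFINE_RATELIMIT_STATE(hwpoison_kill_rs, 30 * HZ, 200);

static void report_hwpoison_kill(struct task_struct *tsk,
				 unsigned long address)
{
	if (__ratelimit(&hwpoison_kill_rs))
		pr_err("MCE: Killing %s:%d due to hardware memory corruption fault at %lx\n",
		       tsk->comm, tsk->pid, address);
}

The !CONFIG_DEBUG_VM variant would then just be an #ifdef around the
__ratelimit() check.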

>
> --
> Peter Xu
>
