Date:   Tue, 28 Sep 2021 10:53:05 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Nadav Amit <nadav.amit@...il.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Linux-MM <linux-mm@...ck.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Peter Xu <peterx@...hat.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Minchan Kim <minchan@...nel.org>,
        Colin Cross <ccross@...gle.com>,
        Suren Baghdasaryan <surenb@...gle.com>,
        Mike Rapoport <rppt@...ux.vnet.ibm.com>
Subject: Re: [RFC PATCH 0/8] mm/madvise: support
 process_madvise(MADV_DONTNEED)

>>
>> Again, thanks for the details. I guess this should basically work, although it involves a lot of complexity (read: all flavors of uffd on other processes). And I am not so sure about the performance aspects. "Performance is not as bad as you think" doesn't sound like the words you would want to hear from a car dealer ;) So there has to be another big benefit to doing such user space swapping.
> 
> There is some complexity, indeed. Worse, there are some quirks of UFFD
> that make life hard for no reason, as well as some uffd and io_uring bugs.
> 
> As for my sales pitch - I agree that I am not the best car dealer… :(

:)

> When I say performance is not bad, I mean that the core operations of
> page-fault handling, prefetch and reclaim do not induce high overhead
> *after* the improvements I sent or mentioned.
> 
> The benefit of doing so from userspace is that you have full control
> over the reclaim/prefetch policies, so you may be able to make better
> decisions.
> 
> Some workloads have predictable access patterns (see for instance "MAGE:
> Nearly Zero-Cost Virtual Memory for Secure Computation”, OSDI’21). You may
> be able to handle such access patterns without requiring intrusive changes
> to the workload.

Thanks for the pointer.

And my question would be if something like DAMON would actually be what 
you want.
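
(To make the scheme under discussion more concrete: the idea is that
userspace resolves its own page faults, so the reclaim and prefetch policy
lives outside the kernel. Below is a minimal, illustrative sketch of the
userfaultfd side, not taken from the original mails; the helper name and the
flat "backing" buffer are made up, and setup/error handling are omitted. For
the remote case the uffd would have to be handed over from the target
process.)

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static void handle_missing_faults(void *area, size_t len, char *backing,
				  long psize)
{
	int uffd = syscall(SYS_userfaultfd, O_CLOEXEC);
	struct uffdio_api api = { .api = UFFD_API };
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)area, .len = len },
		.mode = UFFDIO_REGISTER_MODE_MISSING,
	};
	struct uffd_msg msg;

	ioctl(uffd, UFFDIO_API, &api);
	ioctl(uffd, UFFDIO_REGISTER, &reg);

	while (read(uffd, &msg, sizeof(msg)) == sizeof(msg)) {
		unsigned long addr;
		struct uffdio_copy copy;

		if (msg.event != UFFD_EVENT_PAGEFAULT)
			continue;
		addr = msg.arg.pagefault.address & ~(psize - 1);
		/* The custom prefetch policy would live here: decide what
		 * (and how much) to copy in around the faulting address.
		 */
		copy = (struct uffdio_copy) {
			.dst = addr,
			.src = (unsigned long)(backing + (addr - (unsigned long)area)),
			.len = psize,
		};
		ioctl(uffd, UFFDIO_COPY, &copy);
	}
}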

> 
> 
>>
>>> I am aware that there are some caveats, as zapping the memory does not
>>> guarantee that the memory would be freed since it might be pinned for a
>>> variety of reasons. That's the reason I mentioned the processes have "some
>>> level of cooperation" with the manager. It is not intended to deal with
>>> adversaries or uncommon corner cases (e.g., processes that use UFFD for
>>> their own reasons).
>>
>> It's not only long-term pinnings. Pages could have been de-duplicated (COW after fork, KSM, shared zeropage). Further, you'll most probably lose any kind of "aging" ("accessed") information on pages, or how would you track that?
> 
> I know it’s not just long-term pinnings. That’s what “variety of reasons”
> stood for. ;-)
> 
> Aging is a tool for certain types of reclamation policies. Some do not
> require it (e.g., random). You can also have compiler/application-guided
> reclamation policies. If you are really into “aging”, you may be able
> to use PEBS or other CPU facilities to track it.
> 
> Anyhow, the access bit by itself is not such a great solution for tracking
> aging. Setting it can induce overheads of >500 cycles in my (and
> others') experience.

Well, I'm certainly no expert on that; I would assume it's relevant in 
corner cases only: if your application accesses all of its memory 
permanently, a swap setup is already "broken". If you have plenty of old 
memory (VMs, databases, ...) it should work reasonably well. But yeah, 
detecting the working set size is a hard problem, and "accessed" 
bits can be sub-optimal.

After all, that's what the Linux kernel has been relying on for a long 
time ... and IIRC it might be extended by multiple "aging" queues soon.
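
(For reference, the accessed-bit sampling that is already exposed to
userspace today can be driven roughly as sketched below; this is an
illustration only, not from the original mails, with made-up helper names
and no error handling. Writing "1" to /proc/<pid>/clear_refs clears the
referenced/accessed bits, and the "Referenced:" line in
/proc/<pid>/smaps_rollup later shows how much was touched since.)

#include <stdio.h>
#include <unistd.h>

/* Clear the accessed/referenced bits of all pages mapped by @pid. */
static int clear_refs(pid_t pid)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/clear_refs", pid);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fputs("1\n", f);
	fclose(f);
	return 0;
}

/* How many kB were accessed since the last clear_refs(). */
static long referenced_kb(pid_t pid)
{
	char path[64], line[256];
	long kb = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/smaps_rollup", pid);
	f = fopen(path, "r");
	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "Referenced: %ld kB", &kb) == 1)
			break;
	fclose(f);
	return kb;
}

/* e.g.: clear_refs(pid); sleep(interval); working_set = referenced_kb(pid); */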

> 
>>
>> Although I can see that this might work, I do wonder if it's a use case worth supporting. As Michal correctly raised, we already have other infrastructure in place to trigger swapin/swapout. I recall that DAMON also wants to let you write advanced policies for that by monitoring actual access characteristics.
> 
> Hints, such as those Michal mentioned, prevent the efficient use of
> userfaultfd. Using MADV_PAGEOUT will not trigger another uffd event
> when the page is brought back from swap. So using
> MADV_PAGEOUT/MADV_WILLNEED does not allow you to have a custom
> prefetch policy, for instance. It would also require you to live
> with the kernel reclamation/IO stack, for better or worse.

Would more uffd (or similar) events help?
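
(The hint-style interface Nadav contrasts above looks roughly like this; an
illustrative sketch, not from the original mails. It assumes headers that
define MADV_PAGEOUT and the SYS_pidfd_open/SYS_process_madvise numbers
(Linux 5.10+), plus CAP_SYS_NICE over the target; the helper name is made
up.)

#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>

/* Ask the kernel to reclaim a range of another process's memory. */
static long pageout_remote(pid_t pid, void *addr, size_t len)
{
	struct iovec iov = { .iov_base = addr, .iov_len = len };
	int pidfd = syscall(SYS_pidfd_open, pid, 0);
	long ret;

	if (pidfd < 0)
		return -1;
	/* The page goes out, but when the target touches it again the
	 * fault is resolved entirely by the kernel's swap-in path: the
	 * manager gets no uffd event, so it cannot apply its own
	 * prefetch policy.
	 */
	ret = syscall(SYS_process_madvise, pidfd, &iov, 1, MADV_PAGEOUT, 0);
	close(pidfd);
	return ret;
}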

> 
> As for DAMON, I am not very familiar with it, but from what I remember
> it seemed to look in a similar direction. IMHO it is more intrusive
> and less configurable (although it can have the advantage of better
> integration with various kernel mechanisms). I was wondering for a
> second why you give me such a hard time for a pretty straightforward
> extension of process_madvise(), but then I remembered that DAMON got
> into the kernel after >30 versions, so I’ll shut up about that. ;-)

It took ... quite a long time, indeed :)

> 
>>
>>> Putting aside my use-case (which I am sure people would be glad to criticize),
>>> I can imagine debuggers or emulators may also find use for similar schemes
>>> (although I do not have concrete use-cases for them).
>>
>> I'd be curious about use cases for debuggers/emulators. Especially for emulators, I'd guess it makes more sense to just do it within the process. And for debuggers, I'm having a hard time seeing why it would make sense to throw away a page instead of just overwriting it with $PATTERN (e.g., 0). But I'm sure people can be creative :)
> 
> I have some more vague ideas, but I am afraid that you will keep
> saying that it makes more sense to handle such events from within
> a process. I am not sure that this is true. Even for the emulators
> that we discuss, the emulated program might run in a different
> address space (for sandboxing). You may be able to avoid the need
> for remote-UFFD and get away with the current non-cooperative
> UFFD, but zapping the memory (for atomic updates) would still
> require process_madvise(MADV_DONTNEED) [putting aside various
> ptrace solutions].
> 
> Anyhow, David, I really appreciate your feedback. And you make
> strong points about issues I encounter. Yet, eventually, I think
> that the main question in this discussion is whether enabling
> process_madvise(MADV_DONTNEED) is any different - from a security
> point of view - than process_vm_writev(), not to mention ptrace.
> If not, then the same security guards should suffice, I would
> argue.
> 

You raise an excellent point (and it should have been part of your 
initial sales pitch): how does it differ from process_vm_writev()?

I can say that it differs in that you can break applications in 
more extreme ways. Let me give you two examples:

1. long-term pinnings: you raised this yourself; this can break an 
application silently, and there is barely any safe way your tooling could 
handle it.

2. pagemap: applications can depend on the populated (present | swap) 
information in the pagemap for correctness. For example, there was 
recently a discussion to use pagemap information to speed up live 
migration of VMs, by skipping migration of !populated pages. There is 
currently no way your tooling can fake that. In comparison, ordinary 
swapping in the kernel can handle it.
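
(To illustrate the pagemap point: each 64-bit entry in /proc/<pid>/pagemap
has bit 63 set if the page is present and bit 62 if it is in swap, per
Documentation/admin-guide/mm/pagemap.rst. A tool that skips !populated pages
would do something like the sketch below; an illustration only, not from the
original mail. MADV_DONTNEED from the outside clears both bits behind the
application's back, while ordinary kernel swapping keeps the swap bit set.)

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define PM_PRESENT	(1ULL << 63)
#define PM_SWAP		(1ULL << 62)

/* Is the page backing @vaddr in @pid populated (present or in swap)? */
static int is_populated(pid_t pid, unsigned long vaddr)
{
	long psize = sysconf(_SC_PAGESIZE);
	char path[64];
	uint64_t entry;
	int fd, ret;

	snprintf(path, sizeof(path), "/proc/%d/pagemap", pid);
	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;
	ret = pread(fd, &entry, sizeof(entry),
		    (vaddr / psize) * sizeof(entry));
	close(fd);
	if (ret != sizeof(entry))
		return -1;
	return !!(entry & (PM_PRESENT | PM_SWAP));
}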

Is it easy to break an application with process_vm_writev()? Yes. When 
talking about dynamic debugging, it's expected that you break the target 
already -- or the target is already broken. Is it easier to break an 
application with process_madvise(MADV_DONTNEED)? I'd say yes, especially 
when implementing something way beyond debugging as you describe.
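
(Side by side, for readers of the archive; an illustrative sketch, not from
the patch set. process_vm_writev() is a long-standing glibc call, while
MADV_DONTNEED advice for process_madvise() only exists with this RFC
applied, so the second call is hypothetical on mainline kernels. Both assume
the caller already holds a pidfd and the usual ptrace-level permissions.)

#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>

static void overwrite_vs_zap(pid_t pid, int pidfd, void *addr, size_t len,
			     void *pattern)
{
	struct iovec local = { .iov_base = pattern, .iov_len = len };
	struct iovec remote = { .iov_base = addr, .iov_len = len };

	/* Scribble over the target's memory: contents change, but the
	 * pages stay populated, so pagemap-based assumptions still hold.
	 */
	process_vm_writev(pid, &local, 1, &remote, 1, 0);

	/* Zap the range instead: the pages are freed and later read back
	 * as zero / !present / !swap, which is the extra breakage
	 * discussed above. (Hypothetical: requires the RFC's
	 * MADV_DONTNEED support in process_madvise().)
	 */
	syscall(SYS_process_madvise, pidfd, &remote, 1, MADV_DONTNEED, 0);
}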


I'm giving you "a hard time" for the reason Michal raised: we have 
discussed this at least twice in the past IIRC, and "it is a free 
ticket to all sorts of hard to debug problems" in our opinion; especially 
when we mess around in other process address spaces for purposes other 
than debugging.

I'm not the person to ack/nack this, I'm just asking the questions :)

-- 
Thanks,

David / dhildenb
