Date:   Wed, 24 Jun 2020 20:14:17 +0100
From:   Chris Wilson <chris@...is-wilson.co.uk>
To:     linux-mm@...ck.org
Cc:     linux-kernel@...r.kernel.org, intel-gfx@...ts.freedesktop.org,
        Chris Wilson <chris@...is-wilson.co.uk>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jan Kara <jack@...e.cz>,
        Jérôme Glisse <jglisse@...hat.com>,
        John Hubbard <jhubbard@...dia.com>,
        Claudio Imbrenda <imbrenda@...ux.ibm.com>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        Jason Gunthorpe <jgg@...pe.ca>
Subject: [PATCH] mm: Skip opportunistic reclaim for dma pinned pages

A general rule of thumb is that shrinkers should be fast and effective.
They are called from direct reclaim at the most inconvenient of times when
the caller is waiting for a page. If we attempt to reclaim a page being
pinned for active dma [pin_user_pages()], we will incur far greater
latency than for a normal anonymous page mapped multiple times. Worse, the
page may be in use indefinitely by the HW and unable to be reclaimed
in a timely manner.
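
For reference, the pin lifecycle at issue looks roughly like the sketch
below; the device, buffer and NR_PAGES are hypothetical placeholders and
not taken from any real driver:

	struct page *pages[NR_PAGES];
	long pinned;

	/* Take a dma pin; pin_user_pages() biases each page's refcount
	 * so that page_maybe_dma_pinned() can later detect the pin.
	 */
	pinned = pin_user_pages(user_addr, NR_PAGES,
				FOLL_WRITE | FOLL_LONGTERM,
				pages, NULL);
	if (pinned <= 0)
		return pinned ? pinned : -EFAULT;

	/* The HW may now access the pages for an unbounded time; until
	 * the unpin below, reclaim cannot take them back without first
	 * stopping the device.
	 */
	submit_dma(dev, pages, pinned); /* hypothetical driver call */

	/* The pages become ordinary reclaim candidates again. */
	unpin_user_pages(pages, pinned);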

A side effect of the LRU shrinker not being dma-aware is that we will
often attempt to perform direct reclaim on the persistent group of dma
pages while continuing to use the dma HW (an issue, as the HW may already
be actively waiting for the next user request), and may even attempt to
reclaim a partially allocated dma object in order to satisfy pinning
the next user page for that same object.

The expectation is that such pages are made available for reclaim at
the end of the dma operation [unpin_user_pages()], and that truly
longterm pins are proactively recovered via device-specific shrinkers
[i.e. stop the HW, allow the pages to be returned to the system, and
then compete again for the memory], as sketched below.
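
A device-specific shrinker of that kind might be wired up roughly as
follows; this is a hedged sketch against the current shrinker API, and
dev_nr_reclaimable()/dev_stop_hw_and_unpin() are hypothetical helpers:

static unsigned long dev_shrink_count(struct shrinker *shrink,
				      struct shrink_control *sc)
{
	/* Report how many pages we could hand back if asked. */
	return dev_nr_reclaimable(); /* hypothetical */
}

static unsigned long dev_shrink_scan(struct shrinker *shrink,
				     struct shrink_control *sc)
{
	/* Stop the HW, unpin_user_pages(), report the number freed. */
	return dev_stop_hw_and_unpin(sc->nr_to_scan); /* hypothetical */
}

static struct shrinker dev_shrinker = {
	.count_objects = dev_shrink_count,
	.scan_objects = dev_shrink_scan,
	.seeks = DEFAULT_SEEKS,
};

/* ... at device init ... */
register_shrinker(&dev_shrinker);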

Signed-off-by: Chris Wilson <chris@...is-wilson.co.uk>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Jan Kara <jack@...e.cz>
Cc: Jérôme Glisse <jglisse@...hat.com>
Cc: John Hubbard <jhubbard@...dia.com>
Cc: Claudio Imbrenda <imbrenda@...ux.ibm.com>
Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Cc: Jason Gunthorpe <jgg@...pe.ca>
---
This seems perhaps a little devious and overzealous. Is there a more
appropriate TTU flag? Would there be a way to limit its effect to, say,
FOLL_LONGTERM pins? Doing the migration first would seem sensible if
we disable opportunistic migration for the duration of the pin.
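
For context on why the check is only a "maybe": page_maybe_dma_pinned()
approximates pinning from the biased refcount, roughly as below
(paraphrasing and simplifying include/linux/mm.h; the real helper also
special-cases THPs with an exact pincount):

	/* pin_user_pages() adds GUP_PIN_COUNTING_BIAS (1024) to the
	 * refcount, so any page with >= 1024 references -- dma pinned
	 * or merely very widely referenced -- reads as "maybe pinned"
	 * and will now be skipped by try_to_unmap_one().
	 */
	return page_ref_count(page) >= GUP_PIN_COUNTING_BIAS;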
---
 mm/rmap.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/rmap.c b/mm/rmap.c
index 5fe2dedce1fc..374c6e65551b 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1393,6 +1393,22 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	    is_zone_device_page(page) && !is_device_private_page(page))
 		return true;
 
+	/*
+	 * Try and fail early to revoke a costly DMA pinned page.
+	 *
+	 * Reclaiming an active DMA page requires stopping the hardware
+	 * and flushing access. [Hardware that does support pagefaulting,
+	 * and so can quickly revoke DMA pages at any time, does not need
+	 * to pin the DMA page.] At worst, the page may be indefinitely in
+	 * use by the hardware. Even at best it will take far longer to
+	 * revoke the access via the mmu notifier, forcing that latency
+	 * onto our callers rather than the consumer of the HW. As we are
+	 * called during opportunistic direct reclaim, declare the
+	 * opportunity cost too high and ignore the page.
+	 */
+	if (page_maybe_dma_pinned(page))
+		return true;
+
 	if (flags & TTU_SPLIT_HUGE_PMD) {
 		split_huge_pmd_address(vma, address,
 				flags & TTU_SPLIT_FREEZE, page);
-- 
2.20.1
