Message-Id: <20190520115252.935249966@linuxfoundation.org>
Date:   Mon, 20 May 2019 14:14:57 +0200
From:   Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:     linux-kernel@...r.kernel.org
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        stable@...r.kernel.org, Eric Dumazet <edumazet@...gle.com>,
        Al Viro <viro@...iv.linux.org.uk>,
        Matthew Wilcox <willy@...radead.org>
Subject: [PATCH 5.0 117/123] iov_iter: optimize page_copy_sane()

From: Eric Dumazet <edumazet@...gle.com>

commit 6daef95b8c914866a46247232a048447fff97279 upstream.

Avoid a cache line miss when dereferencing struct page if we can.

page_copy_sane() mostly deals with order-0 pages.

The extra cache line miss is visible on TCP recvmsg() calls dealing
with GRO packets (typically 45 page frags are attached to one skb).

Bringing the 45 struct pages into the CPU cache while copying the data
is not free, since the freeing of the skb (and the put_page() on its
associated page frags) can happen after those cache lines have been evicted.

Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Cc: Al Viro <viro@...iv.linux.org.uk>
Signed-off-by: Al Viro <viro@...iv.linux.org.uk>
Cc: Matthew Wilcox <willy@...radead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>

---
 lib/iov_iter.c |   17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -861,8 +861,21 @@ EXPORT_SYMBOL(_copy_from_iter_full_nocac
 
 static inline bool page_copy_sane(struct page *page, size_t offset, size_t n)
 {
-	struct page *head = compound_head(page);
-	size_t v = n + offset + page_address(page) - page_address(head);
+	struct page *head;
+	size_t v = n + offset;
+
+	/*
+	 * The general case needs to access the page order in order
+	 * to compute the page size.
+	 * However, we mostly deal with order-0 pages and thus can
+	 * avoid a possible cache line miss for requests that fit all
+	 * page orders.
+	 */
+	if (n <= v && v <= PAGE_SIZE)
+		return true;
+
+	head = compound_head(page);
+	v += (page - head) << PAGE_SHIFT;
 
 	if (likely(n <= v && v <= (PAGE_SIZE << compound_order(head))))
 		return true;
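
For readers who want to trace the arithmetic outside kernel context,
here is a minimal userspace sketch of the check above. It is an
illustration only, not the kernel function: the PAGE_SHIFT/PAGE_SIZE
values are assumed, and the page_index parameter stands in for the
kernel's (page - head) pointer difference.

/*
 * Userspace sketch of page_copy_sane(), for illustration only.
 * PAGE_SHIFT/PAGE_SIZE and the page_index parameter (standing in
 * for the kernel's "page - head") are assumptions of this demo.
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  ((size_t)1 << PAGE_SHIFT)

/*
 * page_index: offset of this page within its compound page;
 * order: compound page order (0 for a normal page).
 */
static bool page_copy_sane_sketch(size_t page_index, unsigned int order,
				  size_t offset, size_t n)
{
	size_t v = n + offset;

	/*
	 * Fast path: "n <= v" rejects n + offset overflow, and
	 * "v <= PAGE_SIZE" means the copy fits inside this single
	 * page whatever its compound order is, so the order never
	 * has to be read (the cache line miss the patch avoids).
	 */
	if (n <= v && v <= PAGE_SIZE)
		return true;

	/* Slow path: account for this page's position in the compound page. */
	v += page_index << PAGE_SHIFT;
	return n <= v && v <= (PAGE_SIZE << order);
}

int main(void)
{
	/* Order-0 page, copy fits: fast path succeeds. */
	assert(page_copy_sane_sketch(0, 0, 100, 1000));
	/* Overflowing n + offset is caught by "n <= v". */
	assert(!page_copy_sane_sketch(0, 0, 1, (size_t)-1));
	/* Copy spilling into the 2nd page of an order-2 compound page. */
	assert(page_copy_sane_sketch(1, 2, 0, 2 * PAGE_SIZE));
	/* The same copy on an order-0 page is rejected. */
	assert(!page_copy_sane_sketch(0, 0, 0, 2 * PAGE_SIZE));
	return 0;
}

The point of the fast path is that any copy fitting in a single page
can be declared sane without ever reading the compound order, so the
head page's struct page is never touched on the common order-0 case.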

