Date:   Tue, 23 Mar 2021 10:44:21 +0000
From:   Mel Gorman <mgorman@...hsingularity.net>
To:     Jesper Dangaard Brouer <brouer@...hat.com>,
        Chuck Lever <chuck.lever@...cle.com>
Cc:     Vlastimil Babka <vbabka@...e.cz>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Christoph Hellwig <hch@...radead.org>,
        Alexander Duyck <alexander.duyck@...il.com>,
        Matthew Wilcox <willy@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Linux-Net <netdev@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>,
        Linux-NFS <linux-nfs@...r.kernel.org>
Subject: Re: [PATCH 0/3 v5] Introduce a bulk order-0 page allocator

On Mon, Mar 22, 2021 at 09:18:42AM +0000, Mel Gorman wrote:
> This series is based on top of Matthew Wilcox's series "Rationalise
> __alloc_pages wrapper" and does not apply to 5.12-rc2. If you want to
> test and are not using Andrew's tree as a baseline, I suggest using the
> following git tree
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git mm-bulk-rebase-v5r9
> 

Jesper and Chuck, would you mind rebasing on top of the following branch,
please?

git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git mm-bulk-rebase-v6r2

The interface is the same, so the rebase should be trivial.
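
For reference, the two allocator entry points involved (before and
after the sunrpc conversion below) look roughly like this; the
prototypes are reconstructed from the call sites in this mail, not
copied from the branch:

	unsigned long alloc_pages_bulk(gfp_t gfp, unsigned long nr_pages,
				       struct list_head *page_list);
	unsigned long alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages,
					     struct page **page_array);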

Jesper, I'm hoping you see no difference in performance, but it's best
to check.

For Chuck, this version will check for holes and scan the remainder of
the array to see whether nr_pages are allocated before returning. If the
holes in the array are always at the start (which they should be for
sunrpc), then it should still be a single IRQ disable/enable; each
contiguous hole in the array disables/enables IRQs once. I prototyped
NFS array support and it had a 100% success rate with no sleeps while
running dbench over the network with no memory pressure, but that is
only a basic test on a 10G switch.
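
To make that concrete, here is a minimal caller sketch of the retry
semantics; fill_page_array() is a made-up helper and the exact
prototype of alloc_pages_bulk_array() is my reconstruction from the
patch below, not copied from the branch:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/sched.h>

/*
 * Sketch only: alloc_pages_bulk_array() is assumed to fill just the
 * NULL slots in pages[] and return the total number of populated
 * slots, so the caller retries until the array is full.
 */
static void fill_page_array(struct page **pages, unsigned long nr)
{
	unsigned long filled = 0;

	while (filled < nr) {
		filled = alloc_pages_bulk_array(GFP_KERNEL, nr, pages);
		if (filled < nr)
			cond_resched();	/* real callers back off or sleep */
	}
}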

The basic patch I used to convert sunrpc from using lists to an array
for testing is as follows:

diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 922118968986..0ce33c1742d9 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -642,12 +642,9 @@ static void svc_check_conn_limits(struct svc_serv *serv)
 static int svc_alloc_arg(struct svc_rqst *rqstp)
 {
 	struct svc_serv *serv = rqstp->rq_server;
-	unsigned long needed;
 	struct xdr_buf *arg;
-	struct page *page;
-	LIST_HEAD(list);
 	int pages;
-	int i;
+	int i = 0;
 
 	pages = (serv->sv_max_mesg + 2 * PAGE_SIZE) >> PAGE_SHIFT;
 	if (pages > RPCSVC_MAXPAGES) {
@@ -657,29 +654,16 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
 		pages = RPCSVC_MAXPAGES;
 	}
 
-	for (needed = 0, i = 0; i < pages ; i++) {
-		if (!rqstp->rq_pages[i])
-			needed++;
-	}
-	i = 0;
-	while (needed) {
-		needed -= alloc_pages_bulk(GFP_KERNEL, needed, &list);
-		for (; i < pages; i++) {
-			if (rqstp->rq_pages[i])
-				continue;
-			page = list_first_entry_or_null(&list, struct page, lru);
-			if (likely(page)) {
-				list_del(&page->lru);
-				rqstp->rq_pages[i] = page;
-				continue;
-			}
+	while (i < pages) {
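+		/* Fill the NULL slots in rq_pages[]; returns total populated */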
+		i = alloc_pages_bulk_array(GFP_KERNEL, pages, &rqstp->rq_pages[0]);
+		if (i < pages) {
 			set_current_state(TASK_INTERRUPTIBLE);
 			if (signalled() || kthread_should_stop()) {
 				set_current_state(TASK_RUNNING);
 				return -EINTR;
 			}
 			schedule_timeout(msecs_to_jiffies(500));
-			break;
 		}
 	}
 	rqstp->rq_page_end = &rqstp->rq_pages[pages];
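
Note that a partial attempt leaves the pages it did allocate in
rq_pages[], so each retry of alloc_pages_bulk_array() only has to fill
the remaining holes, and the cost stays at one IRQ disable/enable per
contiguous hole as described above.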

-- 
Mel Gorman
SUSE Labs
