Date:   Wed, 15 Jun 2022 10:15:04 +0800
From:   Edward Wu <edwardwu@...ltek.com>
To:     Andrew Morton <akpm@...ux-foundation.org>, <linux-mm@...ck.org>,
        <linux-kernel@...r.kernel.org>
CC:     <surenb@...gle.com>, <minchan@...gle.com>, <edwardwu@...ltek.com>
Subject: [PATCH] mm: cma: sync everything after EBUSY

File-backed memory in a CMA area can be subject to long-term pinning.

From Minchan Kim's debug commit 151e084af494 ("mm: page_alloc:
dump migrate-failed pages only at -EBUSY") we know the pinned pages
come from buffer_head, the ext4 journal, and other FS metadata.

Sync everything after an -EBUSY failure; this can unpin most
file-system pages and raise the success rate of the next retry.

Signed-off-by: Edward Wu <edwardwu@...ltek.com>
---
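Note (not part of the commit message): below is a minimal, standalone
sketch of the deferred-sync pattern this patch uses, assuming built-in
code (ksys_sync() is not exported to modules); the demo_* names are
illustrative only, not part of the patch.

#include <linux/slab.h>
#include <linux/syscalls.h>
#include <linux/workqueue.h>

/* The work item owns its own allocation and frees itself after
 * running, so the scheduling side never has to wait on it. */
static void demo_sync_work(struct work_struct *work)
{
	ksys_sync();	/* in-kernel sync(2): write back dirty FS data */
	kfree(work);
}

static void demo_schedule_sync(void)
{
	struct work_struct *work;

	/* GFP_ATOMIC keeps this safe from non-sleeping contexts */
	work = kmalloc(sizeof(*work), GFP_ATOMIC);
	if (work) {
		INIT_WORK(work, demo_sync_work);
		schedule_work(work);	/* runs later in a kworker */
	}
}

Deferring the sync to the system workqueue keeps the allocation path
from blocking on a full file-system sync; the retry loop then benefits
from whatever writeback has completed by the time it runs again.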
 mm/cma.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/mm/cma.c b/mm/cma.c
index eaa4b5c920a2..eefd725064e1 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -31,6 +31,7 @@
 #include <linux/highmem.h>
 #include <linux/io.h>
 #include <linux/kmemleak.h>
+#include <linux/syscalls.h>
 #include <trace/events/cma.h>
 
 #include "cma.h"
@@ -410,6 +411,27 @@ static void cma_debug_show_areas(struct cma *cma)
 static inline void cma_debug_show_areas(struct cma *cma) { }
 #endif
 
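+/* Sync all file systems to write back dirty, pinned file-backed
+ * pages; runs from a workqueue and frees its own work item. */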
+static void cma_sync_work(struct work_struct *work)
+{
+	ksys_sync();
+	kfree(work);
+	pr_debug("%s(): EBUSY Sync complete\n", __func__);
+}
+
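+/* Kick off one asynchronous sync after an -EBUSY allocation failure. */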
+static void cma_ebusy_sync_pinned_pages(void)
+{
+	struct work_struct *work;
+
+	work = kmalloc(sizeof(*work), GFP_ATOMIC);
+	if (work) {
+		INIT_WORK(work, cma_sync_work);
+		schedule_work(work);
+	}
+}
+
 /**
  * cma_alloc() - allocate pages from contiguous area
  * @cma:   Contiguous memory region for which the allocation is performed.
@@ -430,6 +452,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 	unsigned long i;
 	struct page *page = NULL;
 	int ret = -ENOMEM;
+	bool sys_synchronized = false;
 
 	if (!cma || !cma->count || !cma->bitmap)
 		goto out;
@@ -480,6 +503,12 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 		if (ret != -EBUSY)
 			break;
 
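+		/* One-shot: try a global sync to unpin file-backed pages */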
+		if (!sys_synchronized) {
+			sys_synchronized = true;
+			cma_ebusy_sync_pinned_pages();
+		}
+
 		pr_debug("%s(): memory range at %p is busy, retrying\n",
 			 __func__, pfn_to_page(pfn));
 
-- 
2.17.1
