Date:   Sun, 18 Oct 2020 16:01:46 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     Geert Uytterhoeven <geert@...ux-m68k.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Mike Rapoport <rppt@...nel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] lib/test_free_pages: Add basic progress indicators

On Sun, Oct 18, 2020 at 04:39:27PM +0200, Geert Uytterhoeven wrote:
> Hi Matthew,
> 
> On Sun, Oct 18, 2020 at 4:25 PM Matthew Wilcox <willy@...radead.org> wrote:
> > On Sun, Oct 18, 2020 at 04:04:45PM +0200, Geert Uytterhoeven wrote:
> > > The test module to check that free_pages() does not leak memory does not
> > > provide any feedback whatsoever about its state or progress, but may take
> > > some time on slow machines.  Add the printing of messages upon starting
> > > each phase of the test, and upon completion.
> >
> > It's not supposed to take a long time.  Can you crank down that 1000 *
> 
> It took 1m11s on ARAnyM, running on an i7-8700K.
> Real hardware may even take longer.

71 seconds is clearly too long.  0.7 seconds would be fine, so 10 * 1000
would be appropriate, but then that's only 320MB, which might not be
enough to notice on a modern machine.

> > 1000 to something more appropriate?
> 
> What would be a suitable value? You do want to see it "leak gigabytes
> of memory and probably OOM your system" if something's wrong,
> so decreasing the value a lot may not be a good idea?
> 
> Regardless, if it OOMs, I think you do want to see that this happened
> while running this test.

How about scaling with the amount of memory on the machine?

This might cause problems on machines with terabytes of memory.
Maybe we should cap it at a terabyte?
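
Something like this, as an untested sketch on top of the diff below;
1UL << 28 assumes 4 KiB pages, i.e. one terabyte, and the / 8 matches
the order-3 (8-page) allocation done in each loop iteration:

	/* Cap at 1TB of RAM so huge machines don't spend hours here */
	unsigned long max = min(totalram, 1UL << 28) / 8;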

diff --git a/lib/test_free_pages.c b/lib/test_free_pages.c
index 074e76bd76b2..aa18fa52290a 100644
--- a/lib/test_free_pages.c
+++ b/lib/test_free_pages.c
@@ -9,11 +9,11 @@
 #include <linux/mm.h>
 #include <linux/module.h>
 
-static void test_free_pages(gfp_t gfp)
+static void test_free_pages(gfp_t gfp, unsigned long totalram)
 {
-	unsigned int i;
+	unsigned long i, max = totalram / 8;
 
-	for (i = 0; i < 1000 * 1000; i++) {
+	for (i = 0; i < max; i++) {
 		unsigned long addr = __get_free_pages(gfp, 3);
 		struct page *page = virt_to_page(addr);
 
@@ -26,8 +26,11 @@ static void test_free_pages(gfp_t gfp)
 
 static int m_in(void)
 {
-	test_free_pages(GFP_KERNEL);
-	test_free_pages(GFP_KERNEL | __GFP_COMP);
+	struct sysinfo si;
+
+	si_meminfo(&si);
+	test_free_pages(GFP_KERNEL, si.totalram);
+	test_free_pages(GFP_KERNEL | __GFP_COMP, si.totalram);
 
 	return 0;
 }
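
FWIW, to try either version, it should be enough to load the module
(the test runs from module init, assuming CONFIG_TEST_FREE_PAGES=m):

	modprobe test_free_pages

With your progress patch applied, the per-phase messages would then
show up in dmesg.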
