Message-ID: <Y1eW09v7fgy8nu2R@google.com>
Date:   Tue, 25 Oct 2022 16:57:07 +0900
From:   Sergey Senozhatsky <senozhatsky@...omium.org>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Sergey Senozhatsky <senozhatsky@...omium.org>,
        Minchan Kim <minchan@...nel.org>,
        Nitin Gupta <ngupta@...are.org>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: [PATCH 0/6] zsmalloc/zram: configurable zspage size

On (22/10/25 13:30), Sergey Senozhatsky wrote:
> On (22/10/25 01:12), Sergey Senozhatsky wrote:
> > Sergey Senozhatsky (6):
> >   zsmalloc: turn zspage order into runtime variable
> >   zsmalloc/zram: pass zspage order to zs_create_pool()
> >   zram: add pool_page_order device attribute
> >   Documentation: document zram pool_page_order attribute
> >   zsmalloc: break out of loop when found perfect zspage order
> >   zsmalloc: make sure we select best zspage size
> 
> Andrew, I want to replace the last 2 patches in the series: I think
> we can drop the `usedpc` calculations and instead optimize only for the
> `waste` value. Would you prefer me to resend the entire series instead?

Andrew, let's do it another way: let's drop the last patch from the
series, but only the last one. That patch was a last-minute addition to
the series and I have not fully studied its impact yet. From
preliminary research I can say that it improves zsmalloc memory usage
only for order 4 zspages and has no statistically significant impact
on order 2 or order 3 zspages.
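
For illustration, here is a minimal sketch of my reading of "optimize
only for waste" (not the actual patch); max_pages_per_zspage stands in
for the runtime limit this series introduces:

	static int get_pages_per_zspage(int class_size)
	{
		int i, pages = 1;
		int min_waste = INT_MAX;

		for (i = 1; i <= max_pages_per_zspage; i++) {
			/* bytes left over after packing class_size objects */
			int waste = (i * PAGE_SIZE) % class_size;

			if (waste < min_waste) {
				min_waste = waste;
				pages = i;
			}

			/* a perfect fit cannot be improved upon */
			if (waste == 0)
				break;
		}

		return pages;
	}

The early break on waste == 0 corresponds to patch 5 in the series
("zsmalloc: break out of loop when found perfect zspage order").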

Synthetic test, base get_pages_per_zspage() vs 'waste'-optimized
get_pages_per_zspage(), order 4 zspages:

x zram-order-4-memused-base
+ zram-order-4-memused-patched
+----------------------------------------------------------------------------+
|+               +        +  +                               x xx           x|
|     |___________A_______M____|                           |____M_A______|   |
+----------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x   4 6.3960678e+08 6.3974605e+08 6.3962726e+08 6.3965082e+08     64101.637
+   4 6.3902925e+08 6.3929958e+08 6.3926682e+08 6.3919514e+08     120652.52
Difference at 95.0% confidence
	-455680 +/- 167159
	-0.0712389% +/- 0.0261329%
	(Student's t, pooled s = 96607.6)
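
As a sanity check, the interval above is the standard pooled-variance
Student's t construction (N = 4 runs per sample, 6 degrees of freedom),
which reproduces the reported numbers:

	\Delta = \bar{x}_{+} - \bar{x}_{x}
	       = 6.3919514 \times 10^8 - 6.3965082 \times 10^8 \approx -455680
	s_p = \sqrt{(s_x^2 + s_+^2)/2} \approx 96607.6
	\Delta \pm t_{0.975,\,6} \, s_p \sqrt{\tfrac{1}{4} + \tfrac{1}{4}}
	       \approx -455680 \pm 2.447 \times 96607.6 \times 0.707
	       \approx -455680 \pm 167159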


If I gain enough confidence in that patch, I will submit it separately,
with a proper commit message and clear justification.
