Message-ID: <YmMPpaseLn4i6MYk@google.com>
Date:   Fri, 22 Apr 2022 13:27:17 -0700
From:   Minchan Kim <minchan@...nel.org>
To:     Alexey Romanov <avromanov@...rdevices.ru>,
        Sergey Senozhatsky <senozhatsky@...omium.org>
Cc:     ngupta@...are.org, senozhatsky@...omium.org,
        linux-block@...r.kernel.org, axboe@...omium.org,
        kernel@...rdevices.ru, linux-kernel@...r.kernel.org,
        mnitenko@...il.com, Dmitry Rokosov <ddrokosov@...rdevices.ru>
Subject: Re: [PATCH v1] zram: don't retry compress incompressible page

On Fri, Apr 22, 2022 at 02:59:59PM +0300, Alexey Romanov wrote:
> It doesn't make sense to retry compressing an incompressible
> page (comp_len == PAGE_SIZE) in the zsmalloc slowpath, because we
> will be storing it uncompressed anyway. We can avoid wasting time
> on another compression attempt. It is enough to take the lock
> (zcomp_stream_get) and execute the code below.

Totally makes sense. However, I'd like to discuss removing the double
compression logic entirely.

Cc'ing Sergey to get his opinion.

[da9556a2367c, zram: user per-cpu compression streams]

The second trial allocation (done after putting the per-cpu stream)
has been used to prevent a regression in allocation failures. However,
it makes maintenance harder without significant benefit.
(I gathered some data from my device: writestall was just 38 over
10 days even though swap was very heavy - pswpout 164831211.)

Moreover, even those 38 attempts don't guarantee that the second trial
allocation succeeded, because it's timing-dependent, and
__GFP_DIRECT_RECLAIM never helps in a reclaim context anyway.

I'd like to remove the double compression logic and make it simple.
What do you think?
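
For illustration, a rough sketch (untested, just to show the shape of
the change; names follow the current zram_drv.c) of how the handle
allocation in __zram_bvec_write() could look with the retry removed:

	/*
	 * Sketch only: single allocation attempt, no compress_again
	 * loop. If the non-sleeping allocation fails, fail the write
	 * instead of putting the per-cpu stream and retrying with
	 * direct reclaim.
	 */
	handle = zs_malloc(zram->mem_pool, comp_len,
			   __GFP_KSWAPD_RECLAIM |
			   __GFP_NOWARN |
			   __GFP_HIGHMEM |
			   __GFP_MOVABLE);
	if (unlikely(!handle)) {
		zcomp_stream_put(zram->comp);
		return -ENOMEM;
	}

The writestall counter and the GFP_NOIO slowpath would go away along
with it.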

> 
> Signed-off-by: Alexey Romanov <avromanov@...rdevices.ru>
> Signed-off-by: Dmitry Rokosov <ddrokosov@...rdevices.ru>
> ---
>  drivers/block/zram/zram_drv.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index cb253d80d72b..bb9dd8b64176 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -1413,9 +1413,20 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
>  		handle = zs_malloc(zram->mem_pool, comp_len,
>  				GFP_NOIO | __GFP_HIGHMEM |
>  				__GFP_MOVABLE);
> -		if (handle)
> +		if (!handle)
> +			return -ENOMEM;
> +
> +		if (comp_len != PAGE_SIZE)
>  			goto compress_again;
> -		return -ENOMEM;
> +
> +		/*
> +		 * If the page is incompressible, we still need to take the
> +		 * lock and execute the code below. The zcomp_stream_get()
> +		 * call disables cpu hotplug and grabs the zstrm buffer back,
> +		 * so that the dereference of the zstrm variable below
> +		 * remains valid.
> +		 */
> +		zstrm = zcomp_stream_get(zram->comp);
>  	}
>  
>  	alloced_pages = zs_get_total_pages(zram->mem_pool);
> -- 
> 2.30.1
> 
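(For context on why the stream has to be grabbed back: in the current
code the slowpath allocation runs only after the per-cpu stream has
been put. Roughly, paraphrased from __zram_bvec_write() rather than
copied verbatim:

	compress_again:
		zstrm = zcomp_stream_get(zram->comp);	/* pin this cpu's stream */
		...
		ret = zcomp_compress(zstrm, src, &comp_len);
		...
		if (!handle) {
			/* can't sleep while holding the per-cpu stream */
			zcomp_stream_put(zram->comp);
			atomic64_inc(&zram->stats.writestall);
			handle = zs_malloc(zram->mem_pool, comp_len,
					   GFP_NOIO | __GFP_HIGHMEM |
					   __GFP_MOVABLE);
			...
		}

So once the GFP_NOIO allocation succeeds, zstrm no longer points at a
stream we own, and it must be re-acquired before it is dereferenced.)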
