Message-ID: <20230607141357.GA338934@cmpxchg.org>
Date:   Wed, 7 Jun 2023 10:13:57 -0400
From:   Johannes Weiner <hannes@...xchg.org>
To:     Yosry Ahmed <yosryahmed@...gle.com>
Cc:     Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Seth Jennings <sjenning@...hat.com>,
        Dan Streetman <ddstreet@...e.org>,
        Vitaly Wool <vitaly.wool@...sulko.com>,
        Nhat Pham <nphamcs@...il.com>,
        Domenico Cerasuolo <cerasuolodomenico@...il.com>,
        Yu Zhao <yuzhao@...gle.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: zswap: support exclusive loads

On Tue, May 30, 2023 at 09:02:51PM +0000, Yosry Ahmed wrote:
> @@ -46,6 +46,19 @@ config ZSWAP_DEFAULT_ON
>  	  The selection made here can be overridden by using the kernel
>  	  command line 'zswap.enabled=' option.
>  
> +config ZSWAP_EXCLUSIVE_LOADS
> +	bool "Invalidate zswap entries when pages are loaded"
> +	depends on ZSWAP
> +	help
> +	  If selected, when a page is loaded from zswap, the zswap entry is
> +	  invalidated at once, as opposed to leaving it in zswap until the
> +	  swap entry is freed.
> +
> +	  This avoids having two copies of the same page in memory
> +	  (compressed and uncompressed) after faulting in a page from zswap.
> +	  The cost is that if the page was never dirtied and needs to be
> +	  swapped out again, it will be re-compressed.
> +
>  choice
>  	prompt "Default compressor"
>  	depends on ZSWAP
> diff --git a/mm/frontswap.c b/mm/frontswap.c
> index 279e55b4ed87..e5d6825110f4 100644
> --- a/mm/frontswap.c
> +++ b/mm/frontswap.c
> @@ -216,8 +216,13 @@ int __frontswap_load(struct page *page)
>  
>  	/* Try loading from each implementation, until one succeeds. */
>  	ret = frontswap_ops->load(type, offset, page);
> -	if (ret == 0)
> +	if (ret == 0) {
>  		inc_frontswap_loads();
> +		if (frontswap_ops->exclusive_loads) {
> +			SetPageDirty(page);
> +			__frontswap_clear(sis, offset);
> +		}
> +	}
>  	return ret;

This would be a much more accessible feature (for distro kernels,
experimenting, and adapting to different workloads) if it were
runtime-switchable.

That should be possible, right? As long as frontswap and zswap are
coordinated, this can be done on a per-entry basis:

	bool exclusive = READ_ONCE(frontswap_ops->exclusive_loads);

	ret = frontswap_ops->load(type, offset, page, exclusive);
	if (ret == 0) {
		inc_frontswap_loads();
		if (exclusive) {
			SetPageDirty(page);
			__frontswap_clear(sis, offset);
		}
	}
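
For the knob itself, a writable module parameter on the zswap side
would probably do. Rough sketch only - the variable, the
"exclusive_loads" parameter name, and the zswap_decompress_to_page()
/ zswap_invalidate_entry() helpers below are made up for
illustration:

	/*
	 * Runtime toggle, defaulting to the new Kconfig selection.
	 * 0644 makes it writable at
	 * /sys/module/zswap/parameters/exclusive_loads.
	 */
	static bool zswap_exclusive_loads_enabled =
		IS_ENABLED(CONFIG_ZSWAP_EXCLUSIVE_LOADS);
	module_param_named(exclusive_loads, zswap_exclusive_loads_enabled,
			   bool, 0644);

	static int zswap_frontswap_load(unsigned type, pgoff_t offset,
					struct page *page, bool exclusive)
	{
		int ret = zswap_decompress_to_page(type, offset, page);

		/* the frontswap caller handles SetPageDirty() and
		 * __frontswap_clear(); we only drop our compressed copy */
		if (!ret && exclusive)
			zswap_invalidate_entry(type, offset);
		return ret;
	}

zswap would then keep frontswap_ops->exclusive_loads in sync with the
parameter (or make the ops field a pointer to it) so the READ_ONCE()
above picks up toggles immediately.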
