Date:   Thu, 8 Jun 2023 12:52:50 -0400
From:   Johannes Weiner <hannes@...xchg.org>
To:     Domenico Cerasuolo <cerasuolodomenico@...il.com>
Cc:     vitaly.wool@...sulko.com, minchan@...nel.org,
        senozhatsky@...omium.org, yosryahmed@...gle.com,
        linux-mm@...ck.org, ddstreet@...e.org, sjenning@...hat.com,
        nphamcs@...il.com, akpm@...ux-foundation.org,
        linux-kernel@...r.kernel.org, kernel-team@...a.com
Subject: Re: [RFC PATCH v2 1/7] mm: zswap: add pool shrinking mechanism

On Tue, Jun 06, 2023 at 04:56:05PM +0200, Domenico Cerasuolo wrote:
> @@ -584,14 +601,70 @@ static struct zswap_pool *zswap_pool_find_get(char *type, char *compressor)
>  	return NULL;
>  }
>  
> +static int zswap_shrink(struct zswap_pool *pool)
> +{
> +	struct zswap_entry *lru_entry, *tree_entry = NULL;
> +	struct zswap_header *zhdr;
> +	struct zswap_tree *tree;
> +	int swpoffset;
> +	int ret;
> +
> +	/* get a reclaimable entry from LRU */
> +	spin_lock(&pool->lru_lock);
> +	if (list_empty(&pool->lru)) {
> +		spin_unlock(&pool->lru_lock);
> +		return -EINVAL;
> +	}
> +	lru_entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
> +	list_del_init(&lru_entry->lru);
> +	zhdr = zpool_map_handle(pool->zpool, lru_entry->handle, ZPOOL_MM_RO);
> +	tree = zswap_trees[swp_type(zhdr->swpentry)];
> +	zpool_unmap_handle(pool->zpool, lru_entry->handle);
> +	/*
> +	 * Once the pool lock is dropped, the lru_entry might get freed. The
> +	 * swpoffset is copied to the stack, and lru_entry isn't deref'd again
> +	 * until the entry is verified to still be alive in the tree.
> +	 */
> +	swpoffset = swp_offset(zhdr->swpentry);
> +	spin_unlock(&pool->lru_lock);
> +
> +	/* hold a reference from tree so it won't be freed during writeback */
> +	spin_lock(&tree->lock);
> +	tree_entry = zswap_entry_find_get(&tree->rbroot, swpoffset);
> +	if (tree_entry != lru_entry) {
> +		if (tree_entry)
> +			zswap_entry_put(tree, tree_entry);
> +		spin_unlock(&tree->lock);
> +		return -EAGAIN;
> +	}
> +	spin_unlock(&tree->lock);
> +
> +	ret = zswap_writeback_entry(pool->zpool, lru_entry->handle);
> +
> +	spin_lock(&tree->lock);
> +	if (ret) {
> +		spin_lock(&pool->lru_lock);
> +		list_move(&lru_entry->lru, &pool->lru);
> +		spin_unlock(&pool->lru_lock);
> +	}
> +	zswap_entry_put(tree, tree_entry);

On re-reading this, I find the lru_entry vs tree_entry distinction
unnecessarily complicated. Once it's known that the thing coming off
the LRU is the same thing as in the tree, there is only "the entry".

How about 'entry' and 'tree_entry', and after validation use 'entry'
throughout the rest of the function?
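
Rough, untested sketch of the naming I mean -- same logic as your hunk,
just the rename (plus, incidentally, copying the swpoffset while the
header is still mapped); the tail of the function is elided as in the
quote above:

static int zswap_shrink(struct zswap_pool *pool)
{
	struct zswap_entry *entry, *tree_entry = NULL;
	struct zswap_header *zhdr;
	struct zswap_tree *tree;
	int swpoffset;
	int ret;

	/* Get a reclaimable entry off the LRU */
	spin_lock(&pool->lru_lock);
	if (list_empty(&pool->lru)) {
		spin_unlock(&pool->lru_lock);
		return -EINVAL;
	}
	entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
	list_del_init(&entry->lru);
	zhdr = zpool_map_handle(pool->zpool, entry->handle, ZPOOL_MM_RO);
	tree = zswap_trees[swp_type(zhdr->swpentry)];
	/*
	 * Once the lru lock is dropped, the entry might get freed. Copy
	 * the swpoffset to the stack; the entry isn't deref'd again
	 * until the tree lookup below confirms it's still alive.
	 */
	swpoffset = swp_offset(zhdr->swpentry);
	zpool_unmap_handle(pool->zpool, entry->handle);
	spin_unlock(&pool->lru_lock);

	/* Hold a tree ref so the entry can't be freed during writeback */
	spin_lock(&tree->lock);
	tree_entry = zswap_entry_find_get(&tree->rbroot, swpoffset);
	if (tree_entry != entry) {
		if (tree_entry)
			zswap_entry_put(tree, tree_entry);
		spin_unlock(&tree->lock);
		return -EAGAIN;
	}
	spin_unlock(&tree->lock);

	/* Validated: from here on it's just "entry" */
	ret = zswap_writeback_entry(pool->zpool, entry->handle);

	spin_lock(&tree->lock);
	if (ret) {
		/* Writeback failed, put the entry back on the LRU */
		spin_lock(&pool->lru_lock);
		list_move(&entry->lru, &pool->lru);
		spin_unlock(&pool->lru_lock);
	}
	zswap_entry_put(tree, entry);
	/* ... rest as in your hunk ... */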
