Message-ID: <plt72kbiee2sz32mqslvhmmlny6dqfeccnf2d325cus45qpo3t@m6t563ijkvr5>
Date: Thu, 10 Apr 2025 16:38:35 +0200
From: Jan Kara <jack@...e.cz>
To: Luis Chamberlain <mcgrof@...nel.org>
Cc: brauner@...nel.org, jack@...e.cz, tytso@....edu, 
	adilger.kernel@...ger.ca, linux-ext4@...r.kernel.org, riel@...riel.com, dave@...olabs.net, 
	willy@...radead.org, hannes@...xchg.org, oliver.sang@...el.com, david@...hat.com, 
	axboe@...nel.dk, hare@...e.de, david@...morbit.com, djwong@...nel.org, 
	ritesh.list@...il.com, linux-fsdevel@...r.kernel.org, linux-block@...r.kernel.org, 
	linux-mm@...ck.org, gost.dev@...sung.com, p.raghav@...sung.com, da.gomez@...sung.com
Subject: Re: [PATCH v2 2/8] fs/buffer: try to use folio lock for pagecache
 lookups

On Wed 09-04-25 18:49:39, Luis Chamberlain wrote:
> From: Davidlohr Bueso <dave@...olabs.net>
> 
> Callers of __find_get_block() may or may not allow for blocking
> semantics, and it is currently assumed that they do not. Lay out
> two paths based on this. Ultimately the i_private_lock scheme will
> be used as a fallback in non-blocking contexts; otherwise, always
> take the folio lock instead. The suggested trylock idea is
> implemented, thereby potentially reducing i_private_lock contention,
> in addition to later enabling migration support for large folios
> and noref migration.
> 
> No change in semantics. All lookup users are non-blocking.
> 
> Signed-off-by: Davidlohr Bueso <dave@...olabs.net>
> Signed-off-by: Luis Chamberlain <mcgrof@...nel.org>

...

> @@ -204,7 +195,19 @@ __find_get_block_slow(struct block_device *bdev, sector_t block)
>  	if (IS_ERR(folio))
>  		goto out;
>  
> -	spin_lock(&bd_mapping->i_private_lock);
> +	/*
> +	 * Folio lock protects the buffers. Callers that cannot block
> +	 * will fall back to serializing vs try_to_free_buffers() via
> +	 * the i_private_lock.
> +	 */
> +	if (!folio_trylock(folio)) {
> +		if (atomic) {
> +			spin_lock(&bd_mapping->i_private_lock);
> +			folio_locked = false;
> +		} else
> +			folio_lock(folio);
> +	}

Ewww, this is going to be a pain. You will mostly use folio_trylock() for
protecting the lookup, except when some insane workload / fuzzer manages to
trigger the other path, which will lead to completely unreproducible bugs...
I'd rather do:

	if (atomic) {
		spin_lock(&bd_mapping->i_private_lock);
		folio_locked = false;
	} else {
		folio_lock(folio);
	}

I'd actually love to do something like:

	if (atomic) {
		if (!folio_trylock(folio))
			bail...
	} else {
		folio_lock(folio);
	}

but that may be just too radical at this point and would need some serious
testing of how frequent the trylock failures are. No point in blocking this
series on it. So just go with the deterministic use of i_private_lock for
atomic users for now.
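
For reference, the "bail" above would just mean reporting a cache miss so
the caller falls back to reading the block. Roughly (a sketch only,
untested; folio_put() and the out label are from the existing function):

	if (atomic) {
		if (!folio_trylock(folio)) {
			/* hypothetical: treat lock contention as a miss */
			folio_put(folio);
			goto out;	/* report a lookup miss */
		}
	} else {
		folio_lock(folio);
	}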

								Honza

> +
>  	head = folio_buffers(folio);
>  	if (!head)
>  		goto out_unlock;
> @@ -236,7 +239,10 @@ __find_get_block_slow(struct block_device *bdev, sector_t block)
>  		       1 << blkbits);
>  	}
>  out_unlock:
> -	spin_unlock(&bd_mapping->i_private_lock);
> +	if (folio_locked)
> +		folio_unlock(folio);
> +	else
> +		spin_unlock(&bd_mapping->i_private_lock);
>  	folio_put(folio);
>  out:
>  	return ret;
> @@ -1388,14 +1394,15 @@ lookup_bh_lru(struct block_device *bdev, sector_t block, unsigned size)
>   * it in the LRU and mark it as accessed.  If it is not present then return
>   * NULL
>   */
> -struct buffer_head *
> -__find_get_block(struct block_device *bdev, sector_t block, unsigned size)
> +static struct buffer_head *
> +find_get_block_common(struct block_device *bdev, sector_t block,
> +			unsigned size, bool atomic)
>  {
>  	struct buffer_head *bh = lookup_bh_lru(bdev, block, size);
>  
>  	if (bh == NULL) {
>  		/* __find_get_block_slow will mark the page accessed */
> -		bh = __find_get_block_slow(bdev, block);
> +		bh = __find_get_block_slow(bdev, block, atomic);
>  		if (bh)
>  			bh_lru_install(bh);
>  	} else
> @@ -1403,6 +1410,12 @@ __find_get_block(struct block_device *bdev, sector_t block, unsigned size)
>  
>  	return bh;
>  }
> +
> +struct buffer_head *
> +__find_get_block(struct block_device *bdev, sector_t block, unsigned size)
> +{
> +	return find_get_block_common(bdev, block, size, true);
> +}
>  EXPORT_SYMBOL(__find_get_block);
>  
>  /**
> -- 
> 2.47.2
> 
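Side note: I assume a later patch in this series adds the blocking
counterpart on top of find_get_block_common(). A hypothetical sketch (the
name __find_get_block_nonatomic is my guess, not taken from this patch):

	struct buffer_head *
	__find_get_block_nonatomic(struct block_device *bdev, sector_t block,
				   unsigned size)
	{
		/* may sleep on the folio lock, unlike __find_get_block() */
		return find_get_block_common(bdev, block, size, false);
	}
	EXPORT_SYMBOL(__find_get_block_nonatomic);
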
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
