Message-ID: <ZNS9tqX9s7NbQq3c@lothringen>
Date: Thu, 10 Aug 2023 12:36:38 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: Marcelo Tosatti <mtosatti@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
Alexander Viro <viro@...iv.linux.org.uk>,
Christian Brauner <brauner@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
Christoph Hellwig <hch@....de>, Jens Axboe <axboe@...nel.dk>,
Dave Chinner <david@...morbit.com>,
Valentin Schneider <vschneid@...hat.com>,
Leonardo Bras <leobras@...hat.com>,
Yair Podemsky <ypodemsk@...hat.com>, P J P <ppandit@...hat.com>
Subject: Re: [PATCH] fs/buffer.c: disable per-CPU buffer_head cache for
isolated CPUs

On Fri, Aug 04, 2023 at 08:54:37PM -0300, Marcelo Tosatti wrote:
> > So what happens if they ever do I/O then? Like if they need to do
> > some prep work before entering an isolated critical section?
>
> Then instead of going through the per-CPU LRU buffer_head cache
> (__find_get_block), isolated CPUs will work as if their per-CPU
> cache is always empty, going through the slowpath
> (__find_get_block_slow). The algorithm is:
>
> /*
>  * Perform a pagecache lookup for the matching buffer. If it's there, refresh
>  * it in the LRU and mark it as accessed. If it is not present then return
>  * NULL
>  */
> struct buffer_head *
> __find_get_block(struct block_device *bdev, sector_t block, unsigned size)
> {
> 	struct buffer_head *bh = lookup_bh_lru(bdev, block, size);
>
> 	if (bh == NULL) {
> 		/* __find_get_block_slow will mark the page accessed */
> 		bh = __find_get_block_slow(bdev, block);
> 		if (bh)
> 			bh_lru_install(bh);
> 	} else
> 		touch_buffer(bh);
>
> 	return bh;
> }
> EXPORT_SYMBOL(__find_get_block);
>
> I think the performance difference between the per-CPU LRU cache
> and __find_get_block_slow was much more significant when the cache
> was introduced. Nowadays it's only 26ns (moreover, modern filesystems
> do not use buffer_heads).
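
So if I read this right, the bypass boils down to an early exit in both
the lookup and the install paths, something along these lines (just a
sketch of my understanding, assuming the cpu_is_isolated() helper from
<linux/sched/isolation.h>, not the exact diff):

static void bh_lru_install(struct buffer_head *bh)
{
	/* ... */
	check_irqs_on();
	bh_lru_lock();

	/* Never populate the per-CPU LRU of an isolated CPU. */
	if (cpu_is_isolated(smp_processor_id())) {
		bh_lru_unlock();
		return;
	}

	/* ... existing eviction and install ... */
}

static struct buffer_head *
lookup_bh_lru(struct block_device *bdev, sector_t block, unsigned size)
{
	/* ... */
	check_irqs_on();
	bh_lru_lock();

	/* Treat the per-CPU cache as always empty on isolated CPUs. */
	if (cpu_is_isolated(smp_processor_id())) {
		bh_lru_unlock();
		return NULL;
	}

	/* ... existing LRU scan ... */
}

That way __find_get_block() transparently falls back to
__find_get_block_slow() on isolated CPUs, while housekeeping CPUs keep
using the fast path.
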
Sounds good then!

Acked-by: Frederic Weisbecker <frederic@...nel.org>

Thanks!