Message-ID: <1810516.8hzESeGDPO@suse>
Date: Thu, 29 Jun 2023 17:04:26 +0200
From: "Fabio M. De Francesco" <fmdefrancesco@...il.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Sumitra Sharma <sumitraartsy@...il.com>,
Hans de Goede <hdegoede@...hat.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
Ira Weiny <ira.weiny@...el.com>, Deepak R Varma <drv@...lo.com>
Subject: Re: [PATCH] fs/vboxsf: Replace kmap() with kmap_local_{page, folio}()
On Thursday, June 29, 2023 05:16:04 CEST Matthew Wilcox wrote:
> On Thu, Jun 29, 2023 at 12:23:54AM +0200, Fabio M. De Francesco wrote:
> > > - buf = kmap(page);
> > > + do {
> >
> > Please let me understand why you are calling vboxsf_read() in a loop, a
> > PAGE_SIZE at a time.
>
> Because kmap_local_folio() can only (guarantee to) map one page at a
> time.
Yes, one page at a time. That part was meant to introduce the _main_ question,
which is the one you answered below (i.e., since the current code maps one page
at a time with no loops, do we need to handle folios spanning more than a
single page?).
> Also vboxsf_read() is only tested with a single page at a time.
>
> > If I understand the current code, it reads a single page at offset zero of
> > a folio and then memset()s with zeros from &buf[nread] up to the end of
> > the page. This function then seems to assume that the folio doesn't need
> > to be read until "offset < folio_size(folio)" becomes false.
> >
> > Does that imply that the folio is always one page in size? I'm surely
> > missing some basics...
>
> vboxsf does not yet claim to support large folios, so every folio that
> it sees will be only a single page in size.
> Hopefully at some point
> that will change. Again, somebody would need to test that. In the
> meantime, if someone is going to the trouble of switching over to using
> the folio API, let's actually include support for large folios.
"[...] at some point that will change.", referring to larger folios spanning
multiple pages, is the answer that justifies the loop. I had no way of knowing
about this plan. Thanks for explaining that there is a plan toward that goal.
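For the archives, the shape of the loop we are discussing is roughly the
following. This is only my sketch, not the actual patch: the vboxsf_read()
argument list is abbreviated/hypothetical, and error handling is reduced to
the minimum. The point is that kmap_local_folio() guarantees a mapping of a
single page, so a possibly-large folio must be walked in PAGE_SIZE strides:

```c
static int vboxsf_read_folio(struct file *file, struct folio *folio)
{
	loff_t pos = folio_pos(folio);
	size_t offset = 0;
	int err;

	do {
		/* kmap_local_folio() maps exactly one page of the folio,
		 * starting at 'offset'; hence the PAGE_SIZE stride. */
		u8 *buf = kmap_local_folio(folio, offset);
		u32 nread = PAGE_SIZE;

		err = vboxsf_read(sf_handle, pos, &nread, buf);
		if (!err)	/* zero-fill the tail of a short read */
			memset(buf + nread, 0, PAGE_SIZE - nread);
		kunmap_local(buf);
		if (err)
			break;
		offset += PAGE_SIZE;
		pos += PAGE_SIZE;
	} while (offset < folio_size(folio));

	if (!err) {
		flush_dcache_folio(folio);
		folio_mark_uptodate(folio);
	}
	folio_unlock(folio);
	return err;
}
```

On a filesystem that doesn't claim large folio support, folio_size() is
always PAGE_SIZE and the loop body runs exactly once, so this is equivalent
to the old kmap(page) code; the loop only matters once large folios arrive.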
I think that Sumitra can take on the task of reusing your patch to
vboxsf_read_folio() and then properly testing it with VirtualBox.
In contrast, the much larger effort to implement vboxsf_readahead(), making an
actual async call with deferred setting of the uptodate flag, will surely
require a considerable amount of time, and whoever takes it on will need your
guidance.
You said that you are ready to act as a consultant, but I'm not sure whether
Sumitra would be allowed to spend a large part of her time on a task that is
out of scope with respect to her internship.
If so, I have no objection. If not, I'm pretty sure that someone else can set
aside enough time to address this large task ;-)
>
> > > - kunmap(page);
> > > - unlock_page(page);
> > > + if (!err) {
> > > + flush_dcache_folio(folio);
> > > + folio_mark_uptodate(folio);
> > > + }
> > > + folio_unlock(folio);
> >
> > Shouldn't we call folio_lock() to lock the folio in order to be able to
> > unlock it with folio_unlock()?
> >
> > If so, I can't find either a folio_lock() or a lock_page() anywhere in
> > this filesystem.
> >
> > Again, sorry for not understanding; can you please explain it?
>
> Ira gave the minimal explanation, but a slightly fuller explanation is
> that the folio is locked while it is being fetched from backing store.
This explains why I could not easily find the call to lock it.
> That prevents both a second thread from reading from it while another
> thread is bringing it uptodate, and two threads trying to bring it
> uptodate at the same time.
>
> Most filesystems have an asynchronous read_folio, so you don't see the
> folio_unlock() in the read_folio() function; instead it's in the I/O
> completion path. vboxsf is synchronous.
And this explains the difference in implementation between synchronous and
asynchronous reads.
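To restate the contrast in code (a schematic sketch of mine, not real driver
code; the helper names fill_folio_from_backing_store() and
async_read_completion() are hypothetical):

```c
/* Synchronous read_folio (the vboxsf case): the folio arrives locked,
 * and the same function unlocks it before returning. */
static int sync_read_folio(struct file *file, struct folio *folio)
{
	int err = fill_folio_from_backing_store(folio);	/* hypothetical */

	if (!err) {
		flush_dcache_folio(folio);
		folio_mark_uptodate(folio);
	}
	folio_unlock(folio);
	return err;
}

/* Asynchronous read_folio: the function returns with the folio still
 * locked after submitting the I/O; the completion handler, running
 * later, marks it uptodate and does the unlock. */
static void async_read_completion(struct folio *folio, int err)	/* hypothetical */
{
	if (!err)
		folio_mark_uptodate(folio);
	folio_unlock(folio);
}
```

In both cases the lock is taken by the caller (the page cache code) before
->read_folio() is invoked, which is why no folio_lock() appears anywhere in
the filesystem itself.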
Again thanks,
Fabio