Message-ID: <CAPcyv4hcF-tbv4OBZ4NAs00PmRAH6mE3nzSpe5=AwNORyHnLWw@mail.gmail.com>
Date: Sat, 6 Oct 2018 23:02:50 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: linux-fsdevel <linux-fsdevel@...r.kernel.org>
Cc: stable <stable@...r.kernel.org>, Jan Kara <jack@...e.cz>,
zwisler@...nel.org, Matthew Wilcox <willy@...radead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-nvdimm <linux-nvdimm@...ts.01.org>
Subject: Re: [PATCH] filesystem-dax: Fix dax_layout_busy_page() livelock
On Sat, Oct 6, 2018 at 11:14 AM Dan Williams <dan.j.williams@...el.com> wrote:
>
> In the presence of multi-order entries the typical
> pagevec_lookup_entries() pattern may loop forever:
>
> while (index < end && pagevec_lookup_entries(&pvec, mapping, index,
> min(end - index, (pgoff_t)PAGEVEC_SIZE),
> indices)) {
> ...
> for (i = 0; i < pagevec_count(&pvec); i++) {
> index = indices[i];
> ...
> }
> index++; /* BUG */
> }
>
> The loop updates 'index' for each index found and then increments to the
> next possible page to continue the lookup. However, if the last entry in
> the pagevec is multi-order then the next possible page index is more
> than 1 page away. Fix this locally for the filesystem-dax case by
> checking for dax-multi-order entries. Going forward new users of
> multi-order entries need to be similarly careful, or we need a generic
> way to report the page increment in the radix iterator.
>
> Fixes: 5fac7408d828 ("mm, fs, dax: handle layout changes to pinned dax...")
> Cc: <stable@...r.kernel.org>
> Cc: Jan Kara <jack@...e.cz>
> Cc: Ross Zwisler <zwisler@...nel.org>
> Cc: Matthew Wilcox <willy@...radead.org>
> Signed-off-by: Dan Williams <dan.j.williams@...el.com>
> ---
> fs/dax.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/fs/dax.c b/fs/dax.c
> index 4becbf168b7f..c1472eede1f7 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -666,6 +666,8 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
> while (index < end && pagevec_lookup_entries(&pvec, mapping, index,
> min(end - index, (pgoff_t)PAGEVEC_SIZE),
> indices)) {
> + pgoff_t nr_pages = 1;
> +
> for (i = 0; i < pagevec_count(&pvec); i++) {
> struct page *pvec_ent = pvec.pages[i];
> void *entry;
> @@ -680,8 +682,11 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
>
> xa_lock_irq(&mapping->i_pages);
> entry = get_unlocked_mapping_entry(mapping, index, NULL);
> - if (entry)
> + if (entry) {
> page = dax_busy_page(entry);
> + /* account for multi-order entries */
> + nr_pages = 1UL << dax_radix_order(entry);
> + }
Thinking about this a bit further: the next index will be at least
nr_pages away, but we don't want to accidentally skip over entries, as
this patch might do. So nr_pages should only be adjusted by the entry
size if the entry is the last one in the pagevec.