Message-ID: <Yd7VZId4IlKd4VpC@casper.infradead.org>
Date: Wed, 12 Jan 2022 13:19:32 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Charan Teja Kalla <quic_charante@...cinc.com>
Cc: Mark Hemment <markhemm@...glemail.com>, hughd@...gle.com,
Andrew Morton <akpm@...ux-foundation.org>, vbabka@...e.cz,
rientjes@...gle.com, mhocko@...e.com, surenb@...gle.com,
shakeelb@...gle.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Charan Teja Reddy <charante@...eaurora.org>
Subject: Re: [PATCH v3 RESEND] mm: shmem: implement
POSIX_FADV_[WILL|DONT]NEED for shmem
On Wed, Jan 12, 2022 at 01:51:55PM +0530, Charan Teja Kalla wrote:
> >>> +	rcu_read_lock();
> >>> +	xas_for_each(&xas, page, end) {
> >>> +		if (!xa_is_value(page))
> >>> +			continue;
> >>> +		xas_pause(&xas);
> >>> +		rcu_read_unlock();
> >>> +
> >>> +		page = shmem_read_mapping_page(mapping, xas.xa_index);
> >>> +		if (!IS_ERR(page))
> >>> +			put_page(page);
> >>> +
> >>> +		rcu_read_lock();
> >>> +		if (need_resched()) {
> >>> +			xas_pause(&xas);
> >>> +			cond_resched_rcu();
> >>> +		}
> >>> +	}
> >>> +	rcu_read_unlock();
> > Even the xarray documentation says that: If most entries found during a
> > walk require you to call xas_pause(), the xa_for_each() iterator may be
> > more appropriate.
Yes. This should obviously be an xa_for_each() loop.
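
A minimal sketch of what that conversion could look like, assuming the
same 'mapping', 'start' and 'end' variables as in your patch (untested,
just to show the shape):

	unsigned long index;
	struct page *page;

	/*
	 * Untested sketch.  xa_for_each_range() takes and drops the RCU
	 * read lock around each lookup internally, so the manual
	 * rcu_read_lock()/xas_pause() dance goes away and the loop body
	 * is free to sleep in shmem_read_mapping_page().
	 */
	xa_for_each_range(&mapping->i_pages, index, page, start, end) {
		if (!xa_is_value(page))		/* only swap entries matter */
			continue;
		page = shmem_read_mapping_page(mapping, index);
		if (!IS_ERR(page))
			put_page(page);
	}
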
> > Since every value entry found in the xarray requires me to do the
> > xas_pause(), I do agree that xa_for_each() is the appropriate call here.
> > Will switch to this in the next spin. Waiting for further review
> > comments on this patch.
>
> I also found this in the documentation:
> xa_for_each() will spin if it hits a retry entry; if you intend to see
> retry entries, you should use the xas_for_each() iterator instead.
>
> Since retry entries are expected, shouldn't I be using xas_for_each()
> with the corrections you pointed out?
No. You aren't handling retry entries at all; you clearly don't
expect to see them. Just let the XArray code handle them itself.
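
For reference only, explicit retry-entry handling with xas_for_each()
would look roughly like the sketch below (same variables as your patch,
untested); since nothing in the patch does this, that is one more sign
xa_for_each() is the right iterator here:

	rcu_read_lock();
	xas_for_each(&xas, page, end) {
		/* xas_retry() spots a retry entry and restarts the walk here */
		if (xas_retry(&xas, page))
			continue;
		if (!xa_is_value(page))
			continue;
		xas_pause(&xas);
		rcu_read_unlock();

		page = shmem_read_mapping_page(mapping, xas.xa_index);
		if (!IS_ERR(page))
			put_page(page);

		rcu_read_lock();
	}
	rcu_read_unlock();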