[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CANe_+UiX=MDSudBrfpboRoywWNnpyOwOMNETB6rjvm4cadqzTA@mail.gmail.com>
Date: Wed, 12 Jan 2022 11:34:58 +0000
From: Mark Hemment <markhemm@...glemail.com>
To: Charan Teja Kalla <quic_charante@...cinc.com>
Cc: Hugh Dickins <hughd@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>, vbabka@...e.cz,
rientjes@...gle.com, mhocko@...e.com,
Suren Baghdasaryan <surenb@...gle.com>,
Shakeel Butt <shakeelb@...gle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Charan Teja Reddy <charante@...eaurora.org>
Subject: Re: [PATCH v3 RESEND] mm: shmem: implement POSIX_FADV_[WILL|DONT]NEED
for shmem
On Wed, 12 Jan 2022 at 08:22, Charan Teja Kalla
<quic_charante@...cinc.com> wrote:
>
> Hello Mark,
>
> On 1/10/2022 3:51 PM, Charan Teja Kalla wrote:
> >>> +static int shmem_fadvise_willneed(struct address_space *mapping,
> >>> + pgoff_t start, pgoff_t long end)
> >>> +{
> >>> + XA_STATE(xas, &mapping->i_pages, start);
> >>> + struct page *page;
> >>> +
> >>> + rcu_read_lock();
> >>> + xas_for_each(&xas, page, end) {
> >>> + if (!xa_is_value(page))
> >>> + continue;
> >>> + xas_pause(&xas);
> >>> + rcu_read_unlock();
> >>> +
> >>> + page = shmem_read_mapping_page(mapping, xas.xa_index);
> >>> + if (!IS_ERR(page))
> >>> + put_page(page);
> >>> +
> >>> + rcu_read_lock();
> >>> + if (need_resched()) {
> >>> + xas_pause(&xas);
> >>> + cond_resched_rcu();
> >>> + }
> >>> + }
> >>> + rcu_read_unlock();
> >>> +
> >>> + return 0;
> >> I have a doubt on referencing xa_index after calling xas_pause().
> >> xas_pause() walks xa_index forward, so will not be the value expected
> >> for the current page.
> > Agree here. I should have the better test case to verify my changes.
> >
> >> Also, not necessary to re-call xas_pause() before cond_resched (it is
> >> a no-op).
> > In the event when CONFIG_DEBUG_ATOMIC_SLEEP is enabled users may still
> > need to call the xas_pause(), as we are dropping the rcu lock. NO?
> >
> > static inline void cond_resched_rcu(void)
> > {
> > #if defined(CONFIG_DEBUG_ATOMIC_SLEEP) || !defined(CONFIG_PREEMPT_RCU)
> > rcu_read_unlock();
> > cond_resched();
> > rcu_read_lock();
> > #endif
> > }
> >
> >> Would be better to check need_resched() before
> >> rcu_read_lock().
> > Okay, I can directly use cond_resched() if used before rcu_read_lock().
> >
> >> As this loop may call xas_pause() for most iterations, should consider
> >> using xa_for_each() instead (I *think* - still getting up to speed
> >> with XArray).
> > Even the xarray documentation says that: If most entries found during a
> > walk require you to call xas_pause(), the xa_for_each() iterator may be
> > more appropriate.
> >
> > Since every value entry found in the xarray requires me to do the
> > xas_pause(), I do agree that xa_for_each() is the appropriate call here.
> > Will switch to this in the next spin. Waiting for further review
> > comments on this patch.
>
> I also found the below documentation:
> xa_for_each() will spin if it hits a retry entry; if you intend to see
> retry entries, you should use the xas_for_each() iterator instead.
>
> Since retry entries are expected, I should be using the xas_for_each()
> with the corrections you had pointed out. Isn't it?
>
Ah, you've hit a barrier on my Xarray knowledge.
The current shmem code simply does a 'continue' on xas_retry(). Is
this different from Xarray looping internally for xas_retry()? I
assume not, but cannot give an definite answer (sorry).
Cheers,
Mark
Powered by blists - more mailing lists