Message-ID: <20201029202241.GA24399@pc636>
Date: Thu, 29 Oct 2020 21:22:41 +0100
From: Uladzislau Rezki <urezki@...il.com>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Michal Hocko <mhocko@...e.com>,
Thomas Gleixner <tglx@...utronix.de>,
"Theodore Y . Ts'o" <tytso@....edu>,
Joel Fernandes <joel@...lfernandes.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>,
urezki@...il.com
Subject: Re: [PATCH 16/16] rcu/tree: Use delayed work instead of hrtimer to
refill the cache
On Thu, Oct 29, 2020 at 09:13:42PM +0100, Uladzislau Rezki wrote:
> On Thu, Oct 29, 2020 at 12:47:24PM -0700, Paul E. McKenney wrote:
> > On Thu, Oct 29, 2020 at 05:50:19PM +0100, Uladzislau Rezki (Sony) wrote:
> > > CONFIG_PREEMPT_COUNT is now unconditionally enabled, thus a page
> > > can be obtained directly from the kvfree_rcu() path. To decide
> > > whether it is safe to enter the allocator, the preemptible() macro
> > > is used.
> > >
> > > It means that refilling the cache is not time critical. Switch to
> > > a delayed work, so the actual work is queued from the timer interrupt
> > > with a one-jiffy delay. Placing a task on the current CPU immediately
> > > can lead to an rq->lock double lock. That is why the delayed method
> > > is in place.
> > >
> > > Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
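
For reference, below is a minimal sketch of the scheme described in the quoted
commit message. The names krc_page_cache, krc_refill() and krc_get_page() are
made up for illustration only; this is not the actual kvfree_rcu() code in
kernel/rcu/tree.c, and locking is omitted for brevity:

#include <linux/gfp.h>
#include <linux/preempt.h>
#include <linux/workqueue.h>

static unsigned long krc_page_cache;	/* one-entry "cache" for brevity */

static void krc_refill(struct work_struct *work)
{
	/* Runs later in process context, where GFP_KERNEL is fine. */
	if (!krc_page_cache)
		krc_page_cache = __get_free_page(GFP_KERNEL | __GFP_NOWARN);
}
static DECLARE_DELAYED_WORK(krc_refill_work, krc_refill);

static unsigned long krc_get_page(void)
{
	unsigned long page;

	/*
	 * CONFIG_PREEMPT_COUNT is unconditionally enabled, so preemptible()
	 * reliably tells us whether entering the page allocator is safe.
	 */
	if (preemptible())
		return __get_free_page(GFP_KERNEL | __GFP_NOWARN);

	/*
	 * Not safe to allocate here: consume the pre-filled cache and ask
	 * for a refill. The one-jiffy delay means the work is queued from
	 * the timer interrupt rather than from this context, which avoids
	 * a possible rq->lock double lock.
	 */
	page = krc_page_cache;
	krc_page_cache = 0;
	schedule_delayed_work(&krc_refill_work, 1);
	return page;
}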
> >
> > Thank you, Uladzislau!
> >
> > I applied this on top of v5.10-rc1 and got the following from the
> > single-CPU builds:
> >
> > SYNC include/config/auto.conf.cmd
> > DESCEND objtool
> > CC kernel/bounds.s
> > CALL scripts/atomic/check-atomics.sh
> > UPD include/generated/bounds.h
> > CC arch/x86/kernel/asm-offsets.s
> > In file included from ./include/asm-generic/atomic-instrumented.h:20:0,
> > from ./include/linux/atomic.h:82,
> > from ./include/linux/crypto.h:15,
> > from arch/x86/kernel/asm-offsets.c:9:
> > ./include/linux/pagemap.h: In function ‘__page_cache_add_speculative’:
> > ./include/linux/build_bug.h:30:34: error: called object is not a function or function pointer
> > #define BUILD_BUG_ON_INVALID(e) ((void)(sizeof((__force long)(e))))
> > ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > ./include/linux/mmdebug.h:45:25: note: in expansion of macro ‘BUILD_BUG_ON_INVALID’
> > #define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
> > ^~~~~~~~~~~~~~~~~~~~
> > ./include/linux/pagemap.h:207:2: note: in expansion of macro ‘VM_BUG_ON’
> > VM_BUG_ON(preemptible())
> > ^~~~~~~~~
> > scripts/Makefile.build:117: recipe for target 'arch/x86/kernel/asm-offsets.s' failed
> > make[1]: *** [arch/x86/kernel/asm-offsets.s] Error 1
> > Makefile:1199: recipe for target 'prepare0' failed
> > make: *** [prepare0] Error 2
> >
> > I vaguely recall something like this showing up in the previous series
> > and that we did something or another to address it. Could you please
> > check against the old series at -rcu branch dev.2020.10.22a? (I verified
> > that the old series does run correctly in the single-CPU scenarios.)
> >
> I see the same build error. I will double check whether we have something
> similar in the previous series as well. It looks like the error is caused
> by the Thomas series.
>
> Will check!
>
OK. Found it:
urezki@...38:~/data/coding/linux.git$ git diff
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index cbfbe2bcca75..7dd3523093db 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -204,7 +204,7 @@ void release_pages(struct page **pages, int nr);
static inline int __page_cache_add_speculative(struct page *page, int count)
{
#ifdef CONFIG_TINY_RCU
- VM_BUG_ON(preemptible())
+ VM_BUG_ON(preemptible());
/*
* Preempt must be disabled here - we rely on rcu_read_lock doing
* this for us.
urezki@...38:~/data/coding/linux.git$
I guess we had an extra patch that fixed the kernel compilation for the !SMP
case. Will check dev.2020.10.22a.
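
For what it is worth, here is a reduced illustration of why the missing ';'
produces that particular diagnostic. In the failing build VM_BUG_ON() expands
to BUILD_BUG_ON_INVALID() (see mmdebug.h:45 in the log above); when the
statement that follows also expands to a parenthesized expression, the two
expansions end up adjacent and the compiler parses the second one as the
argument list of a "call" of the first, which has type void. The example()
function and its arguments below are made up, and __force is dropped for
simplicity:

#define BUILD_BUG_ON_INVALID(e)	((void)(sizeof((long)(e))))
#define VM_BUG_ON(cond)		BUILD_BUG_ON_INVALID(cond)
#define VM_BUG_ON_PAGE(cond, page) BUILD_BUG_ON_INVALID(cond)

void example(int a, int b)
{
	/*
	 * Without the ';' the two expansions become adjacent, so gcc
	 * reports "called object is not a function or function pointer"
	 * against the first BUILD_BUG_ON_INVALID() expansion.
	 */
	VM_BUG_ON(a)		/* missing ';' */
	VM_BUG_ON_PAGE(b, 0);
}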
--
Vlad Rezki