Message-ID: <20151001154300.GD9603@pathway.suse.cz>
Date: Thu, 1 Oct 2015 17:43:00 +0200
From: Petr Mladek <pmladek@...e.com>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Oleg Nesterov <oleg@...hat.com>, Tejun Heo <tj@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Josh Triplett <josh@...htriplett.org>,
Thomas Gleixner <tglx@...utronix.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Jiri Kosina <jkosina@...e.cz>, Borislav Petkov <bp@...e.de>,
Michal Hocko <mhocko@...e.cz>, linux-mm@...ck.org,
Vlastimil Babka <vbabka@...e.cz>,
live-patching@...r.kernel.org, linux-api@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC v2 17/18] rcu: Convert RCU gp kthreads into kthread worker API
On Mon 2015-09-28 10:14:37, Paul E. McKenney wrote:
> On Mon, Sep 21, 2015 at 03:03:58PM +0200, Petr Mladek wrote:
> > Kthreads are currently implemented as an infinite loop. Each
> > has its own variant of checks for termination, freezing, and
> > awakening. In many cases it is unclear which state the kthread
> > is in, and sometimes the checks are done the wrong way.
> >
> > The plan is to convert kthreads to the kthread_worker or workqueue
> > API. It allows splitting the functionality into separate operations
> > and gives the code a better structure. It also defines a clean state
> > where no locks are held, IRQs are not blocked, and the kthread might
> > sleep or even be safely migrated.
> >
> > The kthread worker API is useful when we want a single dedicated
> > kthread for the work. It helps to make sure that the kthread is
> > available when needed. It also allows better control, e.g.
> > defining a scheduling priority.
> >
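For reference, a minimal sketch of this dedicated-worker pattern, using
the kthread worker helpers from <linux/kthread.h> as they were named at
the time (init_kthread_worker(), queue_kthread_work(); later kernels use
a kthread_ prefix). The worker name, work function, and priority value
are illustrative only and not taken from this patch:

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>

static struct kthread_worker my_worker;
static struct kthread_work my_work;

static void my_work_fn(struct kthread_work *work)
{
	/* One self-contained operation; no wait_event() loop needed. */
}

static int __init my_worker_setup(void)
{
	struct sched_param param = { .sched_priority = 1 };
	struct task_struct *task;

	init_kthread_worker(&my_worker);
	task = kthread_run(kthread_worker_fn, &my_worker, "my_worker");
	if (IS_ERR(task))
		return PTR_ERR(task);

	/* A dedicated kthread makes it easy to set a scheduling priority. */
	sched_setscheduler_nocheck(task, SCHED_FIFO, &param);

	init_kthread_work(&my_work, my_work_fn);
	queue_kthread_work(&my_worker, &my_work);
	return 0;
}
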
> > This patch converts the RCU gp kthreads to the kthread worker API.
> > They modify the scheduling and have their own logic for binding the
> > kthread. They provide functionality that is critical for the system
> > to work and thus deserve a dedicated kthread.
> >
> > This patch tries to split the start of the grace period and the
> > quiescent state handling into separate works. The motivation is to
> > avoid wait_event() calls inside a work. Instead, the works are queued
> > when appropriate, which is more typical for this API.
> >
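To illustrate that restructuring, a sketch with purely hypothetical
names (this is not the actual RCU code): the former wake-up sites queue
a work instead of waking a kthread that loops over wait_event(). The
init_kthread_work() calls are assumed to happen where gp_worker is
created and are omitted here:

#include <linux/kthread.h>

/* Hypothetical state, shared with the code that creates gp_worker. */
static struct kthread_worker gp_worker;
static struct kthread_work gp_start_work;
static struct kthread_work qs_work;

static void gp_start_work_fn(struct kthread_work *work)
{
	/* Start the new grace period; runs with no locks held. */
}

static void qs_work_fn(struct kthread_work *work)
{
	/* Handle the reported quiescent states. */
}

/* Called where the old code did wake_up() for the gp kthread. */
static void rcu_poke_gp_start(void)
{
	queue_kthread_work(&gp_worker, &gp_start_work);
}

static void rcu_poke_qs(void)
{
	queue_kthread_work(&gp_worker, &qs_work);
}
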
> > On one hand, it should reduce spurious wakeups where the condition
> > in the wait_event() failed and the kthread went back to sleep.
> >
> > On the other hand, there is a small race window where the other
> > work might get queued. We could detect and fix this situation
> > at the beginning of the work, but it would be a bit ugly.
> >
> > The patch renames the kthread_wake() functions to kthread_worker_poke(),
> > which sounds more appropriate.
> >
> > Otherwise, the logic should stay the same. I did a lot of torture
> > testing and did not see any problem with the current patch. But of
> > course, it would deserve much more testing and review before applying.
>
> Suppose I later need to add helper kthreads to parallelize grace-period
> initialization. How would I implement that in a freeze-friendly way?
I have been convinced that there are only a few kthreads that really
need freezing. See the discussion around my first attempt at
https://lkml.org/lkml/2015/6/13/190

In fact, RCU is a good example of kthreads that should not get
frozen because they are needed even late in the suspend process.
If I understand it correctly, they will do their job until most devices
and all non-boot CPUs are disabled. Then the task doing the suspend
will get scheduled. It will write the image and stop the machine.
RCU should not be needed for this very last step.

In other words, RCU should not be much concerned about freezing.
If you are concerned about adding more kthreads, it should be
possible to just add more workers if we agree on using the
kthread worker API.
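For your example of helper kthreads for parallel grace-period
initialization, a rough sketch of "just add more workers" could look
like the following. All names and the helper count are made up, the
split of the initialization work is left open, and error handling is
omitted:

#include <linux/kthread.h>

#define NR_GP_HELPERS 4

static struct kthread_worker gp_init_workers[NR_GP_HELPERS];
static struct kthread_work gp_init_works[NR_GP_HELPERS];

static void gp_init_work_fn(struct kthread_work *work)
{
	/* Initialize the subset of nodes assigned to this helper. */
}

static void gp_init_helpers_create(void)
{
	int i;

	for (i = 0; i < NR_GP_HELPERS; i++) {
		init_kthread_worker(&gp_init_workers[i]);
		init_kthread_work(&gp_init_works[i], gp_init_work_fn);
		/* Error handling omitted in this sketch. */
		kthread_run(kthread_worker_fn, &gp_init_workers[i],
			    "gp_init_helper/%d", i);
	}
}

static void parallel_gp_init(void)
{
	int i;

	/* Fan the initialization out to the helper workers... */
	for (i = 0; i < NR_GP_HELPERS; i++)
		queue_kthread_work(&gp_init_workers[i], &gp_init_works[i]);

	/* ...and wait until all of them have finished. */
	for (i = 0; i < NR_GP_HELPERS; i++)
		flush_kthread_work(&gp_init_works[i]);
}
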
Best Regards,
Petr