Message-ID: <20151002135918.GN4043@linux.vnet.ibm.com>
Date: Fri, 2 Oct 2015 06:59:18 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Petr Mladek <pmladek@...e.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Oleg Nesterov <oleg@...hat.com>, Tejun Heo <tj@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Josh Triplett <josh@...htriplett.org>,
Thomas Gleixner <tglx@...utronix.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Jiri Kosina <jkosina@...e.cz>, Borislav Petkov <bp@...e.de>,
Michal Hocko <mhocko@...e.cz>, linux-mm@...ck.org,
Vlastimil Babka <vbabka@...e.cz>,
live-patching@...r.kernel.org, linux-api@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC v2 00/18] kthread: Use kthread worker API more widely
On Fri, Oct 02, 2015 at 02:00:14PM +0200, Petr Mladek wrote:
> On Thu 2015-10-01 10:00:53, Paul E. McKenney wrote:
> > On Thu, Oct 01, 2015 at 05:59:43PM +0200, Petr Mladek wrote:
> > > On Tue 2015-09-29 22:08:33, Paul E. McKenney wrote:
> > > > On Mon, Sep 21, 2015 at 03:03:41PM +0200, Petr Mladek wrote:
> > > > > My intention is to make it easier to manipulate kthreads. This RFC tries
> > > > > to use the kthread worker API. It is based on comments from the
> > > > > first attempt. See https://lkml.org/lkml/2015/7/28/648 and
> > > > > the list of changes below.
> > > > >
> > If the point of these patches was simply to test your API, and if you are
> > not looking to get them upstream, we are OK.
>
> I would like to eventually convert all kthreads to an API that
> better defines the kthread workflow. It need not be this one,
> though. I am still looking for a good API that will be acceptable.[*]
>
> One of the reasons that I played with the RCU, khugepaged, and ring
> buffer kthreads is that they are maintained by core developers. I hope
> that this will help to reach a better consensus.
>
>
> > If you want them upstream, you need to explain to me why the patches
> > help something.
>
> As I said, the RCU kthreads do not show a big win because they ignore
> the freezer, are not parked, never stop, and do not handle signals.
> But the change will make it possible to live patch them because they
> leave the main function at a safe place.
>
> The ring buffer benchmark is a much better example. It reduced the
> main function of the consumer kthread to two lines. It removed some
> error-prone code that modified the task state, called the scheduler,
> and handled kthread_should_stop(). IMHO, the workflow is better and
> safer now.
>
> I am going to prepare and send more examples where the change makes
> the workflow easier.
>
>
> > And also how the patches avoid breaking things.
>
> I do my best to keep the original functionality. If we decide to use
> the kthread worker API, my first attempt is much safer, see
> https://lkml.org/lkml/2015/7/28/650. It basically replaces the
> top-level for loop with one self-queuing work. A few more
> instructions are needed to go back into the loop, but they define a
> common safe point that is maintained in a single location for
> all kthread workers.
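>
> A rough sketch of that self-queuing pattern (the names here are only
> illustrative, not the actual code from the patches; consume_events()
> stands in for the real per-thread processing):
>
>     /* needs <linux/kthread.h> */
>     static struct kthread_worker consumer_worker;
>     static struct kthread_work consumer_work;
>
>     static void consumer_work_fn(struct kthread_work *work)
>     {
>             /* do one batch of work */
>             consume_events();
>
>             /* re-queue itself; the worker loop is the common safe point */
>             queue_kthread_work(&consumer_worker, &consumer_work);
>     }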
>
>
> [*] I have played with two APIs so far. They define a safe point
> for freezing, parking, stopping, signal handling, and live patching.
> Also, some non-trivial logic of the main loop is maintained
> in a single location.
>
> Here are some details:
>
> 1. iterant API
> --------------
>
> It allows defining three callbacks that are called in the following
> way:
>
>     init();
>     while (!stop)
>             func();
>     destroy();
>
> See also https://lkml.org/lkml/2015/6/5/556.
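>
> A hypothetical sketch of the iterant API (the struct and function
> names are made up for illustration and are not in mainline):
>
>     /* needs <linux/kthread.h> */
>     struct kthread_iterant {
>             void *data;
>             void (*init)(void *data);
>             void (*func)(void *data);
>             void (*destroy)(void *data);
>     };
>
>     /* generic main function owned by the kthread code */
>     static int kthread_iterant_fn(void *arg)
>     {
>             struct kthread_iterant *ki = arg;
>
>             if (ki->init)
>                     ki->init(ki->data);
>
>             /* freezing/parking/stopping would be checked here,
>              * in this one place, between func() iterations */
>             while (!kthread_should_stop())
>                     ki->func(ki->data);
>
>             if (ki->destroy)
>                     ki->destroy(ki->data);
>
>             return 0;
>     }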
>
> Advantages:
> + simple and clear workflow
> + simple use
> + simple conversion from the current kthreads API
>
> Disadvantages:
> + problematic handling of sleeping between events
> + completely new API
>
>
> 2. kthread worker API
> ---------------------
>
> It is similar to workqueues. The difference is that the works have
> a dedicated kthread, so we can better control its resources,
> e.g. priority, scheduling policy, ...
>
> Advantages:
> + already in use
> + design proven to work (workqueues)
> + natural way to wait for work in the common code (worker),
> using event-driven works and delayed works
> + easy to convert to/from workqueues API
>
> Disadvantages:
> + more code needed to define, initialize, and queue works
> + more complicated conversion from the current API
> if we want to do it in a clean (event-driven) way
> + might need more synchronization in some cases[**]
>
> Questionable:
> + event driven vs. procedural programming style
> + allows a more fine-grained split of the functionality into
> separate units (works) that can be queued
> as needed
>
>
> [**] wake_up() is a no-op for an empty waitqueue. But queuing a work
> into a non-existing worker might cause a crash. Well, this is
> usually already synchronized.
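>
> For reference, a dedicated worker using the API that is already in
> mainline can be set up roughly like this (my_work_fn and the thread
> name are illustrative; error handling is trimmed):
>
>     #include <linux/kthread.h>
>     #include <linux/sched.h>
>
>     static void my_work_fn(struct kthread_work *work)
>     {
>             /* one unit of work; it may re-queue itself to loop */
>     }
>
>     static int start_my_worker(void)
>     {
>             static struct kthread_worker worker;
>             static struct kthread_work work;
>             struct sched_param param = { .sched_priority = 1 };
>             struct task_struct *task;
>
>             init_kthread_worker(&worker);
>             task = kthread_run(kthread_worker_fn, &worker, "my_worker");
>             if (IS_ERR(task))
>                     return PTR_ERR(task);
>
>             /* the dedicated thread can get its own priority/policy */
>             sched_setscheduler(task, SCHED_FIFO, &param);
>
>             /* the worker must exist before works are queued, see [**] */
>             init_kthread_work(&work, my_work_fn);
>             queue_kthread_work(&worker, &work);
>             return 0;
>     }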
>
>
> Any thoughts or preferences are highly appreciated.
For the RCU grace-period kthreads, I am not seeing the advantage of
either API over the current approach.
Thanx, Paul