Message-ID: <1405357534.2970.701.camel@schen9-DESK>
Date: Mon, 14 Jul 2014 10:05:34 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Herbert Xu <herbert@...dor.apana.org.au>,
"H. Peter Anvin" <hpa@...or.com>,
"David S.Miller" <davem@...emloft.net>,
Ingo Molnar <mingo@...nel.org>,
Chandramouli Narayanan <mouli@...ux.intel.com>,
Vinodh Gopal <vinodh.gopal@...el.com>,
James Guilford <james.guilford@...el.com>,
Wajdi Feghali <wajdi.k.feghali@...el.com>,
Jussi Kivilinna <jussi.kivilinna@....fi>,
linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 6/7] sched: add function nr_running_cpu to expose
number of tasks running on cpu

On Mon, 2014-07-14 at 18:14 +0200, Peter Zijlstra wrote:
> On Mon, Jul 14, 2014 at 09:10:14AM -0700, Tim Chen wrote:
> > On Mon, 2014-07-14 at 12:16 +0200, Peter Zijlstra wrote:
> > > On Fri, Jul 11, 2014 at 01:33:04PM -0700, Tim Chen wrote:
> > > > This function will help a thread decide if it wants to do work
> > > > that can be delayed, to accumulate more tasks for more efficient
> > > > batch processing later.
> > > >
> > > > However, if no other tasks are running on the cpu, it can take
> > > > advantage of the available cpu cycles and process the tasks
> > > > immediately to minimize delay; otherwise it will yield.
> > >
> > > Ugh.. and ignore topology and everything else.
> > >
> > > Yet another scheduler on top of the scheduler.
> > >
> > > We have the padata muck, also only ever used by crypto.
> > > We have the workqueue nonsense, used all over the place
> > > And we have btrfs doing their own padata like muck.
> > > And I'm sure there's at least one more out there, just because.
> > >
> > > Why do we want yet another thing?
> > >
> > > I'm inclined to go NAK and get people to reduce the amount of async
> > > queueing and processing crap.
> >
> > The multi-buffer class of crypto algorithms is by nature
> > asynchronous. The algorithm gathers several crypto jobs and
> > puts the buffer from each job in a data lane of the SIMD register.
> > This allows for parallel processing and increases throughput.
> > The gathering of the crypto jobs is an async process, and
> > queuing is necessary for this class of algorithm.
>
> How is that related to me saying we've got too much of this crap
> already?

I was trying to explain why the algorithm is implemented this way:
batching is inherent to how it works.
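
To make the batching concrete, the flow is roughly like this (an
illustrative sketch only; mb_mgr, mb_job, LANES and the function names
are made up here, not the actual sha1-mb structures):

#include <linux/types.h>

#define LANES 8				/* e.g. 8 SHA-1 lanes with AVX2 */

struct mb_job {
	const u8 *buf;			/* data still to be hashed */
	size_t len;
	u32 digest[5];			/* per-job SHA-1 state */
};

struct mb_mgr {
	struct mb_job *lane[LANES];	/* one in-flight job per lane */
	int nr_jobs;
};

/* Stand-in for the SIMD kernel that hashes one block per lane. */
static void mb_process_lanes(struct mb_mgr *mgr)
{
	/* ... vectorized SHA-1 rounds across all filled lanes ... */
	mgr->nr_jobs = 0;
}

/*
 * Submitting a job does not hash it right away; we keep batching
 * until every lane has work, then run all lanes in parallel.
 */
static void mb_submit(struct mb_mgr *mgr, struct mb_job *job)
{
	mgr->lane[mgr->nr_jobs++] = job;
	if (mgr->nr_jobs == LANES)
		mb_process_lanes(mgr);
}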
There is a whole class of async algorithms that can provide
substantial speedup by doing batch processing, and they use
workqueues. The multi-buffer sha1 version has a 2.2x speedup over
the existing AVX2 version, and can speed up even more when AVX3
comes around. A workqueue is a natural way to implement this.
I don't think a throughput speedup of 2.2x is "crap".

We are not inventing anything new; we are only asking for a very
simple helper function that tells us whether something else is
running on our cpu, so we can make a better decision about whether
to flush the batched jobs immediately.
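
The intended use looks roughly like this, continuing the sketch
above (again only a sketch: the exact nr_running_cpu() signature is
assumed, and FLUSH_DELAY plus the flush_dwork member are invented
for illustration):

#include <linux/workqueue.h>
#include <linux/jiffies.h>
#include <linux/smp.h>
#include <linux/sched.h>

#define FLUSH_DELAY	msecs_to_jiffies(1)	/* arbitrary for the sketch */

/*
 * Assume struct mb_mgr from the sketch above also carries a
 * struct delayed_work flush_dwork, whose worker calls
 * mb_process_lanes() to flush whatever lanes are filled.
 */
static void mb_maybe_flush(struct mb_mgr *mgr)
{
	if (!mgr->nr_jobs)
		return;

	if (nr_running_cpu(smp_processor_id()) > 1) {
		/*
		 * Other tasks want this cpu: defer, and let more jobs
		 * accumulate so the lanes fill up for a full batch.
		 */
		schedule_delayed_work(&mgr->flush_dwork, FLUSH_DELAY);
		return;
	}

	/* Nothing else is runnable here: flush the partial batch now. */
	mb_process_lanes(mgr);
}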
Also, the asynchronous crypto interface is already used substantially
in crypto and has a well-established infrastructure.

Thanks.
Tim