Message-ID: <1405364908.2970.729.camel@schen9-DESK>
Date: Mon, 14 Jul 2014 12:08:28 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Herbert Xu <herbert@...dor.apana.org.au>,
"H. Peter Anvin" <hpa@...or.com>,
"David S.Miller" <davem@...emloft.net>,
Ingo Molnar <mingo@...nel.org>,
Chandramouli Narayanan <mouli@...ux.intel.com>,
Vinodh Gopal <vinodh.gopal@...el.com>,
James Guilford <james.guilford@...el.com>,
Wajdi Feghali <wajdi.k.feghali@...el.com>,
Jussi Kivilinna <jussi.kivilinna@....fi>,
linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 6/7] sched: add function nr_running_cpu to expose
number of tasks running on cpu
On Mon, 2014-07-14 at 20:17 +0200, Peter Zijlstra wrote:
> On Mon, Jul 14, 2014 at 10:05:34AM -0700, Tim Chen wrote:
> > I was trying to explain why the algorithm is implemented this way
> > because of its batching nature.
> >
> > There is a whole class of async algorithm that can provide
> > substantial speedup by doing batch processing and uses workqueue.
> > The multi-buffer sha1 version has 2.2x speedup over existing
> > AVX2 version, and can have even more speedup when AVX3
> > comes round. Workqueue is a natural way to implement
> > this. I don't think a throughput speedup of 2.2x is "crap".
> >
> > We are not inventing anything new, but ask for a
> > very simple helper function to know if there's something else
> > running on our cpu to help us make a better decision
> > of whether we should flush the batched jobs immediately.
> >
> > And also asynchronous crypto interface is already used substantially
> > in crypto and has a well established infrastructure.
>
> The crap I was talking about is that there's a metric ton of 'async'
> interfaces all different.
Async interfaces, when used appropriately, actually speed things up
substantially for crypto.  We actually have a case where eCryptfs,
by not using the async crypto interface, caused the cpu to stall
and slowed things down substantially with AES-NI.  And the async
interface with a workqueue speeds things up (30% to 35% on encryption
with an SSD).
http://marc.info/?l=ecryptfs-users&m=136520541407248
http://www.spinics.net/lists/ecryptfs/msg00228.html
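Roughly, this is what the existing async hash interface looks like from a
caller's point of view (a minimal sketch of my own, not code from this patch
set; error handling is trimmed): the caller submits a request with a
completion callback instead of burning the cpu waiting for the digest.

	/*
	 * Sketch only: submit a SHA1 digest through the async (ahash)
	 * interface and sleep until the callback fires.
	 */
	#include <crypto/hash.h>
	#include <linux/completion.h>
	#include <linux/err.h>
	#include <linux/scatterlist.h>

	static void sha1_req_done(struct crypto_async_request *req, int err)
	{
		if (err == -EINPROGRESS)
			return;		/* backlogged request just started */
		complete(req->data);	/* wake up the submitter */
	}

	static int sha1_digest_async(const void *buf, unsigned int len, u8 *out)
	{
		struct crypto_ahash *tfm;
		struct ahash_request *req;
		struct scatterlist sg;
		DECLARE_COMPLETION_ONSTACK(done);
		int ret;

		tfm = crypto_alloc_ahash("sha1", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		req = ahash_request_alloc(tfm, GFP_KERNEL);
		if (!req) {
			crypto_free_ahash(tfm);
			return -ENOMEM;
		}

		ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
					   sha1_req_done, &done);
		sg_init_one(&sg, buf, len);
		ahash_request_set_crypt(req, &sg, out, len);

		ret = crypto_ahash_digest(req);
		if (ret == -EINPROGRESS || ret == -EBUSY) {
			wait_for_completion(&done);
			ret = 0;
		}

		ahash_request_free(req);
		crypto_free_ahash(tfm);
		return ret;
	}

While the caller sleeps, the driver (or a multi-buffer implementation) is
free to batch this request with others before running the actual hash.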
>
> Your multi-buffer thing isn't generic either, it seems limited to sha1.
We actually have many other multi-buffer crypto algorithms already
published for encryption and other IPsec usages, so the multi-buffer
approach is not limited to just SHA1.
We hope to port those to the kernel crypto library eventually.
http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/fast-multi-buffer-ipsec-implementations-ia-processors-paper.pdf
> It does not reuse padata,
padata tries to speed things up by parallelizing jobs across *multiple*
cpus, whereas multi-buffer speeds things up by using multiple data
lanes of the SIMD registers on a *single* cpu.
These two approaches are complementary but not the same.
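As a rough illustration of the difference (everything below is made up
for illustration and not code from the series): multi-buffer collects
independent jobs into SIMD lanes on one cpu and kicks an N-wide pass
when the lanes fill up, instead of handing whole jobs to other cpus the
way padata does.

	/*
	 * Illustration only -- MB_LANES, struct mb_job and sha1_x8_avx2()
	 * are hypothetical names.  Several independent jobs share one
	 * SIMD pass on a single cpu.
	 */
	#include <linux/types.h>

	#define MB_LANES 8		/* e.g. 8 SHA1 states in AVX2 registers */

	struct mb_job {
		const u8	*data;
		unsigned int	len;
	};

	extern void sha1_x8_avx2(struct mb_job *lanes);	/* hypothetical 8-wide kernel */

	static struct mb_job lanes[MB_LANES];
	static int nlanes;

	/* Queue one job; run the wide pass only once every lane is occupied. */
	static void mb_submit(const u8 *data, unsigned int len)
	{
		lanes[nlanes].data = data;
		lanes[nlanes].len = len;
		if (++nlanes == MB_LANES) {
			sha1_x8_avx2(lanes);
			nlanes = 0;
		}
	}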
> it does not extend workqueues,
Why do I need to extend workqueues if the existing ones already
meet my needs?
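And this is roughly where the helper from the subject line comes in (a
sketch under my assumptions about the final form, reusing the hypothetical
names from the illustration above): the flush worker only forces out a
partially filled batch when nothing else is runnable on this cpu, since
otherwise more jobs are likely to arrive soon and fill the idle lanes.

	#include <linux/workqueue.h>
	#include <linux/sched.h>
	#include <linux/smp.h>

	/* Stand-in for "run the SIMD pass on whatever lanes are filled". */
	extern void sha1_mb_flush_partial(void);

	static void sha1_mb_flush_work(struct work_struct *work)
	{
		/*
		 * If this cpu has nothing else runnable, no new jobs will
		 * show up soon to fill the idle lanes, so flush right away;
		 * otherwise keep batching.  nr_running_cpu() is the helper
		 * this patch adds (exact signature assumed here).
		 */
		if (nr_running_cpu(raw_smp_processor_id()) <= 1)
			sha1_mb_flush_partial();
	}

	static DECLARE_WORK(sha1_mb_flush, sha1_mb_flush_work);
	/* e.g. queue_work(system_wq, &sha1_mb_flush) after each job is submitted */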
> it does not
> remove the btrfs nonsense,
Not much I can do about btrfs as I don't understand the issues there.
> it adds yet another thing.
Thanks.
Tim