Message-Id: <1209093077.20936.24.camel@caritas-dev.intel.com>
Date: Fri, 25 Apr 2008 11:11:17 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Sebastian Siewior <linux-crypto@...breakpoint.cc>
Cc: Herbert Xu <herbert@...dor.apana.org.au>,
"Adam J. Richter" <adam@...drasil.com>, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, linux-crypto@...r.kernel.org,
mingo@...e.hu, tglx@...utronix.de
Subject: Re: [PATCH -mm crypto] AES: x86_64 asm implementation optimization
Hi, Sebastian,
Thank you very much for your help. From the result you sent, the biggest
performance degradation is between step 4 and step 5. In that step, one
more register is saved before and restored after encryption/decryption.
So I think the reason may be the read/write port throughput of the CPU.
I have changed the patches to group the reads or writes together instead
of interleaving them. Can you help me test these new patches? The new
patches are attached to this mail.
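
To make the grouping idea concrete, here is a minimal, hypothetical sketch
(illustrative register choices and function names only, not the actual code
from the attached patches): the first routine interleaves the callee-saved
push/pop with the working loads and stores, the second groups all saves at
entry and all restores at exit.

|	.text
|	.globl	interleaved_example
|interleaved_example:
|	# Interleaved style: each callee-saved register is pushed just
|	# before it is needed and popped right after its last use, so the
|	# stack accesses are mixed in with the working loads and stores.
|	push	%rbx
|	mov	(%rdi), %ebx		# work that needs %rbx
|	push	%r12
|	mov	4(%rdi), %r12d		# work that needs %r12
|	add	%r12d, %ebx
|	pop	%r12
|	mov	%ebx, %eax
|	pop	%rbx
|	ret
|
|	.globl	grouped_example
|grouped_example:
|	# Grouped style (what the new patches aim for): all saves at entry
|	# and all restores at exit, back to back, so the stack writes and
|	# reads each issue as one consecutive burst on the CPU ports.
|	push	%rbx
|	push	%r12
|	mov	(%rdi), %ebx
|	mov	4(%rdi), %r12d
|	add	%r12d, %ebx
|	mov	%ebx, %eax
|	pop	%r12
|	pop	%rbx
|	ret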
Best Regards,
Huang Ying
On Thu, 2008-04-24 at 00:32 +0200, Sebastian Siewior wrote:
> * Huang, Ying | 2008-04-17 11:36:43 [+0800]:
>
> >Hi, Sebastian,
> Hi Huang,
>
> >The files attached is the separated patches, from step1 to step 7. Thank
> >you very much for your help.
> I've run the following script:
>
> |#!/bin/bash
> |check_error()
> |{
> | r=$?
> | if [ ! $r -eq 0 ]
> | then
> | exit 1
> | fi
> |}
> |
> |modprobe tcrypt mode=200
> |modprobe tcrypt mode=200
> |dmesg -c > step-0.txt
> |
> |for ((i=1; i<=7; i++))
> |do
> | quilt push step${i}.patch
> | check_error
> |
> | make
> | check_error
> |
> | rmmod aes_x86_64
> | check_error
> |
> | insmod arch/x86/crypto/aes-x86_64.ko
> | check_error
> |
> | modprobe tcrypt mode=200
> | modprobe tcrypt mode=200
> | dmesg -c > step-${i}.txt
> |done
>
> and the result is attached.
>
> >Best Regards,
> >Huang Ying
>
> Sebastian
Attachment: "patches.tbz2" (application/x-bzip-compressed-tar, 1735 bytes)