Message-ID: <CAMzpN2hSoebz26z+ithTCc-PRGpDEcLX2Q=63rgW3h9d9y+3vw@mail.gmail.com>
Date:	Sun, 14 Aug 2016 10:18:18 -0400
From:	Brian Gerst <brgerst@...il.com>
To:	Ingo Molnar <mingo@...nel.org>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	"the arch/x86 maintainers" <x86@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	"H. Peter Anvin" <hpa@...or.com>,
	Denys Vlasenko <dvlasenk@...hat.com>,
	Andy Lutomirski <luto@...capital.net>,
	Borislav Petkov <bp@...e.de>,
	Thomas Gleixner <tglx@...utronix.de>,
	Josh Poimboeuf <jpoimboe@...hat.com>
Subject: Re: [PATCH v3 0/7] x86: Rewrite switch_to()

On Sat, Aug 13, 2016 at 2:45 PM, Ingo Molnar <mingo@...nel.org> wrote:
>
> * Brian Gerst <brgerst@...il.com> wrote:
>
>> On Sat, Aug 13, 2016 at 1:16 PM, Linus Torvalds
>> <torvalds@...ux-foundation.org> wrote:
>> > On Sat, Aug 13, 2016 at 9:38 AM, Brian Gerst <brgerst@...il.com> wrote:
>> >> This patch set simplifies the switch_to() code, by moving the stack switch
>> >> code out of line into an asm stub before calling __switch_to().  This ends
>> >> up being more readable, and using the C calling convention instead of
>> >> clobbering all registers improves code generation.  It also allows newly
>> >> forked processes to construct a special stack frame to seamlessly flow
>> >> to ret_from_fork, instead of using a test and branch, or an unbalanced
>> >> call/ret.
>> >
>> > Do you have performance numbers? Is it noticeable/measurable?
>>
>> How do I measure it?  The perf documentation isn't easy to understand.
>
> Something like this:
>
>   taskset 1 perf stat -a -e '{instructions,cycles}' --repeat 10 perf bench sched pipe
>
> ... will give a very good idea about the general impact of these changes on
> context switch overhead.
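
For reference, the stack-switch stub described in the summary quoted above
boils down to something like the rough sketch below.  This is only an
illustration of the idea, not the code from the patch itself; the
TASK_threadsp() offset macro and the exact frame layout are stand-ins.

	/*
	 * __switch_to_asm(struct task_struct *prev, struct task_struct *next)
	 * C calling convention: prev arrives in %rdi, next in %rsi.
	 */
	__switch_to_asm:
		/* Save the callee-saved registers on the outgoing task's stack. */
		pushq	%rbp
		pushq	%rbx
		pushq	%r12
		pushq	%r13
		pushq	%r14
		pushq	%r15

		/* Switch stacks: stash the old %rsp, load the new one. */
		movq	%rsp, TASK_threadsp(%rdi)	/* prev->thread.sp = %rsp */
		movq	TASK_threadsp(%rsi), %rsp	/* %rsp = next->thread.sp */

		/* Restore the callee-saved registers from the incoming task's stack. */
		popq	%r15
		popq	%r14
		popq	%r13
		popq	%r12
		popq	%rbx
		popq	%rbp

		/* Tail-call __switch_to(prev, next); its ret resumes the new task. */
		jmp	__switch_to

For a newly forked task, copy_thread() can pre-build exactly this frame on
the child's stack with ret_from_fork as the saved return address, so the
child flows straight into ret_from_fork through the same pop/jmp path, with
no test and branch.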

Before:
 Performance counter stats for 'system wide' (10 runs):

     12,010,932,128      instructions              #    1.03  insn per cycle           ( +-  0.31% )
     11,691,797,513      cycles                                                        ( +-  0.76% )

        3.487329979 seconds time elapsed                                               ( +-  0.78% )

After:
 Performance counter stats for 'system wide' (10 runs):

     12,097,706,506      instructions              #    1.04  insn per cycle           ( +-  0.14% )
     11,612,167,742      cycles                                                        ( +-  0.81% )

        3.451278789 seconds time elapsed                                               ( +-  0.82% )

The numbers are roughly the same with or without this patch series.
There is noticeable run-to-run variation, so I'm not sure how good a
benchmark this is.
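
To put rough numbers on it: cycles dropped from 11,691,797,513 to
11,612,167,742, about 79.6 million or ~0.7%, and elapsed time dropped
about 1.0% (3.487s -> 3.451s), while perf reports +- 0.76-0.82%
run-to-run variation on both, so the difference is about the same size
as the noise.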

--
Brian Gerst
