Message-ID: <CALCETrUc40n1LvytE-cOjRNUpgHXUnsG5y18ZBe-WYXN39vEnw@mail.gmail.com>
Date:	Wed, 17 Aug 2016 14:23:04 -0700
From:	Andy Lutomirski <luto@...capital.net>
To:	Ingo Molnar <mingo@...nel.org>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	Denys Vlasenko <dvlasenk@...hat.com>,
	Josh Poimboeuf <jpoimboe@...hat.com>,
	Borislav Petkov <bp@...e.de>,
	"the arch/x86 maintainers" <x86@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Brian Gerst <brgerst@...il.com>,
	"H. Peter Anvin" <hpa@...or.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v3 0/7] x86: Rewrite switch_to()

On Aug 15, 2016 8:10 AM, "Ingo Molnar" <mingo@...nel.org> wrote:
>
>
> * Brian Gerst <brgerst@...il.com> wrote:
>
> > > Something like this:
> > >
> > >   taskset 1 perf stat -a -e '{instructions,cycles}' --repeat 10 perf bench sched pipe
> > >
> > > ... will give a very good idea about the general impact of these changes on
> > > context switch overhead.
> >
> > Before:
> >  Performance counter stats for 'system wide' (10 runs):
> >
> >     12,010,932,128      instructions              #    1.03  insn per cycle                                              ( +-  0.31% )
> >     11,691,797,513      cycles                                                        ( +-  0.76% )
> >
> >        3.487329979 seconds time elapsed                                          ( +-  0.78% )
> >
> > After:
> >  Performance counter stats for 'system wide' (10 runs):
> >
> >     12,097,706,506      instructions              #    1.04  insn per cycle                                              ( +-  0.14% )
> >     11,612,167,742      cycles                                                        ( +-  0.81% )
> >
> >        3.451278789 seconds time elapsed                                          ( +-  0.82% )
> >
> > The numbers with or without this patch series are roughly the same.
> > There is noticeable variation in the numbers each time I run it, so
> > I'm not sure how good a benchmark this is.
>
> Weird, I get an order of magnitude lower noise:
>
>  triton:~/tip> taskset 1 perf stat -a -e '{instructions,cycles}' --repeat 10 perf bench sched pipe >/dev/null
>
>  Performance counter stats for 'system wide' (10 runs):
>
>     11,503,026,062      instructions              #    1.23  insn per cycle                                              ( +-  2.64% )
>      9,377,410,613      cycles                                                        ( +-  2.05% )
>
>        1.669425407 seconds time elapsed                                          ( +-  0.12% )
>
> But note that I also had '--sync' for perf stat and did a >/dev/null at the end to
> make sure no terminal output and subsequent Xorg activities interfere. Also, full
> screen terminal.
>
> Maybe try 'taskset 4' as well to put the workload on another CPU, if the first CPU
> is busier than the others?
>
> (Any Hyperthreading on your test system?)
>
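Ingo's suggestions above can be folded into one invocation. A hedged sketch (built as a string and only echoed here as a dry run; run `eval "$cmd"` to actually measure):

```shell
# Combines Ingo's tips: 'taskset 4' pins the workload to CPU 2 (a mask,
# not a CPU number — away from a possibly busier CPU 0), '--sync' has
# perf stat call sync() before measuring, and '>/dev/null' keeps
# terminal output (and subsequent Xorg activity) from perturbing the run.
cmd="taskset 4 perf stat --sync -a -e '{instructions,cycles}' --repeat 10 perf bench sched pipe >/dev/null"
echo "$cmd"
```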

I've never investigated for real, but I suspect that cgroups are a big
part of it.  If you do a regular perf recording, I think you'll find
that nearly all of the time is in the scheduler.
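A sketch of the recording step suggested above, checking whether scheduler symbols dominate the profile. Guarded because perf must be installed and perf_event access is often restricted (e.g. in containers), so the profiling step is an assumption about the environment, not guaranteed to run:

```shell
# Record the same pipe benchmark with call graphs, then print the top
# of the report; if the scheduler really dominates, functions like
# __schedule should appear near the top.
if command -v perf >/dev/null 2>&1 &&
   taskset 1 perf record -g -o pipe.data -- \
       perf bench sched pipe >/dev/null 2>&1; then
    perf report -i pipe.data --sort symbol --stdio 2>/dev/null | head -20
    result=recorded
else
    result=skipped    # perf missing, or perf_event access denied
fi
echo "$result"
```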
