Message-ID: <20190125070016.j2lmcz7aidcbhznp@lt-gp.iram.es>
Date:   Fri, 25 Jan 2019 08:00:16 +0100
From:   Gabriel Paubert <paubert@...m.es>
To:     Christophe Leroy <christophe.leroy@....fr>
Cc:     Michael Ellerman <mpe@...erman.id.au>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Paul Mackerras <paulus@...ba.org>,
        Nicholas Piggin <npiggin@...il.com>,
        linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
        Mike Rapoport <rppt@...ux.ibm.com>
Subject: Re: [PATCH v13 00/10] powerpc: Switch to CONFIG_THREAD_INFO_IN_TASK

On Thu, Jan 24, 2019 at 04:58:41PM +0100, Christophe Leroy wrote:
> 
> 
> > On 24/01/2019 at 16:01, Christophe Leroy wrote:
> > 
> > 
> > > On 24/01/2019 at 10:43, Christophe Leroy wrote:
> > > 
> > > 
> > > On 01/24/2019 01:06 AM, Michael Ellerman wrote:
> > > > Christophe Leroy <christophe.leroy@....fr> writes:
> > > > > On 12/01/2019 at 10:55, Christophe Leroy wrote:
> > > > > > The purpose of this series is to activate CONFIG_THREAD_INFO_IN_TASK,
> > > > > > which moves the thread_info into task_struct.
> > > > > > 
> > > > > > Moving thread_info into task_struct has the following advantages:
> > > > > > - It protects thread_info from corruption in the case of stack
> > > > > >   overflows.
> > > > > > - Its address is harder to determine if stack addresses are leaked,
> > > > > >   making a number of attacks more difficult.
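
For reference, a simplified sketch of the layout change behind
CONFIG_THREAD_INFO_IN_TASK (not the exact kernel definitions; the real
ones live in include/linux/sched.h and include/linux/thread_info.h):

/* Before: thread_info lives at the base of the kernel stack, so a
 * stack overflow can corrupt it, and any leaked stack address gives
 * away its location. It is typically found by masking the stack
 * pointer down to the stack base: */
#define current_thread_info() \
	((struct thread_info *)(current_stack_pointer & ~(THREAD_SIZE - 1)))

/* After: thread_info is embedded as the first member of task_struct,
 * off the kernel stack entirely: */
struct task_struct {
	struct thread_info	thread_info;	/* must remain first */
	/* ... */
};

#define current_thread_info() ((struct thread_info *)current)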
> > > > > 
> > > > > I ran the null_syscall and context_switch benchmark selftests, and
> > > > > the result is surprising: there is a slight degradation in
> > > > > context_switch and a significant one in null_syscall:
> > > > > 
> > > > > Without the series:
> > > > > 
> > > > > ~# chrt -f 98 ./context_switch --no-altivec --no-vector --no-fp
> > > > > 55542
> > > > > 55562
> > > > > 55564
> > > > > 55562
> > > > > 55568
> > > > > ...
> > > > > 
> > > > > ~# ./null_syscall
> > > > >      2546.71 ns     336.17 cycles
> > > > > 
> > > > > 
> > > > > With the series:
> > > > > 
> > > > > ~# chrt -f 98 ./context_switch --no-altivec --no-vector --no-fp
> > > > > 55138
> > > > > 55142
> > > > > 55152
> > > > > 55144
> > > > > 55142
> > > > > 
> > > > > ~# ./null_syscall
> > > > >      3479.54 ns     459.30 cycles
> > > > > 
> > > > > So 0.8% fewer context switches per second and 37% more time per
> > > > > syscall?
> > > > > 
> > > > > Any idea ?
> > > > 
> > > > What platform is that on?
> > > 
> > > It is on the 8xx
> 
> On the 83xx, I have a slight improvement:
> 
> Without the series:
> 
> root@...ippro:~# ./null_syscall
>     921.44 ns     307.15 cycles
> 
> With the series:
> 
> root@...ippro:~# ./null_syscall
>     918.78 ns     306.26 cycles
> 

The 8xx has very low cache associativity, something like 2-way, right?

In that case it may be the access patterns that change the number of
cache line transfers when you move things around.

Try moving things around in main(): allocate, say, a variable of ~1 kB
on the stack in the function that performs the null syscalls, and use
the variable both before and after the loop to defeat clever compiler
optimizations.
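
For instance, something like this minimal sketch (getppid() stands in
for whatever syscall the selftest actually issues, and the iteration
count is arbitrary):

#include <string.h>
#include <unistd.h>

static void null_syscall_loop(unsigned long iterations)
{
	/* ~1 kB of padding to shift this frame (and everything called
	 * from it) onto different cache lines/sets */
	volatile char pad[1024];

	memset((void *)pad, 1, sizeof(pad));	/* touch before the loop */

	while (iterations--)
		getppid();			/* a "null" syscall */

	/* touch again after the loop so the compiler cannot drop pad */
	if (pad[0] != pad[sizeof(pad) - 1])
		_exit(1);
}

int main(void)
{
	null_syscall_loop(10000000UL);
	return 0;
}

If the numbers move noticeably when pad changes size or disappears,
the difference is a stack/cache layout effect rather than an intrinsic
cost of the patches.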

	Gabriel


> Christophe
> 
> > > 
> > > > 
> > > > On 64-bit we have to turn one mtmsrd into two, and that's obviously a
> > > > slowdown. But I don't see that you've done anything similar in the
> > > > 32-bit code.
> > > > 
> > > > I assume it's patch 8 that causes the slow down?
> > > 
> > > I have not dug into it yet, but why patch 8?
> > > 
> > 
> > The increase in null_syscall duration happens with patch 5, when we
> > activate CONFIG_THREAD_INFO_IN_TASK.
> > 
