Message-ID: <20191003181045.7fb1a5b3@gandalf.local.home>
Date:   Thu, 3 Oct 2019 18:10:45 -0400
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Daniel Bristot de Oliveira <bristot@...hat.com>,
        linux-kernel@...r.kernel.org, x86@...nel.org,
        Nadav Amit <nadav.amit@...il.com>,
        Andy Lutomirski <luto@...nel.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Song Liu <songliubraving@...com>,
        Masami Hiramatsu <mhiramat@...nel.org>
Subject: Re: [PATCH 3/3] x86/ftrace: Use text_poke()

On Wed, 2 Oct 2019 20:21:06 +0200
Peter Zijlstra <peterz@...radead.org> wrote:

> On Wed, Oct 02, 2019 at 06:35:26PM +0200, Daniel Bristot de Oliveira wrote:
> 
> > ftrace was already batching the updates, for instance causing 3 IPIs to enable
> > all functions. The text_poke() batching also works, but because of the limited
> > buffer [ see the reply to patch 2/3 ], it flushes the buffer during the
> > operation, causing more IPIs than the previous code. Using 5.4-rc1 in a VM,
> > when enabling the function tracer, I see 250+ intermediate text_poke_finish()
> > calls because of a full buffer...
> > 
> > Would this be the case of trying to use a dynamically allocated buffer?
> > 
> > Thoughts?  
> 
> Is it a problem? I tried growing the buffer (IIRC I made it 10 times
> bigger) and didn't see any performance improvements because of it.

I'm just worried if people are going to complain about the IPI burst.
Although, I just tried it out before applying this patch, and there's
still a bit of a burst. Not sure why. I did:

# cat /proc/interrupts > /tmp/before; echo function > /debug/tracing/current_tracer; cat /proc/interrupts > /tmp/after
# cat /proc/interrupts > /tmp/before1; echo nop > /debug/tracing/current_tracer; cat /proc/interrupts > /tmp/after1

Before this patch:

# diff /tmp/before /tmp/after
< CAL:       2342       2347       2116       2175       2446       2030       2416       2222   Function call interrupts
---
> CAL:       2462       2467       2236       2295       2446       2150       2536       2342   Function call interrupts

(Just showing the function call interrupts)

There appear to be 120 IPIs sent to all CPUs for enabling the function tracer.

# diff /tmp/before1 /tmp/after1
< CAL:       2462       2467       2236       2295       2446       2150       2536       2342   Function call interrupts
---
> CAL:       2577       2582       2351       2410       2446       2265       2651       2457   Function call interrupts

And 115 IPIs for disabling it.

After applying this patch:

# diff /tmp/before /tmp/after
< CAL:      66070      46620      59955      59236      68707      63397      61644      62742   Function call interrupts
---
> CAL:      66727      47277      59955      59893      69364      64054      62301      63399   Function call interrupts

# diff /tmp/before1 /tmp/after1
< CAL:      66727      47277      59955      59893      69364      64054      62301      63399   Function call interrupts
---
> CAL:      67358      47938      59985      60554      70025      64715      62962      64060   Function call interrupts


We get 657 IPIs for enabling the function tracer, and 661 for disabling it.
Funny how it's more on the disable than the enable with the patch, but the
other way around without it.

But still, we are going from ~120 to ~660 IPIs on every CPU. I'm not saying
it's a problem, but it is something we should note. Someone (those who don't
like kernel interference) may complain.

-- Steve
