Date:   Tue, 13 Nov 2018 11:40:54 +0100
From:   Jiri Olsa <jolsa@...hat.com>
To:     David Miller <davem@...emloft.net>
Cc:     acme@...nel.org, linux-kernel@...r.kernel.org, namhyung@...nel.org,
        jolsa@...nel.org
Subject: Re: [PATCH RFC] hist lookups

On Sun, Nov 11, 2018 at 03:32:59PM -0800, David Miller wrote:
> From: Jiri Olsa <jolsa@...hat.com>
> Date: Mon, 12 Nov 2018 00:26:27 +0100
> 
> > On Sun, Nov 11, 2018 at 03:08:01PM -0800, David Miller wrote:
> >> From: Jiri Olsa <jolsa@...hat.com>
> >> Date: Sun, 11 Nov 2018 20:41:32 +0100
> >> 
> >> > On Thu, Nov 08, 2018 at 05:07:21PM -0800, David Miller wrote:
> >> >> From: Jiri Olsa <jolsa@...hat.com>
> >> >> Date: Thu, 8 Nov 2018 08:13:03 +0100
> >> >> 
> >> >> > we could separate fork/mmaps into a separate dummy event map, or just
> >> >> > parse them out in the read thread and create a special queue for them
> >> >> > and drop just samples in case we are behind
> >> >> 
> >> >> What you say at the end here is basically what I am proposing.
> >> >> 
> >> >> Perf dequeues events from the mmap ring as fast as possible.
> >> >> 
> >> >> Perf has two internal queues, high priority and low priority.
> >> >> 
> >> >> High priority events are never dropped.
> >> >> 
> >> >> Low priority events are dropped on overload, oldest first.
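
(for reference, a rough sketch of the scheme above -- this is
hypothetical, not the actual perf code, all the names below are
made up for illustration:)

#include <stdlib.h>
#include <string.h>

struct queued_event {
        struct queued_event *next;
        size_t size;
        char data[];
};

struct event_queue {
        struct queued_event *head, *tail;
        unsigned int nr;
        unsigned int max;               /* 0 == unbounded (high prio) */
        unsigned long long dropped;
};

static void queue_event(struct event_queue *q, const void *data, size_t size)
{
        struct queued_event *ev = malloc(sizeof(*ev) + size);

        if (!ev)
                return;
        ev->next = NULL;
        ev->size = size;
        memcpy(ev->data, data, size);

        /* on overload drop the oldest queued event first */
        if (q->max && q->nr >= q->max) {
                struct queued_event *old = q->head;

                q->head = old->next;
                if (!q->head)
                        q->tail = NULL;
                free(old);
                q->nr--;
                q->dropped++;
        }

        if (q->tail)
                q->tail->next = ev;
        else
                q->head = ev;
        q->tail = ev;
        q->nr++;
}

the reader thread would push FORK/MMAP/COMM and friends into a
queue with max == 0 and samples into a bounded one, so only
samples ever get dropped
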
> >> > 
> >> > I added the dropping logic, it's simple so far..
> >> 
> >> So for me perf top gets into a state where the samples counter stops
> >> incrementing, but the event counter does keep moving (which is the
> >> histogram code decaying histogram entries from the display thread).
> >> 
> >> Which means the event processing has basically stopped.
> >> 
> >> The event threads are not stuck in a loop, because they respond to
> >> the "q" keypress and we can exit.
> > 
> > is the drop count showing something?
> 
> It does soon after starting up, then it drops to zero.

ok I see it on a ~200 cpu server now.. we actually spawn the
UI message box in the reader thread and wait for the user to
press a key with some timeout.. which is not good ;-)

I removed that and added it to the bottom notification line
instead, and now under heavy load I can see the line updates
together with events being lost/dropped
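
(roughly this idea -- a rough sketch with made-up names, not
the actual tools/perf TUI code:)

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t note_lock = PTHREAD_MUTEX_INITIALIZER;
static char note[128];

/* reader thread: just store the message, never block on the UI */
static void post_notification(const char *msg)
{
        pthread_mutex_lock(&note_lock);
        snprintf(note, sizeof(note), "%s", msg);
        pthread_mutex_unlock(&note_lock);
}

/* display thread: pick the message up on its normal refresh tick */
static void draw_bottom_line(void)
{
        char buf[sizeof(note)];

        pthread_mutex_lock(&note_lock);
        memcpy(buf, note, sizeof(buf));
        pthread_mutex_unlock(&note_lock);

        if (buf[0])
                printf("\r%s", buf);    /* stand-in for the real TUI call */
}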

I also changed the lost/drop counts format to:
  lost: current/total

where current is the count within the refresh period
and total is the overall count
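
(i.e. something like this -- sketch only, not the actual code:)

#include <stdio.h>

struct lost_stats {
        unsigned long long current;     /* within the refresh period */
        unsigned long long total;       /* never reset */
};

static void record_lost(struct lost_stats *s, unsigned long long n)
{
        s->current += n;
        s->total += n;
}

/* called once per display refresh, prints e.g. "lost: 12/340" */
static void show_lost(struct lost_stats *s)
{
        printf("lost: %llu/%llu\n", s->current, s->total);
        s->current = 0;
}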

I pushed/rebased what I have to the perf/fixes branch again

please note I had to change our compile fixes, because they
wouldn't compile on x86, but I can't verify on sparc, so you
might see some compile failures again

jirka
