Message-Id: <1239979901.23397.4638.camel@laptop>
Date:	Fri, 17 Apr 2009 16:51:41 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Christoph Lameter <cl@...ux.com>
Cc:	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org
Subject: Re: Scheduler regression: Too frequent timer interrupts(?)

On Fri, 2009-04-17 at 10:29 -0400, Christoph Lameter wrote:

> > With something like that you could say, the jiffy tick went from 0.8+-.1
> > to 1.1+-.1 us or somesuch.
> 
> Well, yeah, we can look at this, but there seem to be regressions in a lot
> of other subsystems as well. Rescheduling is another thing that we tracked.
> It's interesting that the holdoffs varied a lot during the scheduler
> transition to CFS and then stayed high after that was complete.
> 
> > After that, you could possibly use oprofile or readprofile or
> > perf-counters to get an idea where the time is spent. I did a quick
> > profile on one of my machines, and about half the kernel time spent in a
> > while(1) loop comes from __do_softirq().
> >
> > Really, I should not have to tell you this...
> 
> I can get down there but do you really want me to start hacking on the
> scheduler again? This seems to be a regression from what we had working
> fine before.

I won't mind you sending patches. But really, the first thing to do is
figuring out what is taking time.

And a random 1us cutoff is, well, random.

If you want to reduce interrupts, that's fine, but not counting an
interrupt because it's below the magic 1us marker sounds a bit, well,
magic -- it might work for you, might not for me on another machine, and
might even be compiler dependent.

So five <1us interruptions are not accounted at all, whereas a single
>1us interruption is. I'd rather get rid of those five than try and shave
a bit off the one, if you get what I mean.

I'm pretty sure if we run the current kernel on a 5GHz machine all
interrupts are under 1us again :-), problem fixed? I don't think so.

Furthermore, yes, the scheduler is one of those jiffy tick users, but
there are more. We can do ntp/gtod things in there, there is process
accounting, there is some RCU machinery, timers, etc.

Like I said, I did a profile on current -tip and __do_softirq was about
half the time spent in the kernel. I'm not sure why that would be; maybe
we're doing tons of cache misses there for some reason, I dunno.



--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
