Date:	Fri, 17 Apr 2009 20:58:39 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Christoph Lameter <cl@...ux.com>
Cc:	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org
Subject: Re: Scheduler regression: Too frequent timer interrupts(?)

On Fri, 2009-04-17 at 14:20 -0400, Christoph Lameter wrote:
> On Fri, 17 Apr 2009, Peter Zijlstra wrote:
> 
> > Something like this is nice to compare between kernels. Chris'
> > suggestion of timing a simple fixed loop:
> >
> > $ time (let i=1000000; while [ $i -gt 0 ]; do let i--; done)
> >
> > real    0m14.389s
> > user    0m13.787s
> > sys     0m0.498s
> >
> > Is also useful, since it gives an absolute measure of time available to
> > user-space.
> >
> > Although I suspect a simple C while(i--); might be better due to less
> > code involved.
> 
> The absolute time available to user space is not that important. It is
> important that the processor stays available during latency critical
> operations and is not taken away by the OS. The intervals that the OS
> takes the processor away determine the minimum interval that the
> application has to react to events (e.g. RDMA transfers via Infiniband,
> or operations on requests coming in via shared memory). These operations
> often must occur in parallel on multiple cores. Processing is delayed if
> any of the cores encounters a delay due to OS noise.

So you have hard deadlines in the order of us? Are these SCHED_FIFO
tasks or SCHED_OTHER?

> The latencytest code simulates a busy processor (no system calls, all
> memory is prefaulted). For some reason Linux is increasingly taking time
> away from such processes (that intentionally run uncontended on a
> dedicated processor). This causes regressions so that current upstream is
> not usable for these applications.
> 
> It would be best for these applications if the processor would be left
> undisturbed. There is likely not much that the OS needs to do on a busy
> processor if there are no competing threads and if there is no I/O taking
> place.

Agreed -- that would be nice.

I can't really match the pattern to anything. The one thing that worried
me was sched_clock_tick(), which does a GTOD to sync the clocks between
cpus.

Your Xeon is a core2 class machine and should have a relatively stable
TSC; however, it's also dual socket, which I think defeats that
stability.

What clocksource do you have?

cat /sys/devices/system/clocksource/clocksource0/current_clocksource

Thing is, that doesn't really match the observation that .23 is expensive and .25 isn't.

Also, looking over the rest of the scheduler tick code, I can't really
see what would be so expensive.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
