Message-Id: <1188286097.6317.12.camel@Homer.simpson.net>
Date:	Tue, 28 Aug 2007 09:28:17 +0200
From:	Mike Galbraith <efault@....de>
To:	Al Boldi <a1426z@...ab.com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <peterz@...radead.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: CFS review

On Tue, 2007-08-28 at 08:23 +0300, Al Boldi wrote:
> Linus Torvalds wrote:
> > On Tue, 28 Aug 2007, Al Boldi wrote:
> > > No need for framebuffer.  All you need is X using the X.org vesa-driver.
> > > Then start gears like this:
> > >
> > >   # gears & gears & gears &
> > >
> > > Then lay them out side by side to see the periodic stalls lasting ~10 sec.
> >
> > I don't think this is a good test.
> >
> > Why?
> >
> > If you're not using direct rendering, what you have is the X server doing
> > all the rendering, which in turn means that what you are testing is quite
> > possibly not so much about the *kernel* scheduling, but about *X-server*
> > scheduling!
> >
> > I'm sure the kernel scheduler has an impact, but what's more likely to be
> > going on is that you're seeing effects that are indirect, and not
> > necessarily at all even "good".
> >
> > For example, if the X server is the scheduling point, it's entirely
> > possible that it ends up showing effects that are more due to the queueing
> > of the X command stream than due to the scheduler - and that those
> > stalls are simply due to *that*.
> >
> > One thing to try is to run the X connection in synchronous mode, which
> > minimizes queueing issues. I don't know if gears has a flag to turn on
> > synchronous X messaging, though. Many X programs take the "[+-]sync" flag
> > to turn on synchronous mode, iirc.
> 
> I like your analysis, but how do you explain that these stalls vanish when 
> __update_curr is disabled?
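
As an aside on the synchronous-mode suggestion quoted above: an Xlib client
can be forced synchronous from inside the program with XSynchronize().  A
minimal sketch (plain Xlib only; gears itself may or may not expose such a
switch):

	#include <stdio.h>
	#include <stdlib.h>
	#include <X11/Xlib.h>

	int main(void)
	{
		Display *dpy = XOpenDisplay(NULL);	/* connect to $DISPLAY */

		if (!dpy) {
			fprintf(stderr, "cannot open display\n");
			return EXIT_FAILURE;
		}

		/* Stop buffering requests: every Xlib call now waits for the
		 * server to process it, which takes most client-side request
		 * queueing out of the picture. */
		XSynchronize(dpy, True);

		/* ... create windows and issue rendering requests as usual ... */

		XCloseDisplay(dpy);
		return EXIT_SUCCESS;
	}

With the protocol forced synchronous, any stalls that remain are more likely
to come from how the X server schedules its clients (or from the kernel) than
from queued-up requests.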

When you disable __update_curr(), you're utterly destroying the
scheduler.  There may well be a scheduler connection, but disabling
__update_curr() doesn't tell you anything meaningful.  Basically, you're
letting all tasks run uninterrupted for just as long as they please
(which is why busy loops lock your box solid as a rock).  I'd suggest
gathering some sched_debug stats or something... shoot, _anything_ but
what you did :)
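
To make that concrete, here is a stripped-down sketch of the accounting role
__update_curr() plays -- illustrative only, with made-up names, not the code
from kernel/sched_fair.c: it charges the running entity for the CPU time it
just consumed, and CFS's preemption and runqueue-ordering decisions hang off
that charge.

	#include <stdint.h>

	typedef uint64_t u64;

	struct sched_entity_sketch {
		u64 exec_start;		/* when this entity was last updated */
		u64 sum_exec_runtime;	/* CPU time charged to it so far */
	};

	/* Charge 'curr' for the CPU time consumed since the last update.
	 * CFS additionally weights this delta by the task's nice level and
	 * uses it to reorder the runqueue; with the charging stubbed out,
	 * no task ever appears to have run long enough to be preempted. */
	static void update_curr_sketch(struct sched_entity_sketch *curr, u64 now)
	{
		u64 delta_exec = now - curr->exec_start;

		curr->sum_exec_runtime += delta_exec;
		curr->exec_start = now;
	}

The sched_debug stats mentioned above are presumably the per-runqueue and
per-task numbers exposed through /proc/sched_debug when CONFIG_SCHED_DEBUG is
enabled; those show whether tasks are actually being charged and rotated
fairly, without having to patch the scheduler.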

	-Mike

