Message-ID: <20070709223950.GA29653@elte.hu>
Date: Tue, 10 Jul 2007 00:39:50 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Willy Tarreau <w@....eu>
Cc: linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Galbraith <efault@....de>,
Arjan van de Ven <arjan@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Dmitry Adamushko <dmitry.adamushko@...il.com>,
Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
Subject: Re: [patch] CFS scheduler, -v19
* Willy Tarreau <w@....eu> wrote:
> > The biggest user-visible change in -v19 is reworked sleeper
> > fairness: it's similar in behavior to -v18 but works more
> > consistently across nice levels. Fork-happy workloads (like kernel
> > builds) should behave better as well. There are also a handful of
> > speedups: unsigned math, 32-bit speedups, O(1) task pickup,
> > debloating and other micro-optimizations.
>
> Interestingly, I also noticed the possibility of O(1) task pickup when
> playing with v18, but did not detect any noticeable improvement with
> it. Of course, it depends on the workload and I probably didn't
> perform the most relevant tests.
Yeah - it's a small tweak. CFS is O(31) in sleep/wakeup (i.e. bounded by
the worst-case rbtree depth for the maximum number of tasks), so it's
now all a big O(1) family again :)
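
(For reference: the O(1) pickup amounts to caching the leftmost tree
node, so picking the next task is a plain pointer dereference and only
enqueue/dequeue pay to keep the cache fresh. A minimal userspace sketch
of the idea - hypothetical names, with a plain BST standing in for the
kernel's rbtree:)

#include <stddef.h>

struct node {
	unsigned long long key;		/* e.g. a task's virtual runtime */
	struct node *left, *right, *parent;
};

struct runqueue {
	struct node *root;
	struct node *leftmost;		/* cached smallest-key node */
};

/* O(1): picking the next task is a single pointer dereference. */
static struct node *pick_next(struct runqueue *rq)
{
	return rq->leftmost;
}

/* Plain BST insert for brevity; CFS uses the kernel rbtree and also
 * rebalances here.  On dequeue of the leftmost node, the new minimum
 * is its in-order successor - that is where rb_next() comes in. */
static void enqueue(struct runqueue *rq, struct node *n)
{
	struct node **link = &rq->root, *parent = NULL;
	int leftmost = 1;

	while (*link) {
		parent = *link;
		if (n->key < parent->key) {
			link = &parent->left;
		} else {
			link = &parent->right;
			leftmost = 0;	/* went right at least once */
		}
	}
	n->left = n->right = NULL;
	n->parent = parent;
	*link = n;

	/* Update the cache only when the new node is the new minimum. */
	if (leftmost)
		rq->leftmost = n;
}
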
> V19 works very well here on 2.6.20.14. I could start 32k busy loops at
> nice +19 (I exhausted the 32k PID limit), and could still perform
> normal operations. I noticed that 'vmstat' scans all the pid entries
> under /proc, which takes ages to collect data before displaying a
> line. Obviously, the system sometimes shows some short latencies, but
> not much more than what you get from an SSH session through a remote
> DSL connection.
Great! I did not try to push it this far yet.
> Here's a vmstat 1 output:
>
> r b w swpd free buff cache si so bi bo in cs us sy id
> 32437 0 0 0 809724 488 6196 0 0 1 0 135 0 24 72 4
> 32436 0 0 0 811336 488 6196 0 0 0 0 717 0 78 22 0
crazy :-)
> Amusingly, I started mpg123 during this test and it skipped quite a
> bit. After setting all tasks to SCHED_IDLE, it did not skip anymore.
> All this seems to behave as one would expect.
Yeah. It behaves better than I expected, in fact - 32K tasks is pushing
things quite a bit. (We've got a 32K PID limit, for example.)
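
(Side note: moving a task to SCHED_IDLE is a single sched_setscheduler(2)
call. A minimal sketch - assuming the mainline value of SCHED_IDLE where
an older libc header does not define it:)

#include <sched.h>
#include <stdio.h>

#ifndef SCHED_IDLE
#define SCHED_IDLE 5	/* assumed: the value used by mainline */
#endif

int main(void)
{
	struct sched_param sp = { .sched_priority = 0 };	/* must be 0 */

	/* pid 0 means "the calling process". */
	if (sched_setscheduler(0, SCHED_IDLE, &sp) == -1) {
		perror("sched_setscheduler(SCHED_IDLE)");
		return 1;
	}

	/* From here on, this process only gets otherwise-idle CPU time. */
	for (;;)
		;	/* busy loop, as in the test above */
}
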
> I also started 30k processes distributed in 130 groups of 234 chained
> by pipes in which one byte is passed. I get an average of 8000 tasks
> in the run queue. The context switch rate is very low and sometimes
> even zero in this test; maybe some of them are starving, I really do
> not know:
>
> r b w swpd free buff cache si so bi bo in cs us sy id
> 7752 0 1 0 656892 244 4196 0 0 0 0 725 0 16 84 0
Hm, could you profile this? We could have some bottleneck somewhere
(likely not in the scheduler) with that many tasks being runnable. [
With CFS you can actually run a profiler under this workload ;-) ]
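
(For readers who want to reproduce a similar load: a rough sketch along
the lines of the description above. This is not Willy's actual test
program - the ring-of-pipes structure and the GROUPS/CHAIN constants are
simply inferred from the mail. Running it at full size needs raised
nproc/nofile ulimits:)

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define GROUPS 130	/* numbers taken from the description above */
#define CHAIN  234

int main(void)
{
	for (int g = 0; g < GROUPS; g++) {
		int fds[CHAIN][2];

		for (int i = 0; i < CHAIN; i++)
			if (pipe(fds[i]) < 0) { perror("pipe"); exit(1); }

		for (int i = 0; i < CHAIN; i++) {
			switch (fork()) {
			case -1:
				perror("fork");
				exit(1);
			case 0: {
				/* Child i: shuttle bytes from ring slot
				 * i to slot i+1, forever. */
				int rd = fds[i][0];
				int wr = fds[(i + 1) % CHAIN][1];
				char c;

				/* Close every inherited fd we don't use. */
				for (int j = 0; j < CHAIN; j++) {
					if (fds[j][0] != rd) close(fds[j][0]);
					if (fds[j][1] != wr) close(fds[j][1]);
				}
				while (read(rd, &c, 1) == 1)
					if (write(wr, &c, 1) != 1)
						break;
				_exit(0);
			}
			}
		}

		/* Inject one byte to set this group's ring in motion,
		 * then drop the parent's references to its pipes. */
		if (write(fds[0][1], "x", 1) != 1)
			perror("write");
		for (int i = 0; i < CHAIN; i++) {
			close(fds[i][0]);
			close(fds[i][1]);
		}
	}
	pause();	/* keep the session alive; kill to stop */
	return 0;
}
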
> In my tree, I have replaced the rbtree with the ebtree we talked
> about, but it did not bring any performance boost because, even though
> insert() and delete() are faster, the scheduler is already quite good
> at avoiding them as much as possible, mostly relying on rb_next()
> which has the same cost in both trees. All in all, the only variations
> I noticed were caused by cacheline alignment when I tried to reorder
> fields in the eb_node. So I will stop my experiments here since I
> don't see any more room for improvement.
Well, just a little bit of improvement would be nice to have too :)
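
(For context: rb_next() is the classic in-order successor walk - plain
pointer-chasing with no rebalancing - which fits Willy's observation
that its cost is comparable in both trees. A sketch of the operation,
reusing the node layout from the first snippet above:)

/* In-order successor: what rb_next() does for the scheduler.  No
 * rotations or recoloring are involved, just pointer-chasing. */
static struct node *tree_next(struct node *n)
{
	if (n->right) {
		/* Leftmost node of the right subtree. */
		n = n->right;
		while (n->left)
			n = n->left;
		return n;
	}
	/* Otherwise climb until we arrive from a left child. */
	while (n->parent && n == n->parent->right)
		n = n->parent;
	return n->parent;
}
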
Ingo