Message-ID: <87wt0xhg7i.wl%takeuchi_satoru@jp.fujitsu.com>
Date:	Sat, 31 Mar 2007 19:15:13 +0900
From:	Satoru Takeuchi <takeuchi_satoru@...fujitsu.com>
To:	Mike Galbraith <efault@....de>
Cc:	Satoru Takeuchi <takeuchi_satoru@...fujitsu.com>,
	Linux Kernel <linux-kernel@...r.kernel.org>,
	Ingo Molnar <mingo@...e.hu>
Subject: Re: [BUG] scheduler: strange behavor with massive interactive processes

Hi Mike,

> I puttered around with your testcase a bit, and didn't see interactive
> tasks starving other interactive tasks so much as plain old interactive
> tasks starving expired tasks, which they're supposed to be able to do,

I inserted trace code into the kernel to observe all context switches,
and confirmed that on each CPU fewer than 10 processes holding the
maximum priority run continuously, while the others (at max - 1 prio)
can run only at the beginning of the program or when the runqueue is
expired (which happens only about once every 200 seconds in the
200 [procs/CPU] case, and even then they get only 1 tick of CPU time
before it is taken away).
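
For reference, the trace code I mention is roughly of the following
form (only a sketch of the idea, not the exact code I used; it assumes
being inlined into context_switch() in kernel/sched.c of 2.6.21-rc5,
and the name trace_ctxsw is made up).  A printk() per context switch
is of course heavy, so in practice the records would need buffering;
the sketch only shows what gets recorded:

  static inline void trace_ctxsw(struct task_struct *prev,
                                 struct task_struct *next)
  {
          /* prio is the dynamic priority; lower value = higher prio */
          printk(KERN_DEBUG "cpu%d: %d (prio %d) -> %d (prio %d)\n",
                 smp_processor_id(),
                 prev->pid, prev->prio,
                 next->pid, next->prio);
  }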

> Interactivity still seems to be fine with reasonable non-interactive
> loads despite ~reserving more bandwidth for non-interactive tasks.  Only
> lightly tested, YMMV, and of course the standard guarantee applies ;)

I've only looked at your patch briefly and can't make an accurate
comment on it yet. For the time being, however, I ran the same test as
in my initial mail.

Test environment
================

 - kernel: 2.6.21-rc5 with or without Mike's patch
 - others: same as my initial mail except that the nice 19 cases are
   omitted (a rough sketch of the test program follows)
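
For reference, the test is roughly of the following form (only a
sketch, not the original test program from my initial mail; NPROCS,
RUNTIME and the sleep/work lengths here are made-up values):

  /*
   * Sketch of the workload: NPROCS workers mimic "interactive"
   * behaviour by alternating a short sleep with a short CPU burst,
   * each counting how many loops it completes until the parent stops
   * the run.  Those loop counts are what the tables below summarise.
   */
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <signal.h>
  #include <sys/wait.h>

  #define NPROCS  200             /* processes per CPU (1-CPU case) */
  #define RUNTIME 200             /* seconds */

  static volatile sig_atomic_t stop;
  static void on_term(int sig) { (void)sig; stop = 1; }

  static void worker(void)
  {
          unsigned long loops = 0;
          volatile unsigned long i;

          signal(SIGTERM, on_term);
          while (!stop) {
                  usleep(1000);                   /* "think" briefly  */
                  for (i = 0; i < 100000; i++)    /* short CPU burst  */
                          ;
                  loops++;
          }
          printf("%d %lu\n", getpid(), loops);    /* one line/process */
          exit(0);
  }

  int main(void)
  {
          pid_t pids[NPROCS];
          int n;

          for (n = 0; n < NPROCS; n++) {
                  pids[n] = fork();
                  if (pids[n] == 0)
                          worker();
          }
          sleep(RUNTIME);
          for (n = 0; n < NPROCS; n++)
                  kill(pids[n], SIGTERM);
          for (n = 0; n < NPROCS; n++)
                  wait(NULL);
          return 0;
  }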

Result (without Mike's patch)
=============================

  +---------+-----------+------+------+------+--------+
  |   # of  |   # of    | avg  | max  | min  |  stdev |
  |   CPUs  | processes | (*1) | (*2) | (*3) |  (*4)  |
  +---------+-----------+------+------+------+--------+
  | 1(i386) |       200 |  162 | 8258 |    1 |   1113 |
  +---------+-----------+------+------+------+--------+
  |         |           |  378 | 9314 |    2 |   1421 |
  | 2(ia64) |       400 +------+------+------+--------+
  |         |           |  189 |12544 |    1 |   1443 |
  +---------+-----------+------+------+------+--------+

  *1) average number of loops among all processes
  *2) maximum number of loops among all processes
  *3) minimum number of loops among all processes
  *4) standard deviation
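
The four columns are computed over those per-process loop counts in
the usual way; a minimal post-processing sketch (again not the script
I actually used; it reads the "<pid> <loops>" lines printed above,
uses the population standard deviation, and is built with -lm):

  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
          double x, sum = 0, sumsq = 0, max = 0, min = 1e300;
          long pid, n = 0;

          while (scanf("%ld %lf", &pid, &x) == 2) {
                  sum += x;
                  sumsq += x * x;
                  if (x > max) max = x;
                  if (x < min) min = x;
                  n++;
          }
          if (n == 0)
                  return 1;
          printf("avg %.0f  max %.0f  min %.0f  stdev %.0f\n",
                 sum / n, max, min,
                 sqrt(sumsq / n - (sum / n) * (sum / n)));
          return 0;
  }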

Result (with Mike's patch)
==========================

  +---------+-----------+------+------+------+--------+
  |   # of  |   # of    | avg  | max  | min  |  stdev |
  |   CPUs  | processes |      |      |      |        |
  +---------+-----------+------+------+------+--------+
  | 1(i386) |       200 |  154 | 1114 |    1 |    210 |
  +---------+-----------+------+------+------+--------+
  |         |           |  373 | 1328 |  108 |    246 |
  | 2(ia64) |       400 +------+------+------+--------+
  |         |           |  186 | 1169 |    1 |    211 |
  +---------+-----------+------+------+------+--------+

I also gathered the data while varying the number of processes in the
1 CPU (i386) case:

  +---------+-----------+------+------+------+--------+
  |   # of  |   # of    | avg  | max  | min  |  stdev |
  |   CPUs  | processes |      |      |      |        |
  +---------+-----------+------+------+------+--------+
  |         |        25 | 1208 | 1787 |  987 |    237 |
  |         +-----------+------+------+------+--------+
  |         |        50 |  868 | 1631 |  559 |    275 |
  | 1(i386) +-----------+------+------+------+--------+
  |         |       100 |  319 | 1017 |   25 |    232 |
  |         +-----------+------+------+------+--------+
  |         |   200(*1) |  154 | 1114 |    1 |    210 |
  +---------+-----------+------+------+------+--------+

  *1) Same as in the above table, repeated here for easy comparison

The result seems to depend heavily on the number of processes, and at
present Ingo's patch looks better.

Thanks,

Satoru
