Message-Id: <36851398363372@webcorp2g.yandex-team.ru>
Date:	Thu, 24 Apr 2014 22:16:12 +0400
From:	Roman Gushchin <klamm@...dex-team.ru>
To:	LKML <linux-kernel@...r.kernel.org>, mingo@...hat.com,
	peterz@...radead.org, tkhai@...dex.ru
Subject: Real-time scheduling policies and hyper-threading

Hello!


I spent some time investigating why switching runtime* tasks to real-time scheduling policies increases
response-time dispersion, while the opposite was expected.

The main reason is hyper-threading. The rt scheduler only tries to load all logical CPUs, selecting the topologically
closest one when the current CPU is busy. If hyper-threading is enabled, this strategy is counter-productive:
tasks suffer on busy HT siblings while there are plenty of idle physical cores.
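For illustration only, here is a hypothetical userspace sketch of the kind of placement meant above (not a kernel change; the function names are invented, and it only assumes the usual sysfs topology files and glibc's pthread_setaffinity_np). It picks one logical CPU per physical core from thread_siblings_list and pins each worker there, so HT siblings are only shared once every core already has a worker:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/* First logical CPU listed in cpuN's thread_siblings_list, i.e. a
 * representative of the physical core cpuN belongs to. */
static int core_representative(int cpu)
{
	char path[128];
	int rep = cpu;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
		 cpu);
	f = fopen(path, "r");
	if (!f)
		return cpu;		/* no topology info: use the CPU itself */
	if (fscanf(f, "%d", &rep) != 1)
		rep = cpu;
	fclose(f);
	return rep;
}

/* Pin a worker to one logical CPU per physical core, chosen round-robin
 * by worker index. */
static int pin_worker_to_core(pthread_t tid, int worker_idx)
{
	long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
	int reps[1024];		/* one representative CPU per physical core */
	int nreps = 0, cpu;
	cpu_set_t set;

	for (cpu = 0; cpu < ncpus && nreps < 1024; cpu++)
		if (core_representative(cpu) == cpu)
			reps[nreps++] = cpu;
	if (!nreps)
		return -1;

	CPU_ZERO(&set);
	CPU_SET(reps[worker_idx % nreps], &set);
	return pthread_setaffinity_np(tid, sizeof(set), &set);
}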

Also, the rt scheduler doesn't try to balance rt load between physical CPUs. This is significant because of
turbo-boost and frequency-scaling technologies: per-core performance depends on the number of
idle cores in the same physical CPU.
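Again just a sketch with invented names, assuming only the sysfs physical_package_id file: a dispatcher could spread workers evenly across packages by tracking how many it has already placed on each one, so no single package loses its turbo headroom first:

#include <stdio.h>

/* Package id of cpuN, or -1 if the topology file is missing. */
static int package_of(int cpu)
{
	char path[128];
	int pkg = -1;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/topology/physical_package_id",
		 cpu);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%d", &pkg) != 1)
		pkg = -1;
	fclose(f);
	return pkg;
}

/* Place the next worker on the package with the fewest workers so far. */
static int least_loaded_package(const int *workers_per_pkg, int npkgs)
{
	int best = 0, pkg;

	for (pkg = 1; pkg < npkgs; pkg++)
		if (workers_per_pkg[pkg] < workers_per_pkg[best])
			best = pkg;
	return best;
}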


Are there any known solutions to this problem other than disabling hyper-threading and frequency scaling entirely?

Are there any plans to enhance the load-balancing algorithm in the rt scheduler?

Does anyone use the rt scheduler for runtime-like CPU-bound tasks?


Why not just use CFS? :-)
The rt scheduler with modified load balancing shows much better results.
I have a prototype (still incomplete and with many dirty hacks) that shows a 10-15%
performance increase in our production.


(*) A simplified model can be described as follows:
there is one process per machine, with one thread that receives requests from the network and puts them into a queue,
and n (n ~ NCPU + 1) worker threads that take requests from the queue and handle them.
The load is CPU-bound, tens of milliseconds per request. Typical CPU load is between 40% and 70%.
A typical system has two physical x86-64 CPUs with 8-16 physical cores each (x2 with hyper-threading).
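For reference, a minimal self-contained sketch of this model (hypothetical names, not the production code): a dispatcher thread feeds a mutex/condvar queue, and NCPU + 1 workers pop requests from it; each worker tries to switch itself to SCHED_FIFO via pthread_setschedparam, which needs CAP_SYS_NICE or a suitable RLIMIT_RTPRIO:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

#define QUEUE_CAP 1024

struct request { int id; };

static struct request queue[QUEUE_CAP];
static int q_head, q_tail, q_len;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t q_nonempty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t q_nonfull = PTHREAD_COND_INITIALIZER;

/* Dispatcher side: block while the queue is full, then append. */
static void enqueue(struct request r)
{
	pthread_mutex_lock(&q_lock);
	while (q_len == QUEUE_CAP)
		pthread_cond_wait(&q_nonfull, &q_lock);
	queue[q_tail] = r;
	q_tail = (q_tail + 1) % QUEUE_CAP;
	q_len++;
	pthread_cond_signal(&q_nonempty);
	pthread_mutex_unlock(&q_lock);
}

/* Worker side: block while the queue is empty, then take the head. */
static struct request dequeue(void)
{
	struct request r;

	pthread_mutex_lock(&q_lock);
	while (q_len == 0)
		pthread_cond_wait(&q_nonempty, &q_lock);
	r = queue[q_head];
	q_head = (q_head + 1) % QUEUE_CAP;
	q_len--;
	pthread_cond_signal(&q_nonfull);
	pthread_mutex_unlock(&q_lock);
	return r;
}

static void *worker(void *arg)
{
	struct sched_param sp = { .sched_priority = 1 };

	(void)arg;
	/* Try to switch this worker to SCHED_FIFO; ignored if unprivileged. */
	pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);

	for (;;) {
		struct request r = dequeue();
		/* handle_request(r): tens of milliseconds of CPU-bound work */
		(void)r;
	}
	return NULL;
}

int main(void)
{
	long n = sysconf(_SC_NPROCESSORS_ONLN) + 1;	/* n ~ NCPU + 1 */
	pthread_t tid;
	int i;

	for (i = 0; i < n; i++)
		pthread_create(&tid, NULL, worker, NULL);

	/* In the real service requests arrive from the network; here they
	 * are just generated at a fixed pace. */
	for (i = 0; ; i++) {
		enqueue((struct request){ .id = i });
		usleep(1000);
	}
	return 0;
}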


Thanks,
Roman
