Message-Id: <1247644212.7500.202.camel@twins>
Date:	Wed, 15 Jul 2009 09:50:12 +0200
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Fabio Checconi <fchecconi@...il.com>
Cc:	mingo@...e.hu, linux-kernel@...r.kernel.org,
	Gregory Haskins <ghaskins@...ell.com>
Subject: Re: [RFC][PATCH 0/8] Use EDF to throttle RT task groups

On Thu, 2009-07-09 at 15:51 +0200, Fabio Checconi wrote:

> I was thinking about doing things gradually: first extend throttling
> to handle generic periods, then extend the push-pull logic (I think you
> are referring to it with load-balancing) to fully support it, and then
> think about global EDF.  I think it would be difficult to do all the
> things at one time.

Agreed.

> About minimal concurrency group scheduling, I am not sure of how we
> would handle CPUs hot insertion/extraction, or how the allocation would
> be done efficiently (avoiding bin-packing issues) online inside the kernel.

Right, since the current interface specifies bandwidth in a single-cpu
normalized fashion, adding/removing cpus will only affect the total
bandwidth available, but should not affect the bandwidth calculations.

So it should not break anything, though it might surprise. Then again,
hotplug is an explicit action on behalf of the admin, so he pretty much
gets what he asked for :-)
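To make the single-cpu normalization concrete, here is a minimal userspace sketch of the admission test it implies (hypothetical helper names, not the actual kernel code): each group's utilization is runtime/period measured against one CPU, so plugging or unplugging CPUs only rescales the capacity side of the inequality, not the per-group terms.

```c
#include <assert.h>

/* Hypothetical per-group bandwidth, normalized to a single CPU:
 * utilization = runtime_us / period_us, independent of CPU count.
 * Assumes period_us > 0. */
struct group_bw {
	unsigned long runtime_us;
	unsigned long period_us;
};

/* Admission test: sum of single-cpu-normalized utilizations must not
 * exceed the number of online CPUs.  Adding/removing CPUs changes only
 * the right-hand side; the per-group terms are untouched.
 * Uses fixed-point math (ppm of one CPU) to stay in integers.
 * Returns 1 if the group set fits, 0 otherwise. */
static int bw_admissible(const struct group_bw *g, int ngroups, int nr_cpus)
{
	unsigned long long scale = 1000000ULL;	/* ppm of one CPU */
	unsigned long long total = 0;
	int i;

	for (i = 0; i < ngroups; i++)
		total += scale * g[i].runtime_us / g[i].period_us;

	return total <= (unsigned long long)nr_cpus * scale;
}
```

With this shape, two groups at 50% each fit on one CPU, three do not, but the same three fit unchanged once a second CPU comes online.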

I might have to re-read that min-concurrency G-EDF paper again, but I
failed to spot the bin-packing issue.

> To adapt the current load-balancer to the choices of the deadline-based
> scheduler I was thinking about using a cpupri-like structure per task_group,
> but now I'm not able to estimate the resulting overhead...

Right, per task_group sounds about the right level for the FIFO
balancer. It gets a little more complicated due to having a dynamic
number of vcpus being served at any one time though.

This will also lead to extra task migrations, but sure, whatever works
first, then make it better.
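The cpupri-like idea above could be sketched roughly as follows (a hypothetical userspace model, not the kernel's actual cpupri/cpudl code): per task_group, track the deadline each CPU is currently serving for that group, and answer "which CPU is serving the latest deadline?" as the natural push target for a task with an earlier one.

```c
#include <assert.h>

#define NR_CPUS_SKETCH 8

/* Hypothetical per-task_group deadline table: dl[cpu] holds the
 * absolute deadline currently being served for this group on that
 * CPU (0 meaning the CPU runs nothing for this group, i.e. it is
 * trivially the best push target). */
struct group_dl {
	unsigned long long dl[NR_CPUS_SKETCH];
};

/* Find the CPU serving the latest (least urgent) deadline for this
 * group.  A task with an earlier deadline than the returned CPU's
 * current one would be pushed there; this is a linear scan, whereas
 * cpupri answers the analogous FIFO-priority query in O(levels). */
static int group_dl_latest_cpu(const struct group_dl *g)
{
	int cpu, best = 0;

	for (cpu = 1; cpu < NR_CPUS_SKETCH; cpu++)
		if (g->dl[cpu] > g->dl[best])
			best = cpu;
	return best;
}
```

The overhead question is exactly the cost of keeping one such table current per group as vcpus come and go, rather than one global table as in the FIFO case.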

> Do you think that this gradual approach makes sense?

Yeah it does ;-)

