Date:	Fri, 16 Sep 2011 01:22:37 -0700
From:	Paul Turner <pjt@...gle.com>
To:	linux-kernel@...r.kernel.org
Cc:	Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
	Vladimir Davydov <vdavydov@...allels.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Bharata B Rao <bharata@...ux.vnet.ibm.com>,
	Dhaval Giani <dhaval.giani@...il.com>,
	Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...e.hu>,
	Pavel Emelianov <xemul@...allels.com>
Subject: Re: CFS Bandwidth Control - Test results of cgroups tasks pinned
 vs unpinned

On 09/07/11 08:20, Srivatsa Vaddagiri wrote:
> [Apologies if you get this email multiple times - there is some email
> client config issue that I am fixing up]
>
> * Paul Turner <pjt@...gle.com> [2011-06-21 12:48:17]:
>
>> Hi Kamalesh,
>>
>> Can you see what things look like under v7?
>>
>> There have been a few improvements to quota re-distribution that should
>> hopefully help your test case.
>>
>> The remaining idle% I see on my machines appears to be a product of
>> load-balancer inefficiency.
>

Hey Srivatsa,

Thanks for taking another look at this -- sorry for the delayed reply!

> which is quite a complex problem to solve! I am still surprised that
> we can't handle 32 cpu hogs on a 16-cpu system very easily. The tasks seem to
> hop around madly rather than settle down at 2 tasks/cpu. Kamalesh, can you post
> the exact count of migrations we saw on the latest tip over a 20-sec window?
>
> Anyway, here's a "hack" to minimize the idle time induced by load-balance
> issues. It brings idle time down from 7+% to ~0%. I am not too happy about
> this, but I don't see any simpler solution that completely fixes the idle-time
> issue (other than making the load balancer completely fair!).

Hum,

BWC returns bandwidth to the parent on voluntary sleep, so the most we 
can really lose is NR_CPUS * 1ms (the amount a cpu keeps in case the 
entity re-wakes quickly).  Technically we could lose another few ms if 
there's not enough BW left to be worth distributing and we're near the 
end of the period, but I think that works out to another 6ms or so at 
worst.
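
To put a number on that bound, here's a tiny standalone sketch.  The 1ms 
per-cpu stash and the ~6ms end-of-period slack are just the figures 
above; the constant names are made up for this illustration:

/* Back-of-the-envelope version of the bound above: NR_CPUS * 1ms of
 * runtime parked on cpus for a quick re-wake, plus ~6ms of undistributed
 * slack near period end.  Only the 1ms and ~6ms figures come from the
 * discussion; the names below are invented.
 */
#include <stdio.h>

#define NR_CPUS                16   /* the 16-cpu box in Kamalesh's test */
#define PER_CPU_STASH_MS        1   /* runtime a cpu keeps on voluntary sleep */
#define END_OF_PERIOD_SLACK_MS  6   /* "another 6ms or so at worst" */

int main(void)
{
	printf("worst-case unreturned bandwidth: ~%d ms per period\n",
	       NR_CPUS * PER_CPU_STASH_MS + END_OF_PERIOD_SLACK_MS);
	return 0;
}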

As discussed in the long thread dangling off this, it's load balance 
that's at fault -- allowing steal time just hides this by letting cpus 
run over quota within a period.

If, for example, you set up a deadline-oriented test that tried to 
accomplish the same amount of work (without bandwidth limits) and threw 
away whatever was left when it hit period expiration (a benchmark I've 
been meaning to write and publish as a more general load-balance test, 
actually), then I suspect we'd see similar problems.  Sadly, that case 
is both more representative of real-world performance and not fixable 
by something like steal time.
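
For concreteness, here's a rough userspace sketch of what I have in mind 
for that benchmark -- not something I've actually written or measured; 
the period length, unit size, and task count are arbitrary placeholders 
(build with -pthread):

/* Each "period", every task attempts a fixed amount of work and simply
 * discards whatever did not finish by period expiration, then reports
 * completed vs. attempted units at the end.  All parameters are
 * illustrative.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define NR_TASKS         32           /* 2 hogs per cpu on a 16-cpu box */
#define NR_PERIODS       100
#define PERIOD_NS        100000000LL  /* 100ms "period", arbitrary */
#define UNITS_PER_PERIOD 50           /* work attempted each period */

static atomic_long completed_units;

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* One unit of cpu-bound busy work. */
static void do_unit(void)
{
	volatile unsigned long x = 0;

	for (unsigned long i = 0; i < 2000000UL; i++)
		x += i;
}

static void *worker(void *arg)
{
	(void)arg;
	for (int p = 0; p < NR_PERIODS; p++) {
		long long deadline = now_ns() + PERIOD_NS;
		int done = 0;

		/* Attempt a fixed amount of work; whatever misses the
		 * deadline is thrown away rather than carried over. */
		for (int u = 0; u < UNITS_PER_PERIOD && now_ns() < deadline; u++) {
			do_unit();
			done++;
		}
		atomic_fetch_add(&completed_units, done);

		/* Sleep out any remainder of the period. */
		long long rest = deadline - now_ns();
		if (rest > 0) {
			struct timespec ts = { rest / 1000000000LL,
					       rest % 1000000000LL };
			nanosleep(&ts, NULL);
		}
	}
	return NULL;
}

int main(void)
{
	pthread_t tids[NR_TASKS];

	for (int i = 0; i < NR_TASKS; i++)
		pthread_create(&tids[i], NULL, worker, NULL);
	for (int i = 0; i < NR_TASKS; i++)
		pthread_join(tids[i], NULL);

	printf("completed %ld of %lld attempted units\n",
	       atomic_load(&completed_units),
	       (long long)NR_TASKS * NR_PERIODS * UNITS_PER_PERIOD);
	return 0;
}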

So... we're probably better off trying to improve LB; I raised it in 
another reply on this chain, but the NOHZ vs. ticks ilb numbers look 
like a pretty compelling area for improvement in this regard.

Thanks!

- Paul
>
> --
>
> Fix excessive idle time reported when cgroups are capped.  The patch
> introduces the notion of "steal" (or "grace") time, which is the surplus
> time/bandwidth each cgroup is allowed to consume, subject to a maximum
> steal time (sched_cfs_max_steal_time_us). Cgroups are allowed this "steal"
> or "grace" time when the lone task running on a cpu is about to be throttled.

