Message-ID: <20110114174741.GB28632@linux.vnet.ibm.com>
Date:	Fri, 14 Jan 2011 23:17:41 +0530
From:	Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
To:	Rik van Riel <riel@...hat.com>
Cc:	kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
	Avi Kivity <avi@...hat.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Mike Galbraith <efault@....de>,
	Chris Wright <chrisw@...s-sol.org>, ttracy@...hat.com,
	dshaks@...hat.com
Subject: Re: [RFC -v5 PATCH 2/4] sched: Add yield_to(task, preempt)
 functionality.

On Fri, Jan 14, 2011 at 03:03:57AM -0500, Rik van Riel wrote:
> From: Mike Galbraith <efault@....de>
> 
> Currently only implemented for fair class tasks.
> 
> Add a yield_to_task() method to the fair scheduling class, allowing the
> caller of yield_to() to accelerate another thread in its thread group /
> task group.
> 
> Implemented via a scheduler hint, using cfs_rq->next to encourage the
> target being selected.  We can rely on pick_next_entity to keep things
> fair, so no one can accelerate a thread that has already used its fair
> share of CPU time.
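
(For reference, a minimal sketch of the buddy-hint mechanism described
above -- illustrative only, assuming helpers such as set_next_buddy(),
cfs_rq_of() and for_each_sched_entity() from kernel/sched_fair.c; the
actual patch may differ in detail:)

/*
 * Sketch, not the patch itself: record the target entity in cfs_rq->next
 * so that pick_next_entity() prefers it, then yield the current task.
 * pick_next_entity() still applies its fairness checks, so a buddy that
 * has already consumed its fair share of CPU time will not be selected.
 */
static void set_next_buddy(struct sched_entity *se)
{
	/* Propagate the hint up the group hierarchy. */
	for_each_sched_entity(se)
		cfs_rq_of(se)->next = se;
}

static bool yield_to_task_fair(struct rq *rq, struct task_struct *p,
			       bool preempt)
{
	struct sched_entity *se = &p->se;

	if (!se->on_rq)
		return false;

	set_next_buddy(se);	/* encourage @p to be picked next */
	yield_task_fair(rq);	/* give up the CPU on this runqueue */
	return true;
}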

If I recall correctly, one of the motivations for yield_to_task (rather than
a simple yield) was to avoid leaking bandwidth to other guests, i.e. we don't
want the remaining timeslice of a spinning vcpu to be given away to other
guests, but rather to donate it to another (lock-holding) vcpu and thus retain
the bandwidth allocated to the guest.
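
(As a rough illustration of what that looks like on the KVM side -- a
hedged sketch only; the vcpu->task pointer and the exact loop are
assumptions based on the rest of this series, not the posted code:)

/*
 * Rough sketch: a vcpu that detects it is spinning donates the rest of
 * its timeslice to a runnable sibling vcpu of the same guest instead of
 * calling plain yield(), so the bandwidth stays within the guest.
 */
static void donate_slice_to_sibling(struct kvm_vcpu *me)
{
	struct kvm *kvm = me->kvm;
	struct kvm_vcpu *vcpu;
	int i;

	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (vcpu == me)
			continue;
		if (waitqueue_active(&vcpu->wq))
			continue;	/* halted vcpu, not the lock holder */
		/* vcpu->task tracking is assumed from an earlier patch. */
		yield_to(vcpu->task, 1);
		break;
	}
}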

I am not sure whether we are meeting that objective via this patch, as the
lock-spinning vcpu would simply yield after setting the next buddy to the
preferred vcpu on the target pcpu, thereby leaking some amount of bandwidth
on the pcpu where it is spinning. It would be nice to see what kind of
fairness impact this has under heavy contention scenarios.
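
(For concreteness, the sequence I am concerned about, sketched very
roughly from the patch description -- runqueue locking and error
handling omitted, so this is not the actual yield_to() code:)

bool yield_to(struct task_struct *p, bool preempt)
{
	struct rq *rq = this_rq();
	struct rq *p_rq = task_rq(p);
	bool yielded;

	/* 1. Hint the target pcpu's runqueue to pick @p next ... */
	yielded = current->sched_class->yield_to_task(rq, p, preempt);
	if (yielded && preempt && rq != p_rq)
		resched_task(p_rq->curr);

	/*
	 * 2. ... then give up the CPU *here*.  Whatever remains of the
	 * spinning vcpu's slice on this pcpu goes to whichever task CFS
	 * picks next, which may belong to a different guest -- that is
	 * the leak referred to above.
	 */
	if (yielded)
		schedule();

	return yielded;
}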

- vatsa
