Message-ID: <1294167430.6169.227.camel@Palantir>
Date:	Tue, 04 Jan 2011 19:57:10 +0100
From:	Dario Faggioli <raistlin@...ux.it>
To:	Lucas De Marchi <lucas.de.marchi@...il.com>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Gregory Haskins <ghaskins@...ell.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...e.hu>, Mike Galbraith <efault@....de>,
	Dhaval Giani <dhaval@...is.sssup.it>,
	Fabio Checconi <fabio@...dalf.sssup.it>,
	Darren Hart <darren@...art.com>, oleg <oleg@...hat.com>,
	paulmck <paulmck@...ux.vnet.ibm.com>, pjt@...gle.com,
	bharata@...ux.vnet.ibm.co
Subject: Re: [RFC][PATCH 0/3] Refactoring sched_entity and sched_rt_entity.

On Tue, 2011-01-04 at 16:29 -0200, Lucas De Marchi wrote: 
> > Mmm... do I? While I'm cloning your git, could you elaborate a bit on
> > why, because I don't seem to see that... :-P
> 
> Suppose an RT task blocks on a PI-mutex: the lock owner will be boosted
> to RT and go through a class change in rt_mutex_setprio().
> Since a class change now reinitializes the class-specific fields, if the
> fair and rt fields share the same memory, we need to save the
> sched_fair_entity before changing the class to RT and restore it when
> going back to the fair class.
> 
Well, I know, but you're deactivating+dequeueing and then
activating+enqueueing it back and forth within the proper scheduling
class, so that shouldn't be a big deal...

Actually, the point might be that forgetting something like vruntime
would then lead to unexpected behaviour when the task goes back to
fair scheduling, but do we need to cache the whole sched_[cfs|
fair]_entity for that?

> Quoting Peter about this:
> 
>  [ Initially I was thinking not, because since the task slept we'll
> have to reinsert it into the rb-tree anyway, but upon further
> consideration that'll lose the old vruntime setting, which can lead to
> an unseemly gain of time via place_entity()'s never-backward check
> failing.
> 
> So yes, we'd have to place a copy of the old sched_entity in struct
> rt_mutex_waiter, not very hard to do. ]
> 
Ok, exactly, I now see that, and it's probably not hard to do... But I'm
now thinking of how many fields must be saved to avoid this. Surely, not
these ones:
... 
	struct rb_node          run_node;
	struct list_head        group_node;

#ifdef CONFIG_FAIR_GROUP_SCHED
	struct sched_cfs_entity *parent;
	/* rq on which this entity is (to be) queued: */
	struct cfs_rq           *cfs_rq;
	/* rq "owned" by this entity/group: */
	struct cfs_rq           *my_q;
#endif
...

I'll double check, but if it's just a matter of vruntime and a couple of
other u64s, isn't it worth just saving them?
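For illustration, a minimal user-space sketch of the idea under
discussion: cache only the few u64 fair-class fields across a
boost/deboost cycle, rather than the whole entity. All names here
(cfs_timing, pi_waiter, boost_to_rt, ...) are hypothetical stand-ins,
not the kernel's actual structures or helpers:

```c
#include <stdint.h>

typedef uint64_t u64;

/* Hypothetical subset of fair-class fields worth preserving across a
 * PI boost; illustrative names, not the kernel's. */
struct cfs_timing {
	u64 vruntime;
	u64 sum_exec_runtime;
};

struct task {
	int policy;             /* 0 = fair, 1 = rt (simplified) */
	struct cfs_timing cfs;  /* reinitialized on class change here */
};

struct pi_waiter {
	struct cfs_timing saved; /* stand-in for a copy stashed in
	                          * struct rt_mutex_waiter */
};

/* Boost: save only the few u64s, then let the class change
 * reinitialize the fair-class state. */
static void boost_to_rt(struct task *t, struct pi_waiter *w)
{
	w->saved = t->cfs;
	t->cfs = (struct cfs_timing){ 0, 0 };
	t->policy = 1;
}

/* Deboost: restore the cached timing so a zeroed vruntime does not
 * defeat place_entity()'s never-backward check. */
static void deboost_to_fair(struct task *t, const struct pi_waiter *w)
{
	t->policy = 0;
	t->cfs = w->saved;
}
```

The point being: if only these fields matter, the waiter needs to carry
two u64s rather than a full sched_entity copy.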

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
----------------------------------------------------------------------
Dario Faggioli, ReTiS Lab, Scuola Superiore Sant'Anna, Pisa  (Italy)

http://retis.sssup.it/people/faggioli -- dario.faggioli@...ber.org
