Message-ID: <20181012083010.GV9130@localhost.localdomain>
Date:   Fri, 12 Oct 2018 10:30:10 +0200
From:   Juri Lelli <juri.lelli@...hat.com>
To:     luca abeni <luca.abeni@...tannapisa.it>
Cc:     Peter Zijlstra <peterz@...radead.org>, mingo@...hat.com,
        rostedt@...dmis.org, tglx@...utronix.de,
        linux-kernel@...r.kernel.org, claudio@...dence.eu.com,
        tommaso.cucinotta@...tannapisa.it, alessio.balsini@...il.com,
        bristot@...hat.com, will.deacon@....com,
        andrea.parri@...rulasolutions.com, dietmar.eggemann@....com,
        patrick.bellasi@....com, henrik@...tad.us,
        linux-rt-users@...r.kernel.org
Subject: Re: [RFD/RFC PATCH 5/8] sched: Add proxy execution

On 12/10/18 09:22, luca abeni wrote:
> On Thu, 11 Oct 2018 14:53:25 +0200
> Peter Zijlstra <peterz@...radead.org> wrote:
> 
> [...]
> > > > > +	if (rq->curr != rq->idle) {
> > > > > +		rq->proxy = rq->idle;
> > > > > +		set_tsk_need_resched(rq->idle);
> > > > > +		/*
> > > > > +		 * XXX [juril] don't we still need to migrate @next to
> > > > > +		 * @owner's CPU?
> > > > > +		 */
> > > > > +		return rq->idle;
> > > > > +	}  
> > > > 
> > > > If I understand correctly, this code ends up migrating the task
> > > > only if the CPU was previously idle (and it schedules the idle
> > > > task if the CPU was not previously idle)?
> > > > 
> > > > Out of curiosity (I admit this is my ignorance), why is this
> > > > needed? If I understand correctly, after scheduling the idle task
> > > > the scheduler will be invoked again (because of the
> > > > set_tsk_need_resched(rq->idle)), but I do not understand why it is
> > > > not possible to migrate task "p" immediately (I would just check
> > > > "rq->curr != p", to avoid migrating the currently scheduled
> > > > task).  
> [...]
> > I think it was the safe and simple choice; note that we're not
> > migrating just a single @p, but a whole chain of @p.
> 
> Ah, that's the point I was missing... Thanks for explaining, now
> everything looks more clear!
> 
> 
> But... Here is my next dumb question: once the tasks are migrated to
> the other runqueue, what prevents the scheduler from migrating them
> back? In particular, task p: if it is (for example) a fixed-priority
> task and is on this runqueue, it is probably because the FP invariant
> wants it here... So, the push mechanism might end up migrating p back
> to this runqueue soon... No?

Not if p is going to be proxying for owner on owner's rq.
OTOH, I guess we might have counter-migrations generated by push/pull
decisions. Maybe we should remove potential proxies from the pushable
list? We'd still have the same problem for FAIR, though.
In general it makes sense to me that potential proxies shouldn't
participate in load balancing while they wait to be activated by the
mutex owner; they are effectively sleeping, even though formally they
are not.
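
For RT, something along these lines is what I have in mind (untested,
purely illustrative and not part of the posted series; task_is_blocked()
is a placeholder for whatever test we end up with, e.g. looking at
p->blocked_on):

/*
 * Sketch only: treat a task blocked on a mutex (a potential proxy)
 * as not pushable, so push/pull doesn't migrate it away while it
 * waits for the owner to activate it.
 */
static inline bool task_is_blocked(struct task_struct *p)
{
	return !!p->blocked_on;
}

/* in enqueue_task_rt(), where the pushable list is maintained */
if (!task_current(rq, p) && p->nr_cpus_allowed > 1 &&
    !task_is_blocked(p))
	enqueue_pushable_task(rq, p);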

> Another question: if I understand correctly, when a task p "blocks" on
> a mutex the proxy mechanism migrates it (and the whole chain of blocked
> tasks) to the owner's core... Right?
> Now, I understand why this is simpler to implement, but from the
> schedulability point of view shouldn't we migrate the owner to p's core
> instead?

I guess the most important reason is that we need to respect owner's
affinity: p (and the rest of the chain) might have an affinity mask
that is incompatible with owner's, so pulling owner to p's CPU isn't
always possible.
