Message-ID: <20141007123109.GG19379@twins.programming.kicks-ass.net>
Date: Tue, 7 Oct 2014 14:31:09 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Juri Lelli <juri.lelli@....com>
Cc: "mingo@...hat.com" <mingo@...hat.com>,
"juri.lelli@...il.com" <juri.lelli@...il.com>,
"raistlin@...ux.it" <raistlin@...ux.it>,
"michael@...rulasolutions.com" <michael@...rulasolutions.com>,
"fchecconi@...il.com" <fchecconi@...il.com>,
"daniel.wagner@...-carit.de" <daniel.wagner@...-carit.de>,
"vincent@...out.info" <vincent@...out.info>,
"luca.abeni@...tn.it" <luca.abeni@...tn.it>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Li Zefan <lizefan@...wei.com>,
"cgroups@...r.kernel.org" <cgroups@...r.kernel.org>
Subject: Re: [PATCH 2/3] sched/deadline: fix bandwidth check/update when
migrating tasks between exclusive cpusets
On Tue, Oct 07, 2014 at 09:59:54AM +0100, Juri Lelli wrote:
> Hi Peter,
>
> On 19/09/14 22:25, Peter Zijlstra wrote:
> > On Fri, Sep 19, 2014 at 10:22:40AM +0100, Juri Lelli wrote:
> >> Exclusive cpusets are the only way users can restrict SCHED_DEADLINE
> >> tasks' affinity (performing what is commonly called clustered
> >> scheduling). Unfortunately, this is currently broken for two reasons:
> >>
> >> - No check is performed when the user tries to attach a task to
> >> an exclusive cpuset (recall that exclusive cpusets have an
> >> associated maximum allowed bandwidth).
> >>
> >> - Bandwidths of source and destination cpusets are not correctly
> >> updated after a task is migrated between them.
> >>
> >> This patch fixes both things at once, as they are opposite faces
> >> of the same coin.
> >>
> >> The check is performed in cpuset_can_attach(), as there aren't any
> >> points of failure after that function. The update is split in two
> >> halves: we first reserve bandwidth in the destination cpuset, once
> >> the check in cpuset_can_attach() passes, and we then release
> >> bandwidth from the source cpuset when the task's affinity is
> >> actually changed. Even though there can be time windows during which
> >> sched_setattr() may erroneously fail in the source cpuset, we accept
> >> that, as we can't perform an atomic update of both cpusets at once.
> >
> > What I cannot find is whether we correctly deal with updates to the
> > cpusets themselves. Say we first set up 2 (exclusive) sets A:cpu0
> > B:cpu1-3, then assign tasks, and then update the cpu masks like:
> > B:cpu2,3, A:cpu1,2.
> >
>
> So, what follows should address the problem you describe.
>
> Assuming you intended that we try to update the masks to A:cpu0,3 and
> B:cpu1,2, with the change below we are able to check that removing
> cpu3 from B doesn't break guarantees. After that, cpu3 can be put in A.
>
> Does it make any sense?
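[Editor's note: the guarantee check described above, that shrinking a set must not invalidate bandwidth already granted, can be modeled as a simple capacity test. BW_UNIT and the linear per-CPU scaling are assumptions for illustration, not the kernel's actual accounting.]

```c
#include <stdbool.h>

/* Fixed-point "100% of one CPU"; a hypothetical unit, chosen only
 * to make the arithmetic concrete. */
#define BW_UNIT (1L << 20)

/* Removing one CPU from a set with ncpus CPUs is only safe if the
 * bandwidth already granted to the set's tasks still fits in the
 * reduced capacity of (ncpus - 1) CPUs. */
static bool can_remove_cpu(int ncpus, long allocated_bw)
{
	return allocated_bw <= (long)(ncpus - 1) * BW_UNIT;
}
```

In the example above: B:cpu1-3 may give up cpu3 only if its tasks' combined bandwidth fits on cpu1,2; once the check passes, cpu3 can then be added to A.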

Yeah, I think that about covers it. Could you write a changelog with it?

The reason I hadn't applied your patch #2 yet is that I thought it
triggered the splat reported in this thread. But later emails seem to
suggest this is a separate, pre-existing issue?