Message-ID: <20150909151134.GU16853@twins.programming.kicks-ass.net>
Date:	Wed, 9 Sep 2015 17:11:34 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Juri Lelli <juri.lelli@....com>
Cc:	mingo@...hat.com, linux-kernel@...r.kernel.org,
	Li Zefan <lizefan@...wei.com>, cgroups@...r.kernel.org
Subject: Re: [PATCH 1/4] sched/{cpuset,core}: restore complete root_domain
 status across hotplug

On Wed, Sep 02, 2015 at 11:01:33AM +0100, Juri Lelli wrote:
> Hotplug operations are destructive w.r.t. data associated with cpusets;
> in this case we care about root_domains. SCHED_DEADLINE stores bandwidth
> information for admitted tasks on root_domains, information that is
> gone when a hotplug operation happens. Also, it is not currently
> possible to tell which task(s) the allocated bandwidth belongs to, as
> this link is lost after sched_setscheduler() succeeds.
> 
> This patch forces a rebuild of the allocated bandwidth information at
> root_domain level after the cpuset_hotplug_workfn() callback is done
> setting up scheduling and root domains.

> +static void cpuset_hotplug_update_rd(void)
> +{
> +	struct cpuset *cs;
> +	struct cgroup_subsys_state *pos_css;
> +
> +	mutex_lock(&cpuset_mutex);
> +	rcu_read_lock();
> +	cpuset_for_each_descendant_pre(cs, pos_css, &top_cpuset) {
> +		if (!css_tryget_online(&cs->css))
> +			continue;
> +		rcu_read_unlock();
> +
> +		update_tasks_rd(cs);
> +
> +		rcu_read_lock();
> +		css_put(&cs->css);
> +	}
> +	rcu_read_unlock();
> +	mutex_unlock(&cpuset_mutex);
> +}
> +
> +/**
>   * cpuset_hotplug_workfn - handle CPU/memory hotunplug for a cpuset
>   *
>   * This function is called after either CPU or memory configuration has
> @@ -2296,6 +2335,8 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
>  	/* rebuild sched domains if cpus_allowed has changed */
>  	if (cpus_updated)
>  		rebuild_sched_domains();
> +
> +	cpuset_hotplug_update_rd();
>  }

So the problem is that rebuild_sched_domains() destroys rd->dl_bw? I
worry the above is racy in that you do not restore under the same
cpuset_mutex instance as you rebuild.

That is, what will stop a new task from joining the cpuset and
overloading the bandwidth between the root-domain getting rebuilt and
the bandwidth being restored?
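
To make that window concrete, here is a minimal, self-contained userspace
model of the accounting. It reduces the root_domain's struct dl_bw to a
capacity plus the sum of admitted bandwidth; dl_admit() and the numbers
are purely illustrative stand-ins, not the kernel's actual admission path
(dl_overflow() and friends):

/*
 * Illustration only, not kernel code: model of the gap between the
 * root_domain being rebuilt (accounting reset to zero) and the old
 * tasks' bandwidth being restored.
 */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

struct dl_bw {
	uint64_t bw;		/* capacity of the root_domain */
	uint64_t total_bw;	/* bandwidth of admitted tasks */
};

/* admission control: succeed only if the new task still fits */
static bool dl_admit(struct dl_bw *dl_b, uint64_t task_bw)
{
	if (dl_b->total_bw + task_bw > dl_b->bw)
		return false;
	dl_b->total_bw += task_bw;
	return true;
}

int main(void)
{
	/* before hotplug: 100 units of capacity, one task using 60 */
	struct dl_bw rd = { .bw = 100, .total_bw = 60 };

	/* hotplug: the root_domain is recreated, accounting starts at 0 */
	rd.total_bw = 0;

	/*
	 * Race window: cpuset_mutex has been dropped after the rebuild
	 * and not yet re-taken for the restore, so a new task is admitted
	 * against the empty accounting ...
	 */
	bool admitted = dl_admit(&rd, 80);	/* succeeds: 80 <= 100 */

	/* ... and only afterwards is the old task's bandwidth restored */
	rd.total_bw += 60;

	printf("new task admitted: %s, total_bw = %llu / %llu\n",
	       admitted ? "yes" : "no",
	       (unsigned long long)rd.total_bw,
	       (unsigned long long)rd.bw);	/* 140 / 100: overcommitted */
	return 0;
}

The point being that admission against the freshly rebuilt (empty)
accounting succeeds, and the later restore pushes total_bw past bw,
which a single consistent critical section would never have allowed.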
