Date:   Wed, 11 Oct 2023 15:52:38 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Ankit Jain <ankitja@...are.com>
Cc:     yury.norov@...il.com, andriy.shevchenko@...ux.intel.com,
        linux@...musvillemoes.dk, qyousef@...alina.io, pjt@...gle.com,
        joshdon@...gle.com, bristot@...hat.com, vschneid@...hat.com,
        linux-kernel@...r.kernel.org, namit@...are.com,
        amakhalov@...are.com, srinidhir@...are.com, vsirnapalli@...are.com,
        vbrahmajosyula@...are.com, akaher@...are.com,
        srivatsa@...il.mit.edu
Subject: Re: [PATCH RFC] cpumask: Randomly distribute the tasks within
 affinity mask

On Wed, Oct 11, 2023 at 01:46:42PM +0200, Peter Zijlstra wrote:

> Now, looking at the code, I don't think the current code actually
> behaves correctly in this case :-(, somewhere along the line we should
> truncate cpu_valid_mask to a single bit. Let me see where the sane place
> is to do that.

Something like this, I suppose; it limits newmask to the root_domain of the
first valid CPU, which should be a superset of the cpuset if there is such a
thing.


diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 23f638d431d6..334c5bc59160 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3081,6 +3081,29 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
 	return 0;
 }
 
+static struct cpumask *root_domain_allowed(struct cpumask *newmask,
+					   struct cpumask *scratch,
+					   struct cpumask *valid)
+{
+	struct root_domain *rd;
+	struct cpumask *mask;
+	struct rq *rq;
+
+	int first = cpumask_first_and(newmask, valid);
+	if (first >= nr_cpu_ids)
+		return NULL;
+
+	mask = cpumask_of(first);
+	rd = cpu_rq(first)->rd;
+	if (rd)
+		mask = rd->span;
+
+	if (!cpumask_and(scratch, newmask, mask))
+		return NULL;
+
+	return scratch;
+}
+
 /*
  * Called with both p->pi_lock and rq->lock held; drops both before returning.
  */
@@ -3113,6 +3136,13 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 		cpu_valid_mask = cpu_online_mask;
 	}
 
+	ctx->new_mask = root_domain_allowed(ctx->new_mask,
+					    rq->scratch_mask, cpu_valid_mask);
+	if (!ctx->new_mask) {
+		ret = -EINVAL;
+		goto out;
+	}
+
 	if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
 		ret = -EINVAL;
 		goto out;
