Message-ID: <20180814164905.GG2605@e110439-lin>
Date:   Tue, 14 Aug 2018 17:49:05 +0100
From:   Patrick Bellasi <patrick.bellasi@....com>
To:     Dietmar Eggemann <dietmar.eggemann@....com>
Cc:     linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Tejun Heo <tj@...nel.org>,
        "Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Paul Turner <pjt@...gle.com>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Todd Kjos <tkjos@...gle.com>,
        Joel Fernandes <joelaf@...gle.com>,
        Steve Muckle <smuckle@...gle.com>,
        Suren Baghdasaryan <surenb@...gle.com>
Subject: Re: [PATCH v3 03/14] sched/core: uclamp: add CPU's clamp groups
 accounting

Hi Dietmar!

On 14-Aug 17:44, Dietmar Eggemann wrote:
> On 08/06/2018 06:39 PM, Patrick Bellasi wrote:

[...]

> >+/**
> >+ * uclamp_cpu_put_id(): decrease reference count for a clamp group on a CPU
> >+ * @p: the task being dequeued from a CPU
> >+ * @cpu: the CPU from where the clamp group has to be released
> >+ * @clamp_id: the utilization clamp (e.g. min or max utilization) to release
> >+ *
> >+ * When a task is dequeued from a CPU's RQ, the CPU's clamp group reference
> >+ * counted by the task is decreased.
> >+ * If this was the last task defining the current max clamp group, then the
> >+ * CPU clamping is updated to find the new max for the specified clamp
> >+ * index.
> >+ */
> >+static inline void uclamp_cpu_put_id(struct task_struct *p,
> >+				     struct rq *rq, int clamp_id)
> >+{
> >+	struct uclamp_group *uc_grp;
> >+	struct uclamp_cpu *uc_cpu;
> >+	unsigned int clamp_value;
> >+	int group_id;
> >+
> >+	/* No task specific clamp values: nothing to do */
> >+	group_id = p->uclamp[clamp_id].group_id;
> >+	if (group_id == UCLAMP_NOT_VALID)
> >+		return;
> >+
> >+	/* Decrement the task's reference counted group index */
> >+	uc_grp = &rq->uclamp.group[clamp_id][0];
> >+#ifdef SCHED_DEBUG
> >+	if (unlikely(uc_grp[group_id].tasks == 0)) {
> >+		WARN(1, "invalid CPU[%d] clamp group [%d:%d] refcount\n",
> >+		     cpu_of(rq), clamp_id, group_id);
> >+		uc_grp[group_id].tasks = 1;
> >+	}
> >+#endif
> 
> This one indicates that there are some holes in your ref-counting.

Not really: this was not added because I've detected a refcount
issue, but because it was suggested as a possible safety check in a
previous code review comment:

   https://lore.kernel.org/lkml/20180720151156.GA31421@e110439-lin/

> It's probably easier to debug that there is still a task but the
> uc_grp[group_id].tasks value == 0 (A). I assume the other problem exists as
> well, i.e. last task and uc_grp[group_id].tasks > 1 (B)?
> 
> You have uclamp_cpu_[get/put](_id)() in [enqueue/dequeue]_task.
> 
> Patch 04/14 introduces its use in uclamp_task_update_active().
> 
> Do you know why (A) (and (B)) are happening?

I've never seen that warning in my tests so far; again, the warning
is there just to support testing/debugging whenever the refcounting
code is touched in the future. That's also the reason why it is
SCHED_DEBUG protected.

> >+	uc_grp[group_id].tasks -= 1;
> >+
> >+	/* If this is not the last task, no updates are required */
> >+	if (uc_grp[group_id].tasks > 0)
> >+		return;
> >+
> >+	/*
> >+	 * Update the CPU only if this was the last task of the group
> >+	 * defining the current clamp value.
> >+	 */
> >+	uc_cpu = &rq->uclamp;
> >+	clamp_value = uc_grp[group_id].value;
> >+	if (clamp_value >= uc_cpu->value[clamp_id])
> 
> 'clamp_value > uc_cpu->value[clamp_id]' should indicate another
> inconsistency in the uclamp machinery, right?

Here you're right; I would say that it should always be:

    clamp_value <= uc_cpu->value[clamp_id]

since this matches the update done at the end of uclamp_cpu_get_id():

   if (uc_cpu->value[clamp_id] < clamp_value)
        uc_cpu->value[clamp_id] = clamp_value;

Perhaps we can add another safety check here, similar to the one
above, to have something like:

    clamp_value = uc_grp[group_id].value;
#ifdef SCHED_DEBUG
    if (unlikely(clamp_value > uc_cpu->value[clamp_id])) {
        WARN(1, "invalid CPU[%d] clamp group [%d:%d] value\n",
                cpu_of(rq), clamp_id, group_id);
    }
#endif
    if (clamp_value == uc_cpu->value[clamp_id])
        uclamp_cpu_update(rq, clamp_id);
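
FWIW, the invariant can be demonstrated with a small self-contained
userspace model of the two paths (cpu_get(), cpu_put() and
cpu_update() are simplified stand-ins for uclamp_cpu_get_id(),
uclamp_cpu_put_id() and uclamp_cpu_update(), not the kernel
implementation): since get() only ever raises the CPU value to the
max of the active groups, at put() time a group's value can be at
most equal to the CPU value, and the rescan is needed only in the
equality case:

#include <stdio.h>

#define NR_GROUPS 4

struct clamp_group {
	unsigned int tasks;	/* tasks refcounting this group */
	unsigned int value;	/* clamp value of this group */
};

struct clamp_cpu {
	struct clamp_group group[NR_GROUPS];
	unsigned int value;	/* current max among active groups */
};

/* Rescan active groups for the new max (models uclamp_cpu_update()) */
static void cpu_update(struct clamp_cpu *uc_cpu)
{
	unsigned int max_value = 0;
	int i;

	for (i = 0; i < NR_GROUPS; i++) {
		if (uc_cpu->group[i].tasks > 0 &&
		    uc_cpu->group[i].value > max_value)
			max_value = uc_cpu->group[i].value;
	}
	uc_cpu->value = max_value;
}

/* Models uclamp_cpu_get_id(): refcount and raise the CPU max */
static void cpu_get(struct clamp_cpu *uc_cpu, int group_id)
{
	uc_cpu->group[group_id].tasks += 1;
	if (uc_cpu->value < uc_cpu->group[group_id].value)
		uc_cpu->value = uc_cpu->group[group_id].value;
}

/* Models uclamp_cpu_put_id(): release and rescan only when needed */
static void cpu_put(struct clamp_cpu *uc_cpu, int group_id)
{
	unsigned int clamp_value = uc_cpu->group[group_id].value;

	uc_cpu->group[group_id].tasks -= 1;
	if (uc_cpu->group[group_id].tasks > 0)
		return;

	/* Here clamp_value <= uc_cpu->value must always hold */
	if (clamp_value == uc_cpu->value)
		cpu_update(uc_cpu);
}

int main(void)
{
	struct clamp_cpu uc_cpu = {
		.group = { [0] = { .value = 20 }, [1] = { .value = 80 } },
	};

	cpu_get(&uc_cpu, 0);
	cpu_get(&uc_cpu, 1);
	printf("cpu max = %u\n", uc_cpu.value);	/* 80 */

	cpu_put(&uc_cpu, 1);	/* last task of the current max group */
	printf("cpu max = %u\n", uc_cpu.value);	/* rescan: back to 20 */
	return 0;
}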

-- 
#include <best/regards.h>

Patrick Bellasi
