Open Source and information security mailing list archives
 
Message-ID: <CADcy93U3t8acuA33QxBdwBhby=Ya1uKNA6+BMeSCBeShLywAtw@mail.gmail.com>
Date:	Wed, 4 Feb 2015 20:59:49 +0800
From:	Xunlei Pang <pang.xunlei@...aro.org>
To:	Steven Rostedt <rostedt@...dmis.org>
Cc:	Xunlei Pang <xlpang@....com>, lkml <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Juri Lelli <juri.lelli@...il.com>
Subject: Re: [PATCH RESEND 1/2] sched/rt: Check to push the task when changing
 its affinity

Hi Steve,

On 4 February 2015 at 11:14, Steven Rostedt <rostedt@...dmis.org> wrote:
> On Wed,  4 Feb 2015 09:12:20 +0800
> Xunlei Pang <xlpang@....com> wrote:
>
>> From: Xunlei Pang <pang.xunlei@...aro.org>
>>
>> +              */
>> +             cpumask_copy(&p->cpus_allowed, new_mask);
>> +             p->nr_cpus_allowed = new_weight;
>> +
>> +             if (task_running(rq, p) &&
>> +                 cpumask_test_cpu(task_cpu(p), new_mask) &&
>> +                 cpupri_find(&rq->rd->cpupri, p, NULL)) {
>
> Hmm, you called cpupri_find(), which should also return a mask of the
> CPUs with the lowest priorities. I wonder if we could utilize this
> information instead of doing it twice? Of course, things could change by
> the time the task migrates.

We do this when the target task is running, so we can migrate it only
via resched_curr() or stop_one_cpu(); that's all I can think of now :)
I think resched_curr() would be better.
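To make the branch structure we are discussing concrete, here is a minimal user-space sketch of the decision taken when a task's affinity changes. All names (struct task, after_affinity_change, the boolean fields) are hypothetical stand-ins for the kernel's struct rq / cpumask machinery, not the actual kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical user-space model of the affinity-change decision; the
 * real kernel types (struct rq, cpumask_t) are reduced to booleans. */
enum action { NONE, DIRECT_PUSH, PREEMPT_PUSH };

struct task {
    bool running;          /* models task_running(rq, p)            */
    bool still_on_cpu;     /* new mask still covers task_cpu(p)     */
    bool lower_cpu_exists; /* models cpupri_find() finding a target */
};

/* Mirrors the quoted hunk: a running task can only be moved by
 * rescheduling it off its CPU, while a queued (not running) task
 * can be pushed directly via push_rt_tasks(). */
static enum action after_affinity_change(const struct task *p)
{
    if (p->running && p->still_on_cpu && p->lower_cpu_exists)
        return PREEMPT_PUSH;   /* the resched_curr() path  */
    if (!p->running)
        return DIRECT_PUSH;    /* the push_rt_tasks() path */
    return NONE;
}
```

This is only an illustration of the control flow under discussion, not a substitute for the patch itself.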

>
>> +                     /*
>> +                      * At this point, current task gets migratable most
>> +                      * likely due to the change of its affinity, let's
>> +                      * figure out if we can migrate it.
>> +                      *
>> +                      * Is there any task with the same priority as that
>> +                      * of current task? If found one, we should resched.
>> +                      * NOTE: The target may be unpushable.
>> +                      */
>> +                     if (p->prio == rq->rt.highest_prio.next) {
>> +                             /* One target just in pushable_tasks list. */
>> +                             requeue_task_rt(rq, p, 0);
>> +                             preempt_push = 1;
>> +                     } else if (rq->rt.rt_nr_total > 1) {
>> +                             struct task_struct *next;
>> +
>> +                             requeue_task_rt(rq, p, 0);
>> +                             /* peek only */
>> +                             next = _pick_next_task_rt(rq, 1);
>> +                             if (next != p && next->prio == p->prio)
>> +                                     preempt_push = 1;
>> +                     }
>
> I'm thinking it would be better just to send an IPI to the CPU that
> figures this out and pushes a task off of itself.

My thought is that we should try our best not to disturb the running
task; actually, using a direct push_rt_tasks() here instead of an IPI is
similar to the logic in task_woken_rt().
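The equal-priority check in the quoted hunk can be sketched in isolation as well. This is a toy user-space model, assuming simplified field names (highest_prio_next, nr_total, next_peek_prio) in place of the real rq->rt fields and of what _pick_next_task_rt() would peek:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for the relevant rq->rt state; field names are
 * illustrative, not the actual kernel struct members. */
struct rt_rq_model {
    int highest_prio_next; /* prio of the next-highest queued task   */
    int nr_total;          /* models rq->rt.rt_nr_total              */
    int next_peek_prio;    /* prio _pick_next_task_rt() would return */
};

/* Should we requeue p and resched_curr()?  True when another queued
 * task ties with p's priority, so p can be preempted and pushed. */
static bool should_preempt_push(int p_prio, const struct rt_rq_model *rt)
{
    if (p_prio == rt->highest_prio_next)
        return true;   /* a target sits in the pushable_tasks list */
    if (rt->nr_total > 1 && rt->next_peek_prio == p_prio)
        return true;   /* the peeked next task ties with p         */
    return false;
}
```

Again, only a sketch of the condition being reviewed; the kernel version also has to requeue p before peeking.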

>
>> +             } else if (!task_running(rq, p))
>> +                     direct_push = 1;
>> +     }
>>
>>       /*
>>        * Only update if the process changes its state from whether it
>>        * can migrate or not.
>>        */
>> -     if ((p->nr_cpus_allowed > 1) == (weight > 1))
>> -             return;
>> -
>> -     rq = task_rq(p);
>> +     if ((old_weight > 1) == (new_weight > 1))
>> +             goto out;
>>
>>       /*
>>        * The process used to be able to migrate OR it can now migrate
>>        */
>> -     if (weight <= 1) {
>> +     if (new_weight <= 1) {
>>               if (!task_current(rq, p))
>>                       dequeue_pushable_task(rq, p);
>>               BUG_ON(!rq->rt.rt_nr_migratory);
>> @@ -1919,6 +1961,13 @@ static void set_cpus_allowed_rt(struct task_struct *p,
>>       }
>>
>>       update_rt_migration(&rq->rt);
>> +
>> +out:
>> +     if (direct_push)
>> +             push_rt_tasks(rq);
>> +
>> +     if (preempt_push)
>> +             resched_curr(rq);
>
> I don't know. This just doesn't seem clean.
>

Thanks for your time; any suggestions would be helpful.

Regards,
Xunlei
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/