Message-ID: <m2edyrnny9.fsf@gmail.com>
Date:   Tue, 12 Jul 2022 08:53:56 +0800
From:   Schspa Shi <schspa@...il.com>
To:     Steven Rostedt <rostedt@...dmis.org>
Cc:     mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, dietmar.eggemann@....com,
        bsegall@...gle.com, mgorman@...e.de, bristot@...hat.com,
        vschneid@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 1/2] sched/rt: fix bad task migration for rt tasks


Steven Rostedt <rostedt@...dmis.org> writes:

> On Sat, 09 Jul 2022 05:32:25 +0800
> Schspa Shi <schspa@...il.com> wrote:
>
>> >> +++ b/kernel/sched/rt.c
>> >> @@ -1998,11 +1998,14 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
>> >>  			 * the mean time, task could have
>> >>  			 * migrated already or had its affinity changed.
>> >>  			 * Also make sure that it wasn't scheduled on its rq.
>> >> +			 * It is possible the task has running for a while,  
>> >
>> > I don't understand the "running for a while" part. That doesn't make sense.
>> >  
>> 
>> When I say "run for a while" I mean as long as the task has
>> run capability, we should check the migrate disabled flag again.
>> 
>> > The only way this can happen is that it was scheduled, set
>> > "migrate_disabled" and then got preempted where it's no longer on the run
>> > queue.  
>> 
>> Yes, it is the only case.
>
> Can we then change the comment, as the "running for a while" is not clear
> to what the issue is, and honestly, sounds misleading.
>
> -- Steve

How about changing this to:

			/*
			 * We had to unlock the run queue. In
			 * the mean time, task could have
			 * migrated already or had its affinity changed.
			 * Also make sure that it wasn't scheduled on its rq.
			 * It is possible the task was scheduled, set
			 * "migrate_disabled" and then got preempted, so we
			 * must check the task's migration-disabled flag
			 * here too.
			 */
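
For context, this is roughly where that comment sits in
find_lock_lowest_rq(), with the migration-disabled re-check this patch
adds. The code below is abbreviated and paraphrased from
kernel/sched/rt.c, so please treat it as a sketch rather than the exact
hunk:

	if (double_lock_balance(rq, lowest_rq)) {
		/* The comment proposed above would go here. */
		if (unlikely(task_rq(task) != rq ||
			     !cpumask_test_cpu(lowest_rq->cpu,
					       &task->cpus_mask) ||
			     task_running(rq, task) ||
			     !rt_task(task) ||
			     /* re-check added by this patch */
			     is_migration_disabled(task) ||
			     !task_on_rq_queued(task))) {
			double_unlock_balance(rq, lowest_rq);
			lowest_rq = NULL;
			break;
		}
	}

The point of re-checking is_migration_disabled() here is that
double_lock_balance() may drop rq->lock; while it is dropped the task
can run, set migrate_disabled and get preempted, so the checks done
before taking the lock no longer guarantee it is safe to push.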

-- 
BRs
Schspa Shi
