Message-ID: <CAPM31RK1xvA6pige9zrLce0JeBDpaPa_WfN-bsR9_ZJDRtn87g@mail.gmail.com>
Date:	Wed, 24 Aug 2011 14:32:35 -0700
From:	Paul Turner <pjt@...gle.com>
To:	Jan H. Schönherr <schnhrr@...tu-berlin.de>
Cc:	Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...e.hu>,
	Dipankar Sarma <dipankar@...ibm.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 2/2] sched: Handle on_list ancestor in list_add_leaf_cfs_rq()

>>
>> I think splitting the function into two parts would make the thing
>> saner, something like:
>>
>>
>>       LIST_HEAD(leaf_queue);
>>
>>       for_each_sched_entity(se) {
>>               if (se->on_rq)
>>                       break;
>>               cfs_rq = cfs_rq_of(se);
>>               enqueue_entity(cfs_rq, se, flags);
>>               flags = ENQUEUE_WAKEUP;
>>               if (cfs_rq->nr_running == 1)
>>                       leaf_add_queue(cfs_rq, &leaf_queue);
>>       }
>>       /* XXX does ->on_rq imply ->on_list ? */
>>       if (se->on_list)
>>               leaf_splice_queue(cfs_rq, &leaf_queue);
>>
>> that splits the add to queue and add queue to list part and avoids the
>> parent_cfs_rq peeking.
>
> Unfortunately that won't work. The problem here is that some of the
> traversed SEs are on_list and others aren't, and the on_list status
> of one SE is independent of that of the other SEs. So, if we don't want
> to remove on_list elements during the traversal, we need to splice the
> collected entries as soon as we find an SE that is on_list.
>
> We might get away with always collecting all entries (removing on_list
> entries temporarily) and splicing them after the loop, but that would
> introduce more work than is normally necessary. And we would have to
> double-check for new concurrency issues...
>

Let me consider this part more carefully.
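
To make sure I'm reading the constraint correctly, here is a rough
user-space toy of what I understand the intended behavior to be: collect
the not-yet-on_list levels on a private queue while walking up, and
splice that queue in just before the first ancestor that is already on
the list, so the child-before-parent ordering of the leaf list is kept.
(This is only a sketch; the names and list helpers are simplified
stand-ins, not the actual kernel code.)

/* Toy user-space model -- not the kernel code. */
#include <stdio.h>
#include <stddef.h>
#include <stdbool.h>

struct list_head { struct list_head *prev, *next; };

#define LIST_HEAD_INIT(name)	{ &(name), &(name) }

static void list_init(struct list_head *h)
{
	h->prev = h->next = h;
}

static void list_add_tail(struct list_head *n, struct list_head *head)
{
	n->prev = head->prev;
	n->next = head;
	head->prev->next = n;
	head->prev = n;
}

/* move all entries of @queue so they sit immediately before @pos */
static void list_splice_before(struct list_head *queue, struct list_head *pos)
{
	struct list_head *first = queue->next, *last = queue->prev;

	if (first == queue)		/* empty queue */
		return;
	first->prev = pos->prev;
	pos->prev->next = first;
	last->next = pos;
	pos->prev = last;
	list_init(queue);
}

struct toy_cfs_rq {
	const char *name;
	bool on_list;
	struct toy_cfs_rq *parent;
	struct list_head leaf_node;	/* link in the global leaf list */
};

static struct list_head leaf_list = LIST_HEAD_INIT(leaf_list);

/* walk child -> root: collect !on_list levels, splice at first on_list one */
static void toy_add_leaf(struct toy_cfs_rq *cfs_rq)
{
	struct list_head queue = LIST_HEAD_INIT(queue);

	for (; cfs_rq; cfs_rq = cfs_rq->parent) {
		if (cfs_rq->on_list) {
			/* collected children end up just before this ancestor */
			list_splice_before(&queue, &cfs_rq->leaf_node);
			return;
		}
		cfs_rq->on_list = true;
		list_add_tail(&cfs_rq->leaf_node, &queue);
	}
	/* no on_list ancestor: append the whole branch at the tail */
	list_splice_before(&queue, &leaf_list);
}

int main(void)
{
	struct toy_cfs_rq root = { .name = "root" };
	struct toy_cfs_rq mid  = { .name = "mid",  .parent = &root };
	struct toy_cfs_rq leaf = { .name = "leaf", .parent = &mid  };
	struct list_head *p;

	toy_add_leaf(&root);	/* root alone goes on the list */
	toy_add_leaf(&leaf);	/* leaf and mid get spliced in before root */

	for (p = leaf_list.next; p != &leaf_list; p = p->next) {
		struct toy_cfs_rq *rq = (struct toy_cfs_rq *)
			((char *)p - offsetof(struct toy_cfs_rq, leaf_node));
		printf("%s\n", rq->name);	/* prints: leaf, mid, root */
	}
	return 0;
}

If that matches what you have in mind, the per-SE splice point is the
only thing that differs from Peter's two-part sketch above.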

>
>> Now I don't really like the above because it's hard to make the code go
>> away in the !FAIR_GROUP case, but maybe we can come up with something
>> for that.
>
> Hmmm... you might want to reconsider my original approach to solve this:
> http://lkml.org/lkml/2011/7/18/86
>
> That might have been the cleanest one in this respect.
>
> Paul Turner did not like the in-order removal it introduced, but it is
> the out-of-order removal that is causing most of the problems.
>

Sorry for the delayed reply -- I owe you some feedback on the updated
versions but have been buried with other work.

What I didn't like about the original approach was specifically the
positional dependence on enqueue/dequeue.  If we can't do the splicing
properly, then I think we want something like
https://lkml.org/lkml/2011/7/18/348 to avoid shooting ourselves in the
foot later.

See: https://lkml.org/lkml/2011/7/19/178 for why this should be cheap.
