Date:   Wed, 31 Jul 2019 18:32:47 +0100
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>
Cc:     Ingo Molnar <mingo@...nel.org>,
        Luca Abeni <luca.abeni@...tannapisa.it>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Valentin Schneider <Valentin.Schneider@....com>,
        Qais Yousef <Qais.Yousef@....com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/5] sched/deadline: Cleanup on_dl_rq() handling

On 7/30/19 9:21 AM, Peter Zijlstra wrote:
> On Tue, Jul 30, 2019 at 08:41:15AM +0200, Juri Lelli wrote:
>> On 29/07/19 18:49, Peter Zijlstra wrote:
>>> On Fri, Jul 26, 2019 at 09:27:55AM +0100, Dietmar Eggemann wrote:
>>>> Remove BUG_ON() in __enqueue_dl_entity() since there is already one in
>>>> enqueue_dl_entity().
>>>>
>>>> Move the check that the dl_se is not on the dl_rq from
>>>> __dequeue_dl_entity() to dequeue_dl_entity() to align with the enqueue
>>>> side and use the on_dl_rq() helper function.
>>>>
>>>> Signed-off-by: Dietmar Eggemann <dietmar.eggemann@....com>
>>>> ---
>>>>  kernel/sched/deadline.c | 8 +++-----
>>>>  1 file changed, 3 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>>>> index 1fa005f79307..a9cb52ceb761 100644
>>>> --- a/kernel/sched/deadline.c
>>>> +++ b/kernel/sched/deadline.c
>>>> @@ -1407,8 +1407,6 @@ static void __enqueue_dl_entity(struct sched_dl_entity *dl_se)
>>>>  	struct sched_dl_entity *entry;
>>>>  	int leftmost = 1;
>>>>  
>>>> -	BUG_ON(!RB_EMPTY_NODE(&dl_se->rb_node));
>>>> -
>>>>  	while (*link) {
>>>>  		parent = *link;
>>>>  		entry = rb_entry(parent, struct sched_dl_entity, rb_node);
>>>> @@ -1430,9 +1428,6 @@ static void __dequeue_dl_entity(struct sched_dl_entity *dl_se)
>>>>  {
>>>>  	struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
>>>>  
>>>> -	if (RB_EMPTY_NODE(&dl_se->rb_node))
>>>> -		return;
>>>> -
>>>>  	rb_erase_cached(&dl_se->rb_node, &dl_rq->root);
>>>>  	RB_CLEAR_NODE(&dl_se->rb_node);
>>>>  
>>>> @@ -1466,6 +1461,9 @@ enqueue_dl_entity(struct sched_dl_entity *dl_se,
>>>>  
>>>>  static void dequeue_dl_entity(struct sched_dl_entity *dl_se)
>>>>  {
>>>> +	if (!on_dl_rq(dl_se))
>>>> +		return;
>>>
>>> Why allow double dequeue instead of WARN?
>>
>> As I was saying to Valentin, it can currently happen that a task has
>> already been dequeued by the throttle path in update_curr_dl(), which
>> dequeue_task_dl() calls before __dequeue_task_dl(). Do you think we
>> should check for this condition before calling into dequeue_dl_entity()?
> 
> Yes, that's what ->dl_throttled is for, right? And !->dl_throttled &&
> !on_dl_rq() is a BUG.
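
(For reference, on_dl_rq() is just the inverse of the RB_EMPTY_NODE() test
that the old code open-coded; roughly, as I read kernel/sched/deadline.c in
this tree, and only as a sketch for the discussion, not part of the patch:

	static inline int on_dl_rq(struct sched_dl_entity *dl_se)
	{
		/* The entity is on the dl_rq iff its rb_node is linked. */
		return !RB_EMPTY_NODE(&dl_se->rb_node);
	}

so "!->dl_throttled && !on_dl_rq()" means the task is not throttled but its
rb_node is not linked into the dl_rq rb tree.)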

OK, I will add the following snippet to the patch.
Although it's easy to provoke situations in which DL tasks are throttled,
I haven't seen throttling happen while the task is being dequeued.

--->8---

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index b6d2f263e0a4..a009762097fa 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1507,8 +1507,7 @@ enqueue_dl_entity(struct sched_dl_entity *dl_se,
 
 static void dequeue_dl_entity(struct sched_dl_entity *dl_se)
 {
-       if (!on_dl_rq(dl_se))
-               return;
+       BUG_ON(!on_dl_rq(dl_se));
 
        __dequeue_dl_entity(dl_se);
 }
@@ -1592,6 +1591,10 @@ static void __dequeue_task_dl(struct rq *rq, struct task_struct *p)
 static void dequeue_task_dl(struct rq *rq, struct task_struct *p, int flags)
 {
        update_curr_dl(rq);
+
+       if (p->dl.dl_throttled)
+               return;
+
        __dequeue_task_dl(rq, p);
