Message-ID: <aa425caf-a080-3b67-22eb-15c5b5bbad85@redhat.com>
Date:   Thu, 29 Nov 2018 12:58:26 -0500
From:   Waiman Long <longman@...hat.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Yongji Xie <elohimes@...il.com>, mingo@...hat.com,
        will.deacon@....com, linux-kernel@...r.kernel.org,
        xieyongji@...du.com, zhangyu31@...du.com, liuqi16@...du.com,
        yuanlinsi01@...du.com, nixun@...du.com, lilin24@...du.com,
        Davidlohr Bueso <dave@...olabs.net>
Subject: Re: [RFC] locking/rwsem: Avoid issuing wakeup before setting the reader waiter to nil

On 11/29/2018 12:27 PM, Peter Zijlstra wrote:
> On Thu, Nov 29, 2018 at 12:02:19PM -0500, Waiman Long wrote:
>> On 11/29/2018 11:06 AM, Peter Zijlstra wrote:
>>> Why; at that point we know the wakeup will happen after, which is all we
>>> require.
>>>
>> Thread 1                                  Thread 2      Thread 3
>>
>>     rwsem_down_read_failed()
>>  raw_spin_lock_irq(&sem->wait_lock);
>>  list_add_tail(&waiter.list, &wait_list);
>>  raw_spin_unlock_irq(&sem->wait_lock);
>>                                                         __rwsem_mark_wake();
>>                                                          wake_q_add();
>>                                           wake_up_q();
>>                                                          waiter->task =
>> NULL; --+
>>  while (true)
>> {                                                                 |
>>  
>> set_current_state(TASK_UNINTERRUPTIBLE);                                     
>> |
>>   if (!waiter.task) //
>> false                                                    |
>>      
>> break;                                                                    |
>>  
>> schedule();                                                                  
>> |
>>  }                                                                       
>> <-----+
>>                                                         wake_up_q(&wake_q);
> I think that thing is horribly whitespace damaged. At least, it's not
> making sense to me.

I am sorry. It was line-wrapped by my email client. The chart was:

Thread 1                       Thread 2      Thread 3
                                             down_write();
rwsem_down_read_failed()
 raw_spin_lock_irq(&sem->wait_lock);
 list_add_tail(&waiter.list, &wait_list);
 raw_spin_unlock_irq(&sem->wait_lock);
                                             __rwsem_mark_wake();
                                              wake_q_add();
                               wake_up_q();
                                              waiter->task = NULL;-+
 while (true) {                                                    |
  set_current_state(TASK_UNINTERRUPTIBLE);                         |
  if (!waiter.task) // false                                       |
      break;                                                       |
  schedule();                                                      |
 }                                                               <-+
                                             wake_up_q(&wake_q);
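
For reference, the Thread 3 column above corresponds roughly to the
reader-wakeup path in __rwsem_mark_wake() in this era's
kernel/locking/rwsem-xadd.c (a paraphrased sketch, not the exact source):

	list_for_each_entry_safe(waiter, tmp, &sem->wait_list, list) {
		struct task_struct *tsk;

		if (waiter->type == RWSEM_WAITING_FOR_WRITE)
			continue;

		tsk = waiter->task;
		wake_q_add(wake_q, tsk);	/* queue tsk for wakeup */
		list_del(&waiter->list);
		/*
		 * As soon as this store is visible, the waiter can see
		 * !waiter.task, break out of its wait loop and go on
		 * its way -- before wake_up_q() has actually run.
		 */
		smp_store_release(&waiter->task, NULL);
	}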

>> OK, I got confused by the thread racing chart shown in the patch. It
>> will be clearer if the clearing of waiter->task is moved down as shown.
>> Otherwise, moving the clearing of waiter->task before wake_q_add() won't
>> make a difference. So the patch could be a valid fix.
>>
>> Still, we are talking about 3 threads racing with each other. The
>> clearing of wake_q.next in wake_up_q() is not atomic, and it is hard to
>> predict the outcome of the concurrent wake_q operations between
>> threads 2 and 3. The essence of my tentative patch is to prevent the
>> concurrent wake_q operations in the first place.
> wake_up_q() should, per the barriers in wake_up_process(), ensure that
> if wake_q_add() fails, there will be a wakeup of that task after that
> point.
>
> So if we put wake_up_q() at the location where wake_up_process() should
> be, it should all work.
>
> The bug in question is that it can happen at any time after
> wake_q_add(), not necessarily at wake_up_q().
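
For reference, the wake_q primitives being discussed look roughly like
this (paraphrased from this era's kernel/sched/core.c; a sketch, so the
details may differ from the exact source):

	void wake_q_add(struct wake_q_head *head, struct task_struct *task)
	{
		struct wake_q_node *node = &task->wake_q;

		/*
		 * Only one waker can claim the task: if ->next is already
		 * non-NULL, the task sits on someone else's wake_q and that
		 * owner's wake_up_q() will issue the wakeup. The cmpxchg()
		 * is a full barrier, which is what orders a failed add
		 * against the eventual wake_up_process().
		 */
		if (cmpxchg(&node->next, NULL, WAKE_Q_TAIL))
			return;

		get_task_struct(task);
		*head->lastp = node;
		head->lastp = &node->next;
	}

	void wake_up_q(struct wake_q_head *head)
	{
		struct wake_q_node *node = head->first;

		while (node != WAKE_Q_TAIL) {
			struct task_struct *task;

			task = container_of(node, struct task_struct, wake_q);
			node = node->next;
			/* plain store: the non-atomic clearing noted above */
			task->wake_q.next = NULL;

			wake_up_process(task);
			put_task_struct(task);
		}
	}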

OK, you convinced me. However, that can still lead to anonymous wakeups,
which can be problematic if they happen in certain places. Should we try
to reduce anonymous wakeups as much as possible?
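
The usual defence is that every sleep site already has to tolerate a
spurious wakeup by re-checking its condition in a loop, along these
lines ("condition" below is a placeholder, not real code), so a stray
wakeup normally costs an extra pass rather than correctness:

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (condition)		/* re-check after every wakeup */
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);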

Cheers,
Longman
