lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <CAGudoHFai7eUj3W-wW_8Cb4HFhTH3a=_37kT-eftP-QZv7zdPg@mail.gmail.com>
Date: Mon, 6 Oct 2025 12:30:01 +0200
From: Mateusz Guzik <mjguzik@...il.com>
To: Hillf Danton <hdanton@...a.com>
Cc: brauner@...nel.org, viro@...iv.linux.org.uk, jack@...e.cz, 
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH] fs: add missing fences to I_NEW handling

On Mon, Oct 6, 2025 at 9:21 AM Hillf Danton <hdanton@...a.com> wrote:
>
> On Mon, 6 Oct 2025 03:16:43 +0200 Mateusz Guzik wrote:
> > That fence synchronizes against threads which went to sleep.
> >
> > In the example I'm providing this did not happen.
> >
> >   193 static inline void wait_on_inode(struct inode *inode)
> >   194 {
> >   195         wait_var_event(inode_state_wait_address(inode, __I_NEW),
> >   196                       !(READ_ONCE(inode->i_state) & I_NEW));
> >
> >   303 #define wait_var_event(var, condition)                                  \
> >   304 do {                                                                    \
> >   305         might_sleep();                                                  \
> >   306         if (condition)                                                  \
> >   307                 break;                                                  \
> >
> > I_NEW is tested here without any locks or fences.
> >
> Thanks, got it, but given the comment on the current memory barrier in
> unlock_new_inode(), why did peterZ/Av leave such a big hole behind?
>

My guess is nobody is perfect -- mistakes happen.

> The condition check in waitqueue_active() matches waitqueue_active() in
> __wake_up_bit(), and both make it run fast on both the waiter and the
> waker sides, no?

As it happens, the commentary above wait_var_event() explicitly says
you want an acquire fence:
  299  * The condition should normally use smp_load_acquire() or a similarly
  300  * ordered access to ensure that any changes to memory made before the
  301  * condition became true will be visible after the wait completes.

The commentary above waitqueue_active() says you want an even
stronger fence (smp_mb()) for that one:
   104  * Use either while holding wait_queue_head::lock or when used for wakeups
   105  * with an extra smp_mb() like::
   106  *
   107  *      CPU0 - waker                    CPU1 - waiter
   108  *
   109  *                                      for (;;) {
   110  *      @cond = true;                     prepare_to_wait(&wq_head, &wait, state);
   111  *      smp_mb();                         // smp_mb() from set_current_state()
   112  *      if (waitqueue_active(wq_head))    if (@cond)
   113  *        wake_up(wq_head);                 break;
   114  *                                        schedule();
   115  *                                      }
   116  *                                      finish_wait(&wq_head, &wait);
