Message-ID: <20251006072136.8236-1-hdanton@sina.com>
Date: Mon,  6 Oct 2025 15:21:08 +0800
From: Hillf Danton <hdanton@...a.com>
To: Mateusz Guzik <mjguzik@...il.com>
Cc: brauner@...nel.org,
	viro@...iv.linux.org.uk,
	jack@...e.cz,
	linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH] fs: add missing fences to I_NEW handling

On Mon, 6 Oct 2025 03:16:43 +0200 Mateusz Guzik wrote:
> On Mon, Oct 6, 2025 at 2:57 AM Hillf Danton <hdanton@...a.com> wrote:
> > On Mon,  6 Oct 2025 01:15:26 +0200 Mateusz Guzik wrote:
> > > Suppose two CPUs race an inode hash lookup (say ilookup5())
> > > against unlock_new_inode().
> > >
> > > In principle the latter can clear the I_NEW flag before prior stores
> > > into the inode were made visible.
> > >
> > Given the difficulty I have following along here, could you specify why
> > the current memory barrier [1] in unlock_new_inode() is not enough?
> >
> That fence synchronizes with threads that went to sleep.
> 
> In the example I'm providing this did not happen.
> 
>   193 static inline void wait_on_inode(struct inode *inode)
>   194 {
>   195         wait_var_event(inode_state_wait_address(inode, __I_NEW),
>   196                       !(READ_ONCE(inode->i_state) & I_NEW));
> 
>   303 #define wait_var_event(var, condition)                                  \
>   304 do {                                                                    \
>   305         might_sleep();                                                  \
>   306         if (condition)                                                  \
>   307                 break;                                                  \
> 
> I_NEW is tested here without any locks or fences.
> 
Thanks, got it. But given the comment on the current memory barrier in
unlock_new_inode(), why did PeterZ/Al leave such a wide window open?
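
For my own understanding, here is a minimal userspace analogue of the
race (hypothetical names; "data" stands for the inode fields written
before unlock_new_inode(), "state" for inode->i_state, and C11 atomics
stand in for the kernel's barrier primitives, assuming the fix is an
acquire/release pairing):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define I_NEW 1u

static int data;                           /* the new inode's fields */
static _Atomic unsigned int state = I_NEW; /* inode->i_state analogue */

static void *waker(void *arg)
{
	(void)arg;
	data = 42;                         /* stores into the new inode */
	/* Release ordering: make the store above visible before I_NEW
	 * is observed clear.  With a relaxed clear instead, the
	 * waiter's lockless fast path could see I_NEW clear yet read
	 * stale data on a weakly ordered CPU. */
	atomic_fetch_and_explicit(&state, ~I_NEW, memory_order_release);
	return NULL;
}

static void *waiter(void *arg)
{
	(void)arg;
	/* Analogue of wait_var_event()'s unlocked condition check: spin
	 * until I_NEW is clear, with acquire ordering pairing the
	 * release above. */
	while (atomic_load_explicit(&state, memory_order_acquire) & I_NEW)
		;
	printf("data = %d\n", data);       /* always 42 with acq/rel */
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;
	pthread_create(&t1, NULL, waiter, NULL);
	pthread_create(&t2, NULL, waker, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}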

The lockless condition check in wait_var_event() mirrors the lockless
waitqueue_active() check in __wake_up_bit(), and together they keep the
fast path cheap on both the waiter and the waker sides, no?
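
Roughly, the symmetry I mean, as a sketch (again userspace with
hypothetical names; "nr_waiters" stands in for waitqueue_active(), and
the seq_cst fence for the smp_mb() that the comment in
unlock_new_inode() refers to):

#include <stdatomic.h>

#define I_NEW 1u

static _Atomic unsigned int state = I_NEW; /* inode->i_state analogue */
static _Atomic int nr_waiters;             /* waitqueue_active() analogue */

static void slow_wake_up(void) { /* wake queued waiters (elided) */ }
static void slow_wait(void)    { /* queue and sleep (elided) */ }

static void waker_side(void)
{
	atomic_fetch_and_explicit(&state, ~I_NEW, memory_order_relaxed);
	/* Full fence between clearing the flag and the unlocked waiter
	 * check, mirroring smp_mb() before waitqueue_active(). */
	atomic_thread_fence(memory_order_seq_cst);
	if (atomic_load_explicit(&nr_waiters, memory_order_relaxed))
		slow_wake_up();            /* slow path only if queued */
}

static void waiter_side(void)
{
	/* wait_var_event()'s fast path: no lock, no sleep when the flag
	 * is already clear. */
	if (!(atomic_load_explicit(&state, memory_order_acquire) & I_NEW))
		return;
	slow_wait();                       /* otherwise queue and sleep */
}

int main(void)
{
	waker_side();
	waiter_side();
	return 0;
}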
