Message-ID: <20201001192958.GH3421308@ZenIV.linux.org.uk>
Date:   Thu, 1 Oct 2020 20:29:58 +0100
From:   Al Viro <viro@...iv.linux.org.uk>
To:     Alan Stern <stern@...land.harvard.edu>
Cc:     "Paul E. McKenney" <paulmck@...nel.org>, parri.andrea@...il.com,
        will@...nel.org, peterz@...radead.org, boqun.feng@...il.com,
        npiggin@...il.com, dhowells@...hat.com, j.alglave@....ac.uk,
        luc.maranget@...ia.fr, akiyks@...il.com, dlustig@...dia.com,
        joel@...lfernandes.org, linux-kernel@...r.kernel.org,
        linux-arch@...r.kernel.org
Subject: Re: Litmus test for question from Al Viro

On Thu, Oct 01, 2020 at 02:39:25PM -0400, Alan Stern wrote:

> The problem with a plain write is that it isn't guaranteed to be atomic 
> in any sense.  In principle, the compiler could generate code for CPU1 
> which would write 0 to V->A more than once.
> 
> Although I strongly doubt that any real compiler would actually do this, 
> the memory model does allow for it, out of an overabundance of caution.  

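Spelling the concern out: with a plain store the compiler is, in principle,
allowed to emit more than one write to the location -- a hypothetical
lowering, not something any sane compiler produces:

        V->A = tmp;     /* location reused as scratch; transient value
                           briefly visible to the other CPU */
        V->A = 0;       /* final value */

whereas

        WRITE_ONCE(V->A, 0);

rules out such replayed (or torn) stores.
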
Point...  OK, not a problem - actually there will be WRITE_ONCE() for other
reasons; the real-life (pseudo-)code is
        spin_lock(&file->f_lock);
        to_free = NULL;
        head = file->f_ep;
        if (head->first == &epitem->fllink && epitem->fllink.next == NULL) {
                /* the set will go empty */
                file->f_ep = NULL;
                if (!is_file_epoll(file)) {
                        /*
                         * not embedded into struct eventpoll; we want it
                         * freed unless it's on the check list, in which
                         * case we leave it for reverse path check to free.
                         */
                        v = container_of(head, struct ep_head, epitems);
                        if (!smp_load_acquire(&v->next))
                                to_free = v;
                }
        }
        hlist_del_rcu(&epitem->fllink);
        spin_unlock(&file->f_lock);
        kfree(to_free);
and hlist_del_rcu() will use WRITE_ONCE() to store the updated forward links.
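
For reference, the list primitives involved (include/linux/list.h and
include/linux/rculist.h) look roughly like this -- note the WRITE_ONCE()
stores on the forward links:

        static inline void __hlist_del(struct hlist_node *n)
        {
                struct hlist_node *next = n->next;
                struct hlist_node **pprev = n->pprev;

                /* unlink: the predecessor's forward link is a marked store */
                WRITE_ONCE(*pprev, next);
                if (next)
                        WRITE_ONCE(next->pprev, pprev);
        }

        static inline void hlist_del_rcu(struct hlist_node *n)
        {
                __hlist_del(n);
                /* poison the backlink; RCU readers never follow pprev */
                WRITE_ONCE(n->pprev, LIST_POISON2);
        }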

That goes into ep_remove(); the CPU1 side of the litmus test is the final
(set-emptying) call.  The CPU2 side is the list-traversal step in
reverse_path_check() and in clear_tfile_check_list():
        // under rcu_read_lock()
        to_free = head;
        node = rcu_dereference(hlist_first_rcu(&head->epitems));
        if (node) {
                epitem = hlist_entry(node, struct epitem, fllink);
                spin_lock(&epitem->file->f_lock);
                if (!hlist_empty(&head->epitems))
                        to_free = NULL;
                head->next = NULL;
                spin_unlock(&epitem->file->f_lock);
        }
        kfree(to_free);
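
Since the Subject: asks for a litmus test: a compressed LKMM model of the
"exactly one side does the kfree()" property might look like the sketch
below.  The names are mine, and the state is collapsed into two flags
(list_nonempty standing in for !hlist_empty(&head->epitems), on_check_list
for head->next != NULL); it is not Alan's actual test:

        C epoll-single-free

        {
                list_nonempty=1;
                on_check_list=1;
        }

        P0(spinlock_t *lock, int *list_nonempty, int *on_check_list, int *freed0)
        {
                int r0;

                spin_lock(lock);
                r0 = smp_load_acquire(on_check_list);   /* if (!smp_load_acquire(&v->next)) */
                WRITE_ONCE(*list_nonempty, 0);          /* hlist_del_rcu(): set goes empty */
                spin_unlock(lock);
                if (r0 == 0)
                        WRITE_ONCE(*freed0, 1);         /* kfree(to_free) */
        }

        P1(spinlock_t *lock, int *list_nonempty, int *on_check_list, int *freed1)
        {
                int r1;

                spin_lock(lock);
                r1 = READ_ONCE(*list_nonempty);         /* !hlist_empty() recheck */
                WRITE_ONCE(*on_check_list, 0);          /* head->next = NULL */
                spin_unlock(lock);
                if (r1 == 0)
                        WRITE_ONCE(*freed1, 1);         /* kfree(to_free) */
        }

        exists (freed0=1 /\ freed1=1)

herd7 should report the exists clause unreachable: the two critical sections
are serialized by f_lock, and whichever one runs second observes the other
side's store and skips its kfree().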
