Message-ID: <f855a988-d5e9-4f5a-8b49-891828367ed7@amd.com>
Date: Tue, 25 Mar 2025 20:19:26 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Oleg Nesterov <oleg@...hat.com>, Eric Van Hensbergen <ericvh@...nel.org>,
	Latchesar Ionkov <lucho@...kov.net>, Dominique Martinet
	<asmadeus@...ewreck.org>, Christian Schoenebeck <linux_oss@...debyte.com>
CC: Mateusz Guzik <mjguzik@...il.com>, syzbot
	<syzbot+62262fdc0e01d99573fc@...kaller.appspotmail.com>,
	<brauner@...nel.org>, <dhowells@...hat.com>, <jack@...e.cz>,
	<jlayton@...nel.org>, <linux-fsdevel@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>, <netfs@...ts.linux.dev>,
	<swapnil.sapkal@....com>, <syzkaller-bugs@...glegroups.com>,
	<viro@...iv.linux.org.uk>, <v9fs@...ts.linux.dev>
Subject: Re: [syzbot] [netfs?] INFO: task hung in netfs_unbuffered_write_iter

Hello Oleg, Dominique,

On 3/25/2025 6:34 PM, Oleg Nesterov wrote:
> On 03/25, K Prateek Nayak wrote:
>>
>> I chased this down to p9_client_rpc() in net/9p/client.c, specifically:
>>
>>          err = c->trans_mod->request(c, req);
>>          if (err < 0) {
>>                  /* write won't happen */
>>                  p9_req_put(c, req);
>>                  if (err != -ERESTARTSYS && err != -EFAULT)
>>                          c->status = Disconnected;
>>                  goto recalc_sigpending;
>>          }
>>
>> c->trans_mod->request() calls p9_fd_request() in net/9p/trans_fd.c
>> which basically does a p9_fd_poll().
> 
> Again, I know nothing about 9p... but if p9_fd_request() returns
> an err < 0, then it comes from p9_conn->err and p9_fd_request()
> does nothing else.
> 
>> Previously, the above would fail with err as -EIO which would
>> cause the client to "Disconnect" and the retry logic would make
>> progress. Now however, the err returned is -ERESTARTSYS
> 
> OK... So p9_conn->err = -ERESTARTSYS was set by p9_conn_cancel()
> called by p9_write_work() because pipe_write() returns ERESTARTSYS?
> 
> But I don't understand -EIO with the reverted commit aaec5a95d59615

Okay, it is a long explanation. These are the relevant bits of the
traces with aaec5a95d59615 reverted on mainline, and I'll break them
down to the best of my abilities:

    kworker/100:1-1803    [100] .....   286.618784: p9_write_work: Data write wait 32770
    kworker/100:1-1803    [100] .....   286.618792: p9_write_work: Data written 28672
    kworker/100:1-1803    [100] .....   286.618793: p9_fd_poll: p9_fd_poll rd poll
    kworker/100:1-1803    [100] .....   286.618793: p9_fd_poll: p9_fd_request wr poll
    kworker/100:1-1803    [100] .....   286.618793: p9_fd_poll: p9_fd_poll rd poll
    kworker/100:1-1803    [100] .....   286.618793: p9_fd_poll: p9_fd_request wr poll
    kworker/100:1-1803    [100] .....   286.618795: p9_read_work: Data read wait 7
    kworker/100:1-1803    [100] .....   286.618796: p9_read_work: Data read 7
    kworker/100:1-1803    [100] .....   286.618796: p9_fd_poll: p9_fd_poll rd poll
    kworker/100:1-1803    [100] .....   286.618796: p9_fd_poll: p9_fd_request wr poll
    kworker/100:1-1803    [100] .....   286.618797: p9_fd_poll: p9_fd_poll rd poll
    kworker/100:1-1803    [100] .....   286.618797: p9_fd_poll: p9_fd_request wr poll
    kworker/100:1-1803    [100] .....   286.618797: p9_read_work: Data read wait 55
    kworker/100:1-1803    [100] .....   286.618798: p9_read_work: Data read 55
            repro-4102    [100] .....   286.618802: p9_client_write: p9_client_rpc done
            repro-4102    [100] .....   286.618804: p9_client_write: p9_pdup
            repro-4102    [100] .....   286.618804: p9_client_write: iter revert

   ==== The above bits are common between the two traces ====

            repro-4102    [100] .....   286.618805: p9_client_write: p9_client_rpc
            repro-4102    [100] .....   286.618817: p9_fd_request: p9_fd_request
            repro-4102    [100] .....   286.618818: p9_fd_request: p9_fd_request EPOLL
            repro-4102    [100] .....   286.618818: p9_fd_poll: p9_fd_poll rd poll
            repro-4102    [100] .....   286.618818: p9_fd_poll: p9_fd_request wr poll
    kworker/100:1-1803    [100] .....   286.618822: p9_fd_poll: p9_fd_poll rd poll
    kworker/100:1-1803    [100] .....   286.618822: p9_fd_poll: p9_fd_request wr poll
    kworker/100:1-1803    [100] .....   286.618823: p9_read_work: Data read wait 7
    kworker/100:1-1803    [100] .....   286.618825: p9_read_work: Data read 7
    kworker/100:1-1803    [100] .....   286.618825: p9_read_work: p9_tag_lookup error

                                                                  ^^^^^^^^^^^^^^^^^^^
   ==== In this case the  read succeeds but p9_tag_lookup() later will error out ====
   ==== Connection is cancelled with m->err as -EIO                              ====

    kworker/100:1-1803    [100] .....   286.618825: p9_read_work: Connection cancel
    kworker/100:1-1803    [100] ...1.   286.618826: p9_conn_cancel: Connection cancelled (-5)
            repro-4102    [100] .....   286.618831: p9_client_write: p9_client_rpc done
            repro-4102    [100] .....   286.618832: p9_client_write: p9_client_rpc error (-5)

                                                                                          ^^^
   ==== This error from kworker/100 is received by the reproducer                         ====

            repro-4102    [100] .....   286.618833: v9fs_issue_write: Issue write done 2 err(-5)
            repro-4102    [100] .....   286.618833: netfs_write_subrequest_terminated: Collector woken up from netfs_write_subrequest_terminated
            repro-4102    [100] .....   286.618834: netfs_wake_write_collector: Wake collector
            repro-4102    [100] .....   286.618834: netfs_wake_write_collector: Queuing collector work
            repro-4102    [100] .....   286.618837: v9fs_issue_write: Issue write subrequest terminated 2
            repro-4102    [100] .....   286.618838: netfs_unbuffered_write: netfs_unbuffered_write
            repro-4102    [100] .....   286.618838: netfs_end_issue_write: netfs_end_issue_write
            repro-4102    [100] .....   286.618838: netfs_end_issue_write: Write collector need poke 0
            repro-4102    [100] .....   286.618839: netfs_unbuffered_write_iter_locked: Waiting on NETFS_RREQ_IN_PROGRESS!

   ==== A reissue mechanism triggers below ====

  kworker/u1037:2-3232    [096] .....   286.618845: netfs_retry_writes: netfs_reissue_write 1
  kworker/u1037:2-3232    [096] .....   286.618846: v9fs_issue_write: v9fs_issue_write
  kworker/u1037:2-3232    [096] .....   286.618847: p9_client_write: p9_client_rpc
  kworker/u1037:2-3232    [096] .....   286.618873: p9_fd_request: p9_fd_request
  kworker/u1037:2-3232    [096] ...1.   286.618874: p9_fd_request: p9_fd_request error
  kworker/u1037:2-3232    [096] .....   286.618874: p9_client_rpc: Client disconnected (no write)

   ==== Connection with client is disconnected ====

  kworker/u1037:2-3232    [096] .....   286.618877: p9_client_write: p9_client_rpc done
  kworker/u1037:2-3232    [096] .....   286.618880: p9_client_write: p9_client_rpc error (-5)
  kworker/u1037:2-3232    [096] .....   286.618881: v9fs_issue_write: Issue write done 0 err(-5)
  kworker/u1037:2-3232    [096] .....   286.618882: netfs_write_subrequest_terminated: Collector woken up from netfs_write_subrequest_terminated
  kworker/u1037:2-3232    [096] .....   286.618882: netfs_wake_write_collector: Wake collector
  kworker/u1037:2-3232    [096] .....   286.618882: netfs_wake_write_collector: Queuing collector work
  kworker/u1037:2-3232    [096] .....   286.618887: v9fs_issue_write: Issue write subrequest terminated 0
  kworker/u1037:2-3232    [096] .....   286.618890: netfs_write_collection_worker: Write collect clearing and waking up!

   ==== The retry worker will clear the NETFS_RREQ_IN_PROGRESS bit and issue a wakeup ====

            repro-4102    [100] .....   286.619051: p9_client_prepare_req: p9_client_prepare_req eio 1
            repro-4102    [100] .....   286.619052: p9_client_rpc: p9_client_rpc early err return
            repro-4108    [240] .....   286.621325: p9_fd_create: Client connected

   ==== The reproducer will then re-establish the connection and the pattern repeats ====

---

So the pattern with the optimization reverted is:

o The RPC thread reads "7" bytes.
o p9_tag_lookup() fails, giving -EIO.
o The connection is cancelled from the RPC thread.

o The reproducer thread receives the packet and finds -EIO.
o The reproducer starts a Disconnect.
o The reproducer wakes up the collector and sleeps on the
   NETFS_RREQ_IN_PROGRESS bit.

o A retry worker wakes up.
o It finds the connection has been disconnected.
o It wakes up the reproducer thread.

o The reproducer wakes up and re-establishes the connection.

I don't know the 9p bits well enough to understand the intermediate
steps, but this is the rough flow; a paraphrased sketch of where the
-EIO comes from is below.
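
For reference, this is roughly where I believe the -EIO comes from. It
is a paraphrased sketch of the tag-lookup error handling in
p9_read_work() in net/9p/trans_fd.c based on my reading, so the exact
lines may differ on your tree:

        m->rreq = p9_tag_lookup(m->client, m->rc.tag);
        if (!m->rreq || (m->rreq->status != REQ_STATUS_SENT)) {
                /* "p9_tag_lookup error" in the trace above */
                p9_debug(P9_DEBUG_ERROR, "Unexpected packet tag %d\n",
                         m->rc.tag);
                err = -EIO;
                goto error;
        }
        ...
error:
        /* Cancels the connection with m->err set to -EIO, which is
         * the error the reproducer then sees and recovers from. */
        p9_conn_cancel(m, err);
        clear_bit(Rworksched, &m->wsched);

Since the cancel happens with -EIO rather than -ERESTARTSYS, the
retried RPC hits the "Disconnected" branch of p9_client_rpc() quoted at
the top, and the reproducer can tear down and re-establish the
connection.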

---

Now for the traces on vanilla mainline:

     kworker/31:1-1723    [031] .....    95.011362: p9_write_work: Data write wait 32770
     kworker/31:1-1723    [031] .....    95.011372: p9_write_work: Data written 28672
     kworker/31:1-1723    [031] .....    95.011373: p9_fd_poll: p9_fd_poll rd poll
     kworker/31:1-1723    [031] .....    95.011373: p9_fd_poll: p9_fd_request wr poll
     kworker/31:1-1723    [031] .....    95.011374: p9_fd_poll: p9_fd_poll rd poll
     kworker/31:1-1723    [031] .....    95.011374: p9_fd_poll: p9_fd_request wr poll
     kworker/31:1-1723    [031] .....    95.011374: p9_read_work: Data read wait 7
     kworker/31:1-1723    [031] .....    95.011375: p9_read_work: Data read 7
     kworker/31:1-1723    [031] .....    95.011375: p9_fd_poll: p9_fd_poll rd poll
     kworker/31:1-1723    [031] .....    95.011375: p9_fd_poll: p9_fd_request wr poll
     kworker/31:1-1723    [031] .....    95.011376: p9_read_work: Data read wait 55
     kworker/31:1-1723    [031] .....    95.011376: p9_read_work: Data read 55
            repro-4076    [031] .....    95.011381: p9_client_rpc: Wait event killable (0)
            repro-4076    [031] .....    95.011382: p9_client_rpc: Check error (0)
            repro-4076    [031] .....    95.011382: p9_client_write: p9_client_rpc done
            repro-4076    [031] .....    95.011382: p9_client_write: p9_pdup
            repro-4076    [031] .....    95.011382: p9_client_write: iter revert

   ==== The above bits are the same as in the previous trace ====

            repro-4076    [031] .....    95.011383: p9_client_write: p9_client_rpc
            repro-4076    [031] .....    95.011393: p9_fd_request: p9_fd_request
            repro-4076    [031] .....    95.011394: p9_fd_request: p9_fd_request EPOLL
            repro-4076    [031] .....    95.011394: p9_fd_poll: p9_fd_poll rd poll
            repro-4076    [031] .....    95.011394: p9_fd_poll: p9_fd_request wr poll
            repro-4076    [031] .....    99.731970: p9_client_rpc: Wait event killable (-512)

   ==== The wait event here fails with -ERESTARTSYS ====

            repro-4076    [031] .....    99.731979: p9_client_rpc: Flushed (-512)
            repro-4076    [031] .....    99.731983: p9_client_rpc: Final error (-512)
            repro-4076    [031] .....    99.731983: p9_client_write: p9_client_rpc done
            repro-4076    [031] .....    99.731984: p9_client_write: p9_client_rpc error (-512)
            repro-4076    [031] .....    99.731985: v9fs_issue_write: Issue write done 2 err(-512)
            repro-4076    [031] .....    99.731987: netfs_write_subrequest_terminated: Collector woken up from netfs_write_subrequest_terminated
            repro-4076    [031] .....    99.731987: netfs_wake_write_collector: Wake collector
            repro-4076    [031] .....    99.731988: netfs_wake_write_collector: Queuing collector work
            repro-4076    [031] .....    99.731991: v9fs_issue_write: Issue write subrequest terminated 2
            repro-4076    [031] .....    99.731991: netfs_unbuffered_write: netfs_unbuffered_write
            repro-4076    [031] .....    99.731991: netfs_end_issue_write: netfs_end_issue_write
            repro-4076    [031] .....    99.731992: netfs_end_issue_write: Write collector need poke 0
            repro-4076    [031] .....    99.731993: netfs_unbuffered_write_iter_locked: Waiting on NETFS_RREQ_IN_PROGRESS!

   ==== The reproducer waits on bit NETFS_RREQ_IN_PROGRESS ====

  kworker/u1028:3-3860    [030] .....    99.732028: netfs_retry_writes: netfs_reissue_write 1
  kworker/u1028:3-3860    [030] .....    99.732030: v9fs_issue_write: v9fs_issue_write
  kworker/u1028:3-3860    [030] .....    99.732031: p9_client_write: p9_client_rpc
  kworker/u1028:3-3860    [030] .....    99.732051: p9_fd_request: p9_fd_request
  kworker/u1028:3-3860    [030] .....    99.732052: p9_fd_request: p9_fd_request EPOLL
  kworker/u1028:3-3860    [030] .....    99.732052: p9_fd_poll: p9_fd_poll rd poll
  kworker/u1028:3-3860    [030] .....    99.732053: p9_fd_poll: p9_fd_request wr poll

   ==== The retry worker seemingly gets stuck post p9_fd_poll() waiting for wakeup ====
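
If I am reading p9_client_rpc() in net/9p/client.c right, once
p9_fd_request() has queued the request, the caller ends up sleeping
roughly here (again paraphrased from my reading, so details may differ
on your tree):

        /* Sleep until the read worker marks the request as received,
         * or a connection error flushes it and wakes the waiter. */
        err = wait_event_killable(req->wq,
                                  READ_ONCE(req->status) >= REQ_STATUS_RCVD);

Unlike the reproducer, the retry kworker will not normally see a fatal
signal, so if no response and no connection error ever arrives, nothing
wakes it up, which would match the hang reported here.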

---

That is my analysis so far.

> 
> Oleg.
> 

-- 
Thanks and Regards,
Prateek

