Message-ID: <CALyZvKyFQfPFj-Dqy7ep+MzAv=LdwG+1meQOesbAL=8E3EffZA@mail.gmail.com>
Date:   Thu, 8 Sep 2022 02:54:59 +0100
From:   Jason Vas Dias <jason.vas.dias@...il.com>
To:     Jason Vas Dias <jason.vas.dias@....ie>
Cc:     David Howells <dhowells@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: SIGIO with si_code==POLL_HUP on read pipe FD with no writers?

OK, I raised bug https://bugzilla.kernel.org/show_bug.cgi?id=216458
about this and submitted this patch, which I have tested by
incorporating it into the Fedora 36 kernel.spec build, updated
to v5.19.7.

My test program now runs fine, and the whole Fedora system & GUI
comes up OK under the patched kernel - so it appears I have not
broken anything so far. I will test more thoroughly tomorrow with
any kernel / POSIX / I/O test suites I can find - can anyone
recommend any?

AFAICS the patch does not break any POSIX rules; rather, it better
honors the ones concerning SIGIO & POLL_HUP, as described in the
latest Linux and POSIX manual pages.

I can post the RPMs to my Google Drive account and share them
if anyone is interested.

Please review kernel bug 216458 & consider including this patch
in a future kernel release - it works well, causes no harm, and
makes I/O programming with SIGIO & pipes much more robust
and straightforward.
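
For illustration, this is roughly how a reader's SIGIO handler could
look under the patched kernel - a minimal sketch, assuming a handler
installed with SA_SIGINFO (the names 'on_sigio' and 'pipe_hup' are
mine, purely illustrative, not from the patch):

	#define _GNU_SOURCE	/* for the POLL_* si_code constants */
	#include <signal.h>

	static volatile sig_atomic_t pipe_hup;

	/* With the patch, si_code tells "data available" (POLL_IN)
	 * apart from "no writers left" (POLL_HUP): */
	static void on_sigio(int sig, siginfo_t *si, void *uctx)
	{
		(void)sig; (void)uctx;
		if (si->si_code == POLL_HUP)
			pipe_hup = 1;	/* write end closed: treat as EOF */
		/* si_code == POLL_IN : data is ready to read() */
	}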

Thanks, Best Regards,
Jason Vas Dias

On 05/09/2022, Jason Vas Dias <jason.vas.dias@....ie> wrote:
>
> Good day -
>
>     To the last committer & maintainers of 'linux/fs/pipe.c' :
>
>     Why isn't a SIGIO signal with POLL_HUP in 'si_code'
>     raised on the read end of a pipe whose file descriptor
>     owns the signal via O_ASYNC, F_SETOWN{,_EX}, and F_SETSIG?
>
>     All that happens when the write end of a pipe is
>     closed is that a SIGIO gets raised with the
>     (siginfo_t *)->si_code set to 1 ( POLL_IN );
>     an ioctl( fd, FIONREAD, &sz ) then returns with
>     sz==0 for that fd, and a read() on that fd
>     would then return 0.
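>
>     (A minimal sketch of the setup that exhibits this, for
>      reference - the exact fcntl sequence here is illustrative,
>      and assumes the SIGIO handler was installed with SA_SIGINFO:)
>
> 	#include <fcntl.h>	/* F_SETOWN, F_SETSIG, O_ASYNC */
> 	#include <unistd.h>	/* getpid() */
>
> 	/* ask for SIGIO with siginfo on the pipe's read-end fd: */
> 	fcntl(fd, F_SETOWN, getpid());	/* deliver the signal to this pid */
> 	fcntl(fd, F_SETSIG, SIGIO);	/* nonzero: fill in si_fd/si_code */
> 	fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC); /* enable SIGIO */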
>
>     Looking at pipe.c, the situation of no pipe writers
>     is detected and revents is set to contain EPOLLHUP
>     ONLY in pipe_poll(), not in pipe_read().
>
>     pipe_read() (in version 5.19) DOES detect the
>     no writers situation :
>
>     fs/pipe.c, @line 255:
> 	for (;;) {
> 		/* Read ->head with a barrier vs post_one_notification() */
>         ...
>     @line 341:
> 		if (!pipe->writers)
> 			break;
>         ...
>
>     It would be quite easy to add, after the pipe_read() loop
>     exits, a clause like the one in pipe_poll() @ line 677:
> 	mask = 0;
> 	if (filp->f_mode & FMODE_READ) {
> 		if (!pipe_empty(head, tail))
> 			mask |= EPOLLIN | EPOLLRDNORM;
> 		if (!pipe->writers && filp->f_version != pipe->w_counter)
> 			mask |= EPOLLHUP;
> 	}
>
>     which does something like:
>
> 	if ( !pipe->writers )
> 		kill_fasync(&pipe->fasync_readers, SIGIO, POLL_HUP);
>
>
>     It is not nice to have to GUESS that, just because
>         ioctl(fd, FIONREAD, &sz)
>     returns with sz==0 immediately after a POLL_IN event,
>     the pipe in fact has no writers - especially as the
>     signal could be blocked when the ioctl call happens.
>
>     And if one never happens to attempt a read that returns
>     0 bytes from the pipe, one would never know that no writers
>     exist on it, and could pause() forever waiting for a signal.
>     And why should I have to put the FD into O_NONBLOCK mode
>     (which mine was not in) and attempt a read just to have it
>     return 0 bytes, when I already know 0 bytes are available to read?
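>
>     (For illustration, the guessing a handler is currently forced
>      into looks something like this - only a sketch; 'maybe_hup'
>      is my own name:)
>
> 	#include <sys/ioctl.h>	/* ioctl(), FIONREAD */
>
> 	/* on a POLL_IN SIGIO, with si_fd filled in via F_SETSIG: */
> 	if (si->si_code == POLL_IN) {
> 		int sz = 0;
> 		if (ioctl(si->si_fd, FIONREAD, &sz) == 0 && sz == 0)
> 			maybe_hup = 1;	/* GUESS: no writers left? */
> 	}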
>
>     OR, maybe in pipe_write(), @ line 595, where it does:
>
>     	kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
>
>     (which is probably where the FINAL POLL_IN signal originates)
>     it could instead do:
>         kill_fasync(&pipe->fasync_readers, SIGIO,
>                     ((ret == 0) && (pipe->writers <= 1))
>                     ? POLL_HUP
>                     : POLL_IN);
>
>      It seems there are several easy ways to fix this, and
>      I believe it would make processes that want to read
>      pipes using SIGIO much more robust & simpler to code.
>
>      Processes would still be able to rely on read() returning
>      0 in this case - but please, why can't SIGIO-using processes
>      also get a definitive SIGIO with si_code==POLL_HUP, not POLL_IN?
>
>      There appears to be similar logic that does send
>      a final POLL_HUP SIGIO when the remote write end of
>      a readable socket closes - why not for pipes?
>
>      And the sigaction manual page states:
>      "
>        The following values can be placed in si_code for a
>        SIGIO/SIGPOLL signal:
>
>            POLL_IN
>                   Data input available.
>
>            POLL_OUT
>                   Output buffers available.
>
>            POLL_MSG
>                   Input message available.
>
>            POLL_ERR
>                   I/O error.
>
>            POLL_PRI
>                   High priority input available.
>
>            POLL_HUP
>                   Device disconnected.
>      "
>      which suggests that these events should be raised for all devices -
>      it does not mention any special case for pipe file descriptors,
>      so readers would reasonably expect a POLL_HUP event to be sent
>      on the read end of a pipe with no writers.
>
>      Please do something about this - or would a patch from me
>      that fixes it be likely to be considered?
>
> Thanks for any responses & Best Regards,
> Jason Vas Dias

Attachment: "fs_pipe_c_pipe_fd_sigio_poll_hup.patch" (text/x-patch, 2750 bytes)
