Date: Fri, 10 Nov 2023 23:54:48 +0900
From: "Jong eon Park" <jongeon.park@...sung.com>
To: "'Jakub Kicinski'" <kuba@...nel.org>
Cc: "'Paolo Abeni'" <pabeni@...hat.com>, "'David S. Miller'"
	<davem@...emloft.net>, "'Eric Dumazet'" <edumazet@...gle.com>,
	<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>, "'Dong ha Kang'"
	<dongha7.kang@...sung.com>
Subject: RE: [PATCH] netlink: introduce netlink poll to resolve fast return
 issue



On Tuesday, Nov 7, 2023 at 08:48 Jakub Kicinski wrote:
> Why does the wake up happen in the first place?
> I don't see anything special in the netlink code, so I'm assuming it's
> because datagram_poll() returns EPOLLERR.
> 
> The man page says:
> 
>        EPOLLERR
>               Error condition happened on the associated file
>               descriptor.  This event is also reported for the write end
>               of a pipe when the read end has been closed.
> 
>               epoll_wait(2) will always report for this event; it is not
>               necessary to set it in events when calling epoll_ctl().
> 
> To me that sounds like EPOLLERR is always implicitly enabled, and should
> be handled by the application. IOW it's an pure application bug.
> 
> Are you aware of any precedent for sockets adding in EPOLLOUT when
> EPOLLERR is set?

In my case, the first wake-up was by POLLIN, not POLLERR.
Please consider the scenario below.

------------CPU1 (kernel)----------  --------------CPU2 (app)--------------
...
a driver delivers uevent.            poll is waiting in schedule.
a driver delivers uevent.
a driver delivers uevent.
...
1) netlink_broadcast_deliver fails.
(sk_rmem_alloc > sk_rcvbuf)
                                     the app is scheduled, poll returns,
                                     and the app calls recv.
                                     (rcv queue is emptied)
                                     2)
netlink_overrun is called.
(NETLINK_S_CONGESTED flag is set,
ENOBUFS is written to sk_err, and
poll is woken up.)
                                     finishing its job, the app calls poll
                                     again; poll returns POLLERR.

                                     (the app has no POLLERR handler,)
                                     it calls poll, but gets POLLERR.
                                     it calls poll, but gets POLLERR.
                                     it calls poll, but gets POLLERR.
                                     ...

Interestingly, in this case, even though netlink overruns happened
frequently and caused POLLERRs, the application had been coping well
through POLLIN and recv(), without a dedicated POLLERR handler.
In the situation above, however, the rcv queue is already empty and the
NETLINK_S_CONGESTED flag prevents any further incoming packets, so POLLIN
never fires and the application is never prompted to call recv().

This "congested" situation is a bit ambiguous: the queue is empty, yet
'congested' persists. The kernel can no longer deliver uevents despite the
empty queue, which makes the 'congested' status permanent.

The reason netlink differs lies in the NETLINK_S_CONGESTED flag. If this
were UDP, the kernel, upon seeing the empty queue, would likely have kept
pushing received packets into it (making it possible to call recv()).

BRs,
JE Park.


