Message-ID: <202109181447.Nwq0VTzX-lkp@intel.com>
Date: Sat, 18 Sep 2021 14:14:54 +0800
From: kernel test robot <lkp@...el.com>
To: Cong Wang <cong.wang@...edance.com>
Cc: kbuild-all@...ts.01.org, linux-kernel@...r.kernel.org
Subject: [congwang:bpf 5/5] net/ipv4/tcp.c:566:25: error: implicit
declaration of function 'tcp_bpf_poll'; did you mean 'tcp_bpf_rtt'?
tree: https://github.com/congwang/linux.git bpf
head: 5d467183b34e09531688cab9bae950fa6d5d04d3
commit: 5d467183b34e09531688cab9bae950fa6d5d04d3 [5/5] tcp_bpf: poll psock queues too in tcp_poll()
config: ia64-defconfig (attached as .config)
compiler: ia64-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/congwang/linux/commit/5d467183b34e09531688cab9bae950fa6d5d04d3
        git remote add congwang https://github.com/congwang/linux.git
        git fetch --no-tags congwang bpf
        git checkout 5d467183b34e09531688cab9bae950fa6d5d04d3
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross ARCH=ia64
If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@...el.com>
All errors (new ones prefixed by >>):
   net/ipv4/tcp.c: In function 'tcp_poll':
>> net/ipv4/tcp.c:566:25: error: implicit declaration of function 'tcp_bpf_poll'; did you mean 'tcp_bpf_rtt'? [-Werror=implicit-function-declaration]
     566 |                 mask |= tcp_bpf_poll(sk);
         |                         ^~~~~~~~~~~~
         |                         tcp_bpf_rtt
   cc1: some warnings being treated as errors
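The failing config presumably has CONFIG_BPF_SYSCALL disabled, so net/ipv4/tcp_bpf.c (where the commit would define tcp_bpf_poll()) is not compiled and no declaration of the function is visible from net/ipv4/tcp.c. The usual fix for this pattern is a declaration plus a static-inline stub in include/net/tcp.h; a minimal sketch, assuming CONFIG_BPF_SYSCALL is the right guard (the exact Kconfig symbol used by the commit may differ):

    /* include/net/tcp.h -- sketch, not the actual patch */
    #ifdef CONFIG_BPF_SYSCALL
    __poll_t tcp_bpf_poll(struct sock *sk);
    #else
    static inline __poll_t tcp_bpf_poll(struct sock *sk)
    {
            /* No psock support built in, so nothing extra to report. */
            return 0;
    }
    #endif /* CONFIG_BPF_SYSCALL */

With the stub in place, the unconditional call at net/ipv4/tcp.c:566 compiles on configs without BPF support and folds away to nothing.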
vim +566 net/ipv4/tcp.c
   494
   495  /*
   496   *      Wait for a TCP event.
   497   *
   498   *      Note that we don't need to lock the socket, as the upper poll layers
   499   *      take care of normal races (between the test and the event) and we don't
   500   *      go look at any of the socket buffers directly.
   501   */
   502  __poll_t tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
   503  {
   504          __poll_t mask;
   505          struct sock *sk = sock->sk;
   506          const struct tcp_sock *tp = tcp_sk(sk);
   507          int state;
   508
   509          sock_poll_wait(file, sock, wait);
   510
   511          state = inet_sk_state_load(sk);
   512          if (state == TCP_LISTEN)
   513                  return inet_csk_listen_poll(sk);
   514
   515          /* Socket is not locked. We are protected from async events
   516           * by poll logic and correct handling of state changes
   517           * made by other threads is impossible in any case.
   518           */
   519
   520          mask = 0;
   521
   522          /*
   523           * EPOLLHUP is certainly not done right. But poll() doesn't
   524           * have a notion of HUP in just one direction, and for a
   525           * socket the read side is more interesting.
   526           *
   527           * Some poll() documentation says that EPOLLHUP is incompatible
   528           * with the EPOLLOUT/POLLWR flags, so somebody should check this
   529           * all. But careful, it tends to be safer to return too many
   530           * bits than too few, and you can easily break real applications
   531           * if you don't tell them that something has hung up!
   532           *
   533           * Check-me.
   534           *
   535           * Check number 1. EPOLLHUP is _UNMASKABLE_ event (see UNIX98 and
   536           * our fs/select.c). It means that after we received EOF,
   537           * poll always returns immediately, making impossible poll() on write()
   538           * in state CLOSE_WAIT. One solution is evident --- to set EPOLLHUP
   539           * if and only if shutdown has been made in both directions.
   540           * Actually, it is interesting to look how Solaris and DUX
   541           * solve this dilemma. I would prefer, if EPOLLHUP were maskable,
   542           * then we could set it on SND_SHUTDOWN. BTW examples given
   543           * in Stevens' books assume exactly this behaviour, it explains
   544           * why EPOLLHUP is incompatible with EPOLLOUT. --ANK
   545           *
   546           * NOTE. Check for TCP_CLOSE is added. The goal is to prevent
   547           * blocking on fresh not-connected or disconnected socket. --ANK
   548           */
   549          if (sk->sk_shutdown == SHUTDOWN_MASK || state == TCP_CLOSE)
   550                  mask |= EPOLLHUP;
   551          if (sk->sk_shutdown & RCV_SHUTDOWN)
   552                  mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP;
   553
   554          /* Connected or passive Fast Open socket? */
   555          if (state != TCP_SYN_SENT &&
   556              (state != TCP_SYN_RECV || rcu_access_pointer(tp->fastopen_rsk))) {
   557                  int target = sock_rcvlowat(sk, 0, INT_MAX);
   558
   559                  if (READ_ONCE(tp->urg_seq) == READ_ONCE(tp->copied_seq) &&
   560                      !sock_flag(sk, SOCK_URGINLINE) &&
   561                      tp->urg_data)
   562                          target++;
   563
   564                  if (tcp_stream_is_readable(sk, target))
   565                          mask |= EPOLLIN | EPOLLRDNORM;
 > 566                  mask |= tcp_bpf_poll(sk);
   567
   568                  if (!(sk->sk_shutdown & SEND_SHUTDOWN)) {
   569                          if (__sk_stream_is_writeable(sk, 1)) {
   570                                  mask |= EPOLLOUT | EPOLLWRNORM;
   571                          } else { /* send SIGIO later */
   572                                  sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk);
   573                                  set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
   574
   575                                  /* Race breaker. If space is freed after
   576                                   * wspace test but before the flags are set,
   577                                   * IO signal will be lost. Memory barrier
   578                                   * pairs with the input side.
   579                                   */
   580                                  smp_mb__after_atomic();
   581                                  if (__sk_stream_is_writeable(sk, 1))
   582                                          mask |= EPOLLOUT | EPOLLWRNORM;
   583                          }
   584                  } else
   585                          mask |= EPOLLOUT | EPOLLWRNORM;
   586
   587                  if (tp->urg_data & TCP_URG_VALID)
   588                          mask |= EPOLLPRI;
   589          } else if (state == TCP_SYN_SENT && inet_sk(sk)->defer_connect) {
   590                  /* Active TCP fastopen socket with defer_connect
   591                   * Return EPOLLOUT so application can call write()
   592                   * in order for kernel to generate SYN+data
   593                   */
   594                  mask |= EPOLLOUT | EPOLLWRNORM;
   595          }
   596          /* This barrier is coupled with smp_wmb() in tcp_reset() */
   597          smp_rmb();
   598          if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
   599                  mask |= EPOLLERR;
   600
   601          return mask;
   602  }
   603  EXPORT_SYMBOL(tcp_poll);
   604
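An alternative to a header stub is to guard the call site itself, again assuming CONFIG_BPF_SYSCALL is the symbol that gates tcp_bpf.o; a sketch of how line 566 would look, though the static-inline stub is usually preferred since it keeps tcp_poll() free of #ifdefs:

                if (tcp_stream_is_readable(sk, target))
                        mask |= EPOLLIN | EPOLLRDNORM;
        #ifdef CONFIG_BPF_SYSCALL
                /* Fold in events from the psock queues (per the commit subject). */
                mask |= tcp_bpf_poll(sk);
        #endif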
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org