Message-ID: <AE90C24D6B3A694183C094C60CF0A2F6026B6F4D@saturn3.aculab.com>
Date: Fri, 15 Jun 2012 09:35:29 +0100
From: "David Laight" <David.Laight@...LAB.COM>
To: "Li Yu" <raise.sail@...il.com>,
"Linux Netdev List" <netdev@...r.kernel.org>
Cc: "Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>,
<davidel@...ilserver.org>
Subject: RE: [RFC] Introduce to batch variants of accept() and epoll_ctl() syscall
> We encounter a performance problem in a large scale computer
> cluster, which needs to handle a lot of incoming concurrent TCP
> connection requests.
>
> top(1) shows the kernel is the main CPU hog. The test is simple:
> just an accept() -> epoll_ctl(ADD) loop; the ratio of CPU sys% to
> si% is about 2:5.
>
> I also asked some experienced webserver/proxy developers on my team
> for suggestions; it seems that many userland programs already call
> accept() multiple times after being woken up by epoll_wait(), and
> the common pattern is then to add each fd that accept() returns to
> the epoll interface with an epoll_ctl() syscall.
>
> Therefore, I think we should introduce batch variants of the
> accept() and epoll_ctl() syscalls, just like sendmmsg() and
> recvmmsg().
...
Having seen the support added to NetBSD for sendmmsg() and
recvmmsg() (and I'm told the Linux code is much the same),
I'm surprised that merely cutting out a system-call entry/exit
and fd lookup is significant compared with the rest of the cost
of sending a message (which I presume is UDP here).
I'd be even more surprised if it were significant for an
incoming connection.
David