Date:	Fri, 16 Feb 2007 15:02:53 -0500
From:	Elad Lahav <>
Subject: ip_append_page and the socket send buffer

I wrote a function that is equivalent to udp_sendmsg, but uses
ip_append_page to attach data to an skb. The function is implemented as
follows:

1. Allocate a page and copy the given data to that page
2. Set up routing and cork the socket
3. Call ip_append_data to create an initial skb (with data length set to 0)
4. Call ip_append_page with the allocated page
5. Call udp_push_pending_frames to send the packet
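For concreteness, the five steps above can be sketched in pseudocode
(this is a hypothetical illustration, not the poster's actual code; the
real ip_append_data/ip_append_page signatures take more arguments, and
locking, routing setup, and error handling are omitted):

```
/* Pseudocode sketch -- signatures abbreviated, error paths omitted */
static int my_udp_send(struct sock *sk, void *data, size_t size)
{
	struct page *page;

	/* 1. allocate a page and copy the caller's data into it */
	page = alloc_page(GFP_KERNEL);
	memcpy(page_address(page), data, size);

	/* 2. look up the route and cork the socket so frames accumulate
	 *    (udp_sendmsg does this via routing lookup + UDP corking) */

	/* 3. create the initial skb with a zero payload length */
	ip_append_data(sk, ..., /* length */ 0, ...);

	/* 4. attach the page as a fragment instead of copying its data */
	ip_append_page(sk, page, /* offset */ 0, size, /* flags */ 0);

	/* 5. push everything queued on the socket out onto the wire */
	return udp_push_pending_frames(sk, udp_sk(sk));
}
```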

The function works correctly. Packets are generated and sent as 
expected: this was verified by looking at the packet contents on the 
receiving machine.
However, under load, there is a significant difference in the behaviour 
of udp_sendmsg, compared with my function. The problem is that the 
socket send buffer (wmem_alloc) quickly grows beyond its upper limit 
(which is 131071 by default). This results in numerous failures of 
ip_append_data with EAGAIN, degrading performance considerably.
udp_sendmsg, on the other hand, keeps wmem_alloc in a much smaller range 
under the same load.

Two notes:
1. Modifying the upper limit to 524287 solved the problem completely 
(regardless of the load)
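For anyone wanting to reproduce the workaround in note 1: the default
ceiling comes from the net.core.wmem_default / net.core.wmem_max sysctls
(assuming those are indeed the knobs involved here; this raises the limit
system-wide and needs root):

```shell
# Raise both the default and the maximum socket send-buffer size
# to the 524287 value mentioned above (diagnostic aid, not a fix).
sysctl -w net.core.wmem_default=524287
sysctl -w net.core.wmem_max=524287
```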
2. The same thing happens with multiple calls to ip_append_data (e.g.,
if I want to copy the data in two sections), so it is not a problem with
ip_append_page itself. This leads me to believe that the problem lies
with scatter/gather I/O.

Any thoughts?
