Message-ID: <9e0f5d24-968c-4356-a243-62f972a17570@bytedance.com>
Date: Mon, 13 May 2024 12:47:20 -0700
From: Zijian Zhang <zijianzhang@...edance.com>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>, netdev@...r.kernel.org
Cc: edumazet@...gle.com, cong.wang@...edance.com, xiaochun.lu@...edance.com
Subject: Re: [External] Re: [PATCH net-next v3 2/3] sock: add MSG_ZEROCOPY
 notification mechanism based on msg_control

On 5/12/24 5:58 PM, Willem de Bruijn wrote:
> zijianzhang@ wrote:
>> From: Zijian Zhang <zijianzhang@...edance.com>
>>
>> The MSG_ZEROCOPY flag enables copy avoidance for socket send calls.
>> However, zerocopy is not a free lunch. Apart from the management of
>> user pages, the combination of poll + recvmsg needed to receive
>> notifications adds non-negligible overhead in applications, which can
>> sometimes exceed the CPU savings from zerocopy itself. We try to solve
>> this problem with a new notification mechanism based on msg_control.
>> This mechanism aims to reduce the cost of receiving notifications by
>> embedding them directly into user arguments passed with each sendmsg
>> control message, which significantly reduces the complexity and
>> overhead of managing them. In the ideal pattern, the user keeps
>> calling sendmsg with an SCM_ZC_NOTIFICATION msg_control, and
>> notifications are delivered as soon as possible.
>>
>> Signed-off-by: Zijian Zhang <zijianzhang@...edance.com>
>> Signed-off-by: Xiaochun Lu <xiaochun.lu@...edance.com>
> 
>> +#include <linux/types.h>
>> +
>>   /*
>>    * Desired design of maximum size and alignment (see RFC2553)
>>    */
>> @@ -35,4 +37,12 @@ struct __kernel_sockaddr_storage {
>>   #define SOCK_TXREHASH_DISABLED	0
>>   #define SOCK_TXREHASH_ENABLED	1
>>   
>> +#define SOCK_ZC_INFO_MAX 128
>> +
>> +struct zc_info_elem {
>> +	__u32 lo;
>> +	__u32 hi;
>> +	__u8 zerocopy;
>> +};
>> +
>>   #endif /* _UAPI_LINUX_SOCKET_H */
>> diff --git a/net/core/sock.c b/net/core/sock.c
>> index 8d6e638b5426..15da609be026 100644
>> --- a/net/core/sock.c
>> +++ b/net/core/sock.c
>> @@ -2842,6 +2842,74 @@ int __sock_cmsg_send(struct sock *sk, struct cmsghdr *cmsg,
>>   	case SCM_RIGHTS:
>>   	case SCM_CREDENTIALS:
>>   		break;
>> +	case SCM_ZC_NOTIFICATION: {
>> +		int ret, i = 0;
>> +		int cmsg_data_len, zc_info_elem_num;
>> +		void __user	*usr_addr;
>> +		struct zc_info_elem zc_info_kern[SOCK_ZC_INFO_MAX];
>> +		unsigned long flags;
>> +		struct sk_buff_head *q, local_q;
>> +		struct sk_buff *skb, *tmp;
>> +		struct sock_exterr_skb *serr;
> 
> minor: reverse xmas tree
> 

Ack.
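
Something like this, longest line first (a sketch of the reordering
only, reusing the declarations from the patch above):

	struct zc_info_elem zc_info_kern[SOCK_ZC_INFO_MAX];
	int cmsg_data_len, zc_info_elem_num;
	struct sk_buff_head *q, local_q;
	struct sock_exterr_skb *serr;
	struct sk_buff *skb, *tmp;
	void __user *usr_addr;
	unsigned long flags;
	int ret, i = 0;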

>> +
>> +		if (!sock_flag(sk, SOCK_ZEROCOPY) || sk->sk_family == PF_RDS)
>> +			return -EINVAL;
> 
> Is this mechanism supported for PF_RDS?
> The next patch fails on PF_RDS + '-n'
> 

Nice catch! This mechanism does not support PF_RDS; I will update the
selftest code.
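
Roughly, the selftest would refuse the notification mode on PF_RDS; a
sketch (cfg_family already exists in msg_zerocopy.c, while the '-n'
option and its cfg_notification_limit variable come from this series,
so treat those names as assumptions):

	/* SCM_ZC_NOTIFICATION is rejected for PF_RDS, so refuse '-n' there */
	if (cfg_family == PF_RDS && cfg_notification_limit)
		error(1, 0, "-n is not supported for PF_RDS");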

>> +
>> +		cmsg_data_len = cmsg->cmsg_len - sizeof(struct cmsghdr);
>> +		if (cmsg_data_len % sizeof(struct zc_info_elem))
>> +			return -EINVAL;
>> +
>> +		zc_info_elem_num = cmsg_data_len / sizeof(struct zc_info_elem);
>> +		if (!zc_info_elem_num || zc_info_elem_num > SOCK_ZC_INFO_MAX)
>> +			return -EINVAL;
>> +
>> +		if (in_compat_syscall())
>> +			usr_addr = compat_ptr(*(compat_uptr_t *)CMSG_DATA(cmsg));
>> +		else
>> +			usr_addr = (void __user *)*(void **)CMSG_DATA(cmsg);
> 
> The main design issue with this series is this indirection, rather
> than passing the array of notifications as cmsg.
> 
> This trick circumvents having to deal with compat issues and having to
> figure out copy_to_user in ____sys_sendmsg (as msg_control is an
> in-kernel copy).
> 
> This is quite hacky, from an API design PoV.
> 
> As is passing a pointer while expecting msg_controllen to hold the
> length not of the pointer, but of the pointed-to user buffer.
> 
> I had also hoped for more significant savings, especially given the
> higher syscall overhead from the Meltdown and Spectre mitigations,
> compared to when MSG_ZEROCOPY was introduced and I last tried this
> optimization.
>
Thanks for the summary, I totally agree! Designing the API this way
was a hard trade-off.
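
For readers following the thread, here is a minimal userspace sketch
of the interface as proposed, illustrating exactly that indirection:
the cmsg payload carries only a pointer, while cmsg_len is sized for
the pointed-to zc_info array. SCM_ZC_NOTIFICATION and struct
zc_info_elem come from this series; ZC_BATCH and the helper name are
illustrative:

	#include <linux/types.h>
	#include <string.h>
	#include <sys/socket.h>
	#include <sys/uio.h>

	#define ZC_BATCH 16	/* illustrative batch size, <= SOCK_ZC_INFO_MAX */

	struct zc_info_elem {
		__u32 lo;
		__u32 hi;
		__u8 zerocopy;
	};

	static int send_and_reap(int fd, struct iovec *iov)
	{
		struct zc_info_elem zc_info[ZC_BATCH];
		char control[CMSG_SPACE(ZC_BATCH * sizeof(struct zc_info_elem))];
		void *addr = zc_info;
		struct msghdr msg = {0};
		struct cmsghdr *cm;

		msg.msg_iov = iov;
		msg.msg_iovlen = 1;
		msg.msg_control = control;
		msg.msg_controllen = sizeof(control);

		cm = CMSG_FIRSTHDR(&msg);
		cm->cmsg_level = SOL_SOCKET;
		cm->cmsg_type = SCM_ZC_NOTIFICATION;
		/* sized for the pointed-to array, not for the pointer */
		cm->cmsg_len = CMSG_LEN(ZC_BATCH * sizeof(struct zc_info_elem));
		memcpy(CMSG_DATA(cm), &addr, sizeof(addr));

		/* on success the kernel fills zc_info[] with completed
		 * [lo, hi] ranges and whether each fell back to a copy
		 */
		return sendmsg(fd, &msg, MSG_ZEROCOPY);
	}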

>> +		if (!access_ok(usr_addr, cmsg_data_len))
>> +			return -EFAULT;
>> +
>> +		q = &sk->sk_error_queue;
>> +		skb_queue_head_init(&local_q);
>> +		spin_lock_irqsave(&q->lock, flags);
>> +		skb = skb_peek(q);
>> +		while (skb && i < zc_info_elem_num) {
>> +			struct sk_buff *skb_next = skb_peek_next(skb, q);
>> +
>> +			serr = SKB_EXT_ERR(skb);
>> +			if (serr->ee.ee_errno == 0 &&
>> +			    serr->ee.ee_origin == SO_EE_ORIGIN_ZEROCOPY) {
>> +				zc_info_kern[i].hi = serr->ee.ee_data;
>> +				zc_info_kern[i].lo = serr->ee.ee_info;
>> +				zc_info_kern[i].zerocopy = !(serr->ee.ee_code
>> +								& SO_EE_CODE_ZEROCOPY_COPIED);
>> +				__skb_unlink(skb, q);
>> +				__skb_queue_tail(&local_q, skb);
>> +				i++;
>> +			}
>> +			skb = skb_next;
>> +		}
>> +		spin_unlock_irqrestore(&q->lock, flags);
>> +
>> +		ret = copy_to_user(usr_addr,
>> +				   zc_info_kern,
>> +				   i * sizeof(struct zc_info_elem));
>> +
>> +		if (unlikely(ret)) {
>> +			spin_lock_irqsave(&q->lock, flags);
>> +			skb_queue_reverse_walk_safe(&local_q, skb, tmp) {
>> +				__skb_unlink(skb, &local_q);
>> +				__skb_queue_head(q, skb);
>> +			}
> 
> Can just list_splice_init?
> 

Ack.
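
For skb lists the analogue is skb_queue_splice_init(), which splices
the whole local list back to the head of the error queue in one step
while preserving order; roughly:

	if (unlikely(ret)) {
		spin_lock_irqsave(&q->lock, flags);
		/* return the un-reported notifications, oldest first */
		skb_queue_splice_init(&local_q, q);
		spin_unlock_irqrestore(&q->lock, flags);
		return -EFAULT;
	}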

>> +			spin_unlock_irqrestore(&q->lock, flags);
>> +			return -EFAULT;
>> +		}
>> +
>> +		while ((skb = __skb_dequeue(&local_q)))
>> +			consume_skb(skb);
>> +		break;
>> +	}
>>   	default:
>>   		return -EINVAL;
>>   	}
>> -- 
>> 2.20.1
>>
> 
> 
