lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <344810ddca4b0baa9d6844c5100ff091dcda50aa.camel@redhat.com>
Date: Tue, 05 Mar 2024 09:47:04 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Kuniyuki Iwashima <kuniyu@...zon.com>, "David S. Miller"
	 <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski
	 <kuba@...nel.org>
Cc: Kuniyuki Iwashima <kuni1840@...il.com>, netdev@...r.kernel.org
Subject: Re: [PATCH v4 net-next 02/15] af_unix: Allocate struct unix_edge
 for each inflight AF_UNIX fd.

On Thu, 2024-02-29 at 18:22 -0800, Kuniyuki Iwashima wrote:
> As with the previous patch, we preallocate to skb's scm_fp_list an
> array of struct unix_edge in the number of inflight AF_UNIX fds.
> 
> There we just preallocate memory and do not use immediately because
> sendmsg() could fail after this point.  The actual use will be in
> the next patch.
> 
> When we queue skb with inflight edges, we will set the inflight
> socket's unix_sock as unix_edge->predecessor and the receiver's
> unix_sock as successor, and then we will link the edge to the
> inflight socket's unix_vertex.edges.
> 
> Note that we set NULL to cloned scm_fp_list.edges in scm_fp_dup()
> so that MSG_PEEK does not change the shape of the directed graph.
> 
> Signed-off-by: Kuniyuki Iwashima <kuniyu@...zon.com>
> ---
>  include/net/af_unix.h | 6 ++++++
>  include/net/scm.h     | 5 +++++
>  net/core/scm.c        | 2 ++
>  net/unix/garbage.c    | 6 ++++++
>  4 files changed, 19 insertions(+)
> 
> diff --git a/include/net/af_unix.h b/include/net/af_unix.h
> index c270877a5256..55c4abc26a71 100644
> --- a/include/net/af_unix.h
> +++ b/include/net/af_unix.h
> @@ -33,6 +33,12 @@ struct unix_vertex {
>  	unsigned long out_degree;
>  };
>  
> +struct unix_edge {
> +	struct unix_sock *predecessor;
> +	struct unix_sock *successor;
> +	struct list_head vertex_entry;
> +};
> +
>  struct sock *unix_peer_get(struct sock *sk);
>  
>  #define UNIX_HASH_MOD	(256 - 1)
> diff --git a/include/net/scm.h b/include/net/scm.h
> index e34321b6e204..5f5154e5096d 100644
> --- a/include/net/scm.h
> +++ b/include/net/scm.h
> @@ -23,12 +23,17 @@ struct scm_creds {
>  	kgid_t	gid;
>  };
>  
> +#ifdef CONFIG_UNIX
> +struct unix_edge;
> +#endif
> +
>  struct scm_fp_list {
>  	short			count;
>  	short			count_unix;
>  	short			max;
>  #ifdef CONFIG_UNIX
>  	struct list_head	vertices;
> +	struct unix_edge	*edges;
>  #endif
>  	struct user_struct	*user;
>  	struct file		*fp[SCM_MAX_FD];
> diff --git a/net/core/scm.c b/net/core/scm.c
> index 87dfec1c3378..1bcc8a2d65e3 100644
> --- a/net/core/scm.c
> +++ b/net/core/scm.c
> @@ -90,6 +90,7 @@ static int scm_fp_copy(struct cmsghdr *cmsg, struct scm_fp_list **fplp)
>  		fpl->max = SCM_MAX_FD;
>  		fpl->user = NULL;
>  #if IS_ENABLED(CONFIG_UNIX)
> +		fpl->edges = NULL;
>  		INIT_LIST_HEAD(&fpl->vertices);
>  #endif
>  	}
> @@ -383,6 +384,7 @@ struct scm_fp_list *scm_fp_dup(struct scm_fp_list *fpl)
>  		new_fpl->max = new_fpl->count;
>  		new_fpl->user = get_uid(fpl->user);
>  #if IS_ENABLED(CONFIG_UNIX)
> +		new_fpl->edges = NULL;
>  		INIT_LIST_HEAD(&new_fpl->vertices);
>  #endif
>  	}
> diff --git a/net/unix/garbage.c b/net/unix/garbage.c
> index 75bdf66b81df..f31917683288 100644
> --- a/net/unix/garbage.c
> +++ b/net/unix/garbage.c
> @@ -127,6 +127,11 @@ int unix_prepare_fpl(struct scm_fp_list *fpl)
>  		list_add(&vertex->entry, &fpl->vertices);
>  	}
>  
> +	fpl->edges = kvmalloc_array(fpl->count_unix, sizeof(*fpl->edges),
> +				    GFP_KERNEL_ACCOUNT);

If I read correctly, the total amount of additionally memory used will
be proportional to vertices num + edges num. Can you please provide a
the real figures for some reasonable unix fd numbers (say a small
table), to verify this memory usage is reasonable?

Thanks,

Paolo


Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ