Date:   Sun, 20 Jan 2019 12:20:06 +0100
From:   Jiri Pirko <>
To:     Eran Ben Elisha <>
Cc:     Jiri Pirko <>,
        "David S. Miller" <>,
        Ariel Almog <>,
        Aya Levin <>,
        Moshe Shemesh <>
Subject: Re: [PATCH net-next v2 01/11] devlink: Add health buffer support

Thu, Jan 17, 2019 at 10:59:10PM CET, wrote:
>Devlink health buffer is a mechanism to pass descriptors between drivers
>and devlink. The API allows the driver to add objects, object pairs,
>value arrays (nested attributes), values and names.
>The driver can use this API to fill the buffers in a format which can be
>translated by devlink into netlink messages.
>In order to support this, an internal buffer descriptor is defined. It
>holds the data and metadata per attribute and is used to pass the
>actual commands to netlink.
>This mechanism will later be used in devlink health for dump and diagnose
>data stored by the drivers.
>Signed-off-by: Eran Ben Elisha <>
>Reviewed-by: Moshe Shemesh <>
> include/net/devlink.h        |  76 ++++++
> include/uapi/linux/devlink.h |   8 +
> net/core/devlink.c           | 501 +++++++++++++++++++++++++++++++++++
> 3 files changed, 585 insertions(+)


>+static int
>+devlink_health_buffer_snd(struct genl_info *info,
>+			  enum devlink_command cmd, int flags,
>+			  struct devlink_health_buffer **buffers_array,
>+			  u64 num_of_buffers)
>+{
>+	struct sk_buff *skb;
>+	struct nlmsghdr *nlh;
>+	void *hdr;
>+	int err;
>+	u64 i;
>+	for (i = 0; i < num_of_buffers; i++) {
>+		/* Skip buffer if driver did not fill it up with any data */
>+		if (!buffers_array[i]->offset)
>+			continue;
>+		skb = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
>+		if (!skb)
>+			return -ENOMEM;
>+		hdr = genlmsg_put(skb, info->snd_portid, info->snd_seq,
>+				  &devlink_nl_family, NLM_F_MULTI, cmd);
>+		if (!hdr)
>+			goto nla_put_failure;
>+		err = devlink_health_buffer_prepare_skb(skb, buffers_array[i]);
>+		if (err)
>+			goto nla_put_failure;
>+		genlmsg_end(skb, hdr);
>+		err = genlmsg_reply(skb, info);
>+		if (err)
>+			return err;
>+	}

So you have an array of "buffers". I don't see a need for it. Mapping
each "buffer" 1:1 to a netlink skb leads to lots of skbs for info
that could be sent in one or a few skbs.

The API to the driver should be different. The driver should not care
about any "buffer" or its size (in bytes) or how many of them there
should be. The driver should just construct a "message". The helpers
should be similar to what you have, but the arg should be, say,
"struct devlink_msg". It is not really anything specific to "health".
It is basically a json-like formatted message.

So the driver should use the API to open and close objects, to fill
values etc. Devlink should take care of allocating the needed
buffers/structs/objects on the fly. It could be one linked list of
objects for all it matters. No byte buffer needed.

Then, when devlink needs to send a netlink skb, it should take this
"struct devlink_msg" and translate it into one or more skbs.

Basically the whole API to the driver is wrong; I think it would be
much easier to revert, redo and reapply.
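For what it's worth, a rough userspace sketch of the shape such an API
could take (all names here are hypothetical, not the actual devlink
interface): the driver only opens/closes objects and fills values, and
everything lands on one linked list that devlink can later walk and pack
into however many skbs actually fit.

```c
/* Userspace model of the suggested "struct devlink_msg" idea.
 * Every name below is hypothetical -- this only illustrates the shape:
 * the driver appends typed items to a single list and never sees
 * byte buffers, buffer sizes, or buffer counts. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

enum dmsg_item_type {
	DMSG_OBJ_START,	/* open a nested object */
	DMSG_OBJ_END,	/* close the innermost open object */
	DMSG_U32,	/* a named u32 value */
};

struct dmsg_item {
	enum dmsg_item_type type;
	char name[32];
	unsigned int u32_val;
	struct dmsg_item *next;
};

struct devlink_msg {
	struct dmsg_item *head, *tail;
	unsigned int nitems;
};

static struct dmsg_item *dmsg_append(struct devlink_msg *msg,
				     enum dmsg_item_type type)
{
	struct dmsg_item *item = calloc(1, sizeof(*item));

	if (!item)
		return NULL;
	item->type = type;
	if (msg->tail)
		msg->tail->next = item;
	else
		msg->head = item;
	msg->tail = item;
	msg->nitems++;
	return item;
}

/* Driver-facing helpers: open/close objects, fill values. */
static int devlink_msg_obj_start(struct devlink_msg *msg, const char *name)
{
	struct dmsg_item *item = dmsg_append(msg, DMSG_OBJ_START);

	if (!item)
		return -1;
	strncpy(item->name, name, sizeof(item->name) - 1);
	return 0;
}

static int devlink_msg_obj_end(struct devlink_msg *msg)
{
	return dmsg_append(msg, DMSG_OBJ_END) ? 0 : -1;
}

static int devlink_msg_put_u32(struct devlink_msg *msg, const char *name,
			       unsigned int val)
{
	struct dmsg_item *item = dmsg_append(msg, DMSG_U32);

	if (!item)
		return -1;
	strncpy(item->name, name, sizeof(item->name) - 1);
	item->u32_val = val;
	return 0;
}
```

When it is time to reply, devlink would walk msg->head, emit each item as
a netlink attribute, and start a new NLM_F_MULTI skb only when the current
one is full -- so the item-to-skb mapping is decided by devlink at send
time, not by the driver at fill time.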

>+	skb = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
>+	if (!skb)
>+		return -ENOMEM;
>+	nlh = nlmsg_put(skb, info->snd_portid, info->snd_seq,
>+			NLMSG_DONE, 0, flags | NLM_F_MULTI);
>+	err = genlmsg_reply(skb, info);
>+	if (err)
>+		return err;
>+	return 0;
>+
>+nla_put_failure:
>+	err = -EIO;
>+	nlmsg_free(skb);
>+	return err;
> static const struct nla_policy devlink_nl_policy[DEVLINK_ATTR_MAX + 1] = {
