Date:   Thu, 24 Nov 2022 18:33:27 +0100
From:   Alexander Lobakin <alexandr.lobakin@...el.com>
To:     wangchuanlei <wangchuanlei@...pur.com>
Cc:     Alexander Lobakin <alexandr.lobakin@...el.com>, pabeni@...hat.com,
        echaudro@...hat.com, pshelar@....org, davem@...emloft.net,
        edumazet@...gle.com, kuba@...nel.org, wangpeihui@...pur.com,
        netdev@...r.kernel.org, dev@...nvswitch.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] [openvswitch v4] openvswitch: Add support to count upcall packets

From: wangchuanlei <wangchuanlei@...pur.com>
Date: Wed, 23 Nov 2022 21:24:16 -0500

> Hi,
>     Thank you for the review! I will send a new version of the patch based on your comments,
> and I give an explanation for every comment from you, please see below!

Oh, just noticed: the subject prefix [openvswitch] is not correct,
please use [PATCH net-next v5] next time.

> 
> Best regards!
> wangchuanlei

[...]

> > +		const struct vport *p = OVS_CB(skb)->input_vport;
> > +		struct vport_upcall_stats_percpu *vport_stats;
> > +
> > +		vport_stats = this_cpu_ptr(p->vport_upcall_stats_percpu);
> 
> Why make a separate structure? You can just expand dp_stats_percpu; this function would then be just a couple of lines in ovs_dp_upcall().
> -- emm, because these statistics are based on the vport, the new structure should be inserted into "struct vport"

Ah, my bad. Didn't notice that :')
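
For reference, from the hunks quoted in this mail the per-vport structure
would then look roughly like this (a sketch reconstructed from the fields
the patch uses, not necessarily the exact definition):

	/* Per-CPU upcall counters embedded in struct vport
	 * (field names taken from the hunks quoted below).
	 */
	struct vport_upcall_stats_percpu {
		struct u64_stats_sync syncp;
		u64_stats_t n_upcall_success;
		u64_stats_t n_upcall_fail;
	};

	/* ...and the corresponding member in struct vport: */
	struct vport_upcall_stats_percpu __percpu *vport_upcall_stats_percpu;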

> 
> 
> > +		u64_stats_update_begin(&vport_stats->syncp);
> > +		if (upcall_success)
> > +			u64_stats_inc(&vport_stats->n_upcall_success);
> > +		else
> > +			u64_stats_inc(&vport_stats->n_upcall_fail);
> > +		u64_stats_update_end(&vport_stats->syncp);
> > +	}
> > +}
> > +
> >  void ovs_dp_detach_port(struct vport *p)  {
> >  	ASSERT_OVSL();
> > @@ -216,6 +235,9 @@ void ovs_dp_detach_port(struct vport *p)
> >  	/* First drop references to device. */
> >  	hlist_del_rcu(&p->dp_hash_node);
> >  
> > +	/* Free percpu memory */
> > +	free_percpu(p->vport_upcall_stats_percpu);
> > +
> >  	/* Then destroy it. */
> >  	ovs_vport_del(p);
> >  }
> > @@ -305,6 +327,8 @@ int ovs_dp_upcall(struct datapath *dp, struct sk_buff *skb,
> >  		err = queue_userspace_packet(dp, skb, key, upcall_info, cutlen);
> >  	else
> >  		err = queue_gso_packets(dp, skb, key, upcall_info, cutlen);
> > +
> > +	ovs_vport_upcalls(skb, upcall_info, !err);
> >  	if (err)
> >  		goto err;
> 
> Also, as you may see, your ::upcall_fail counter will always be exactly the same as stats->n_lost, so there's no point introducing a new one.
> However, you can expand the structure dp_stats_percpu and add a new field there which would store the number of successful upcalls.
> ...but I don't see a reason for this, to be honest. From my PoV, it's better to count the number of successfully processed packets at the end of queue_userspace_packet(), right before the 'out:'
> label [0]. But please make sure you don't duplicate some other counter (I'm not deep into OvS, so can't say for sure if there's anything similar to what you want).
> -- in OvS, stats->n_lost only counts the sum of packets across all ports, not per individual port, so expanding the structure dp_stats_percpu may not be suitable
> -- and counting failed upcall packets is useful because not all upcall packets are successfully sent.

Yes, I see now, thanks for the explanation! I think it's a good idea
in general to introduce OvS per-vport stats. There are some already, but
they're stored in net_device::dev_stats, which I'm not a fan of :D
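
If you then expose these counters to user space (e.g. via a new vport
netlink attribute), the usual pattern for reading per-CPU u64_stats is
something along these lines (just a sketch, the helper name and signature
are made up for illustration):

	/* Sketch: sum the per-CPU upcall counters of one vport.
	 * The fetch_begin/retry loop gives a consistent snapshot of
	 * each CPU's pair of counters, also on 32-bit kernels.
	 */
	static void ovs_vport_get_upcall_stats(struct vport *vport,
					       u64 *success, u64 *fail)
	{
		int i;

		*success = 0;
		*fail = 0;

		for_each_possible_cpu(i) {
			const struct vport_upcall_stats_percpu *stats;
			unsigned int start;
			u64 s, f;

			stats = per_cpu_ptr(vport->vport_upcall_stats_percpu, i);
			do {
				start = u64_stats_fetch_begin(&stats->syncp);
				s = u64_stats_read(&stats->n_upcall_success);
				f = u64_stats_read(&stats->n_upcall_fail);
			} while (u64_stats_fetch_retry(&stats->syncp, start));

			*success += s;
			*fail += f;
		}
	}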

> 
> >  
> > @@ -1825,6 +1849,13 @@ static int ovs_dp_cmd_new(struct sk_buff *skb, struct genl_info *info)
> >  		goto err_destroy_portids;
> >  	}
> >  
> > +	vport->vport_upcall_stats_percpu =
> 
> This can be at least twice as short, e.g. 'upcall_stats'. Don't try to describe every detail in symbol names.
> --yes!
> > +				netdev_alloc_pcpu_stats(struct vport_upcall_stats_percpu);
> > +	if (!vport->vport_upcall_stats_percpu) {
> > +		err = -ENOMEM;
> > +		goto err_destroy_upcall_stats;
> 
> I know you followed the previous label logic, but you actually aren't destroying the stats under this label. Here you should have `goto err_destroy_portids`, as that's what you're actually doing on that error path.
> -- here it just keeps the format of the code, and has no influence on the function

Correct, so you can use the already existing label here.
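
I.e., together with the shorter field name suggested above, roughly
(sketch only):

	/* Sketch: reuse the existing label, since destroying the
	 * portids is what this error path actually does.
	 */
	vport->upcall_stats =
		netdev_alloc_pcpu_stats(struct vport_upcall_stats_percpu);
	if (!vport->upcall_stats) {
		err = -ENOMEM;
		goto err_destroy_portids;
	}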

> 
> > +	}
> > +
> >  	err = ovs_dp_cmd_fill_info(dp, reply, info->snd_portid,
> >  				   info->snd_seq, 0, OVS_DP_CMD_NEW);
> >  	BUG_ON(err < 0);
> 
> [...]
> 
> > @@ -2278,6 +2321,14 @@ static int ovs_vport_cmd_new(struct sk_buff *skb, struct genl_info *info)
> >  		goto exit_unlock_free;
> >  	}
> >  
> > +	vport->vport_upcall_stats_percpu =
> > +		netdev_alloc_pcpu_stats(struct vport_upcall_stats_percpu);
> > +
> > +	if (!vport->vport_upcall_stats_percpu) {
> > +		err = -ENOMEM;
> > +		goto exit_unlock_free;
> > +	}
> 
> Why do you allocate them twice?
> -- this is in a different code segment, in vport_cmd_new; the other is in dp_cmd_new, so they have no collisions

+ (resolved)

> 
> > +
> >  	err = ovs_vport_cmd_fill_info(vport, reply, genl_info_net(info),
> >  				      info->snd_portid, info->snd_seq, 0,
> >  				      OVS_VPORT_CMD_NEW, GFP_KERNEL);

[...]

> > --
> > 2.27.0
> 
> [0] https://elixir.bootlin.com/linux/v6.1-rc6/source/net/openvswitch/datapath.c#L557
> 
> Thanks,
> Olek

Thanks,
Olek
