Message-ID: <546EE7AD.90503@intel.com>
Date:	Thu, 20 Nov 2014 23:20:13 -0800
From:	John Fastabend <john.r.fastabend@...el.com>
To:	Simon Horman <simon.horman@...ronome.com>
CC:	John Fastabend <john.fastabend@...il.com>,
	Jamal Hadi Salim <jhs@...atatu.com>,
	Jiri Pirko <jiri@...nulli.us>, netdev@...r.kernel.org,
	davem@...emloft.net, nhorman@...driver.com, andy@...yhouse.net,
	tgraf@...g.ch, dborkman@...hat.com, ogerlitz@...lanox.com,
	jesse@...ira.com, pshelar@...ira.com, azhou@...ira.com,
	ben@...adent.org.uk, stephen@...workplumber.org,
	jeffrey.t.kirsher@...el.com, vyasevic@...hat.com,
	xiyou.wangcong@...il.com, edumazet@...gle.com, sfeldma@...il.com,
	f.fainelli@...il.com, roopa@...ulusnetworks.com,
	linville@...driver.com, jasowang@...hat.com, ebiederm@...ssion.com,
	nicolas.dichtel@...nd.com, ryazanov.s.a@...il.com,
	buytenh@...tstofly.org, aviadr@...lanox.com, nbd@...nwrt.org,
	alexei.starovoitov@...il.com, Neil.Jerram@...aswitch.com,
	ronye@...lanox.com, alexander.h.duyck@...hat.com,
	john.ronciak@...el.com, mleitner@...hat.com, shrijeet@...il.com,
	gospo@...ulusnetworks.com, bcrl@...ck.org
Subject: Re: [patch net-next v2 00/10] introduce rocker switch driver with
 hardware accelerated datapath api - phase 1: bridge fdb offload

On 11/20/2014 06:01 PM, Simon Horman wrote:
> Hi John,
> 
> On Wed, Nov 12, 2014 at 10:31:06PM -0800, John Fastabend wrote:
>> On 11/12/2014 09:44 PM, Simon Horman wrote:
>>> [snip]
>>>
>>>> Simon, if you're feeling adventurous any feedback on the repo link
>>>> would be great. I still need to squash the commit log into something
>>>> coherent; at the moment you can see all the errors and rewrites,
>>>> etc. as I made them.
>>>
>>> Hi John,
>>>
>>> here is some preliminary feedback:
>>>
>>> * I notice that the parse graph code isn't present yet.
>>>   I suppose this is a difficult piece that naturally follows many
>>>   other pieces. Nonetheless it is possibly the piece of most
>>>   interest to me :-)
>>
>> I can add this over the next few days. Also I wanted to publish some
>> more complex examples on top of rocker switch. The nic drivers are
>> interesting but not as complex as some of the switch devices.
> 
> I see that you have added the header graph, which seems pretty nice
> from my reading so far.

Great. I'm iterating over some other hardware now to be sure it will
work on some more complex configurations, and I'm about ready to start
the rocker implementation.

> 
> It seems to allow for arbitrary connections between instances
> of net_flow_header_node, including loops, I suppose. This seems
> nice and flexible to me.

Right, loops could be supported.


> 
> I have a very minor update to contribute which helped me to
> read the code. Please feel free to squash/ignore/...
> 

I like the patch. This sort of cleanup is needed.

> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_pipeline.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_pipeline.h
> index a4818ab..4025a61 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_pipeline.h
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_pipeline.h
> @@ -412,7 +412,7 @@ struct net_flow_jump_table ixgbe_vlan_inner_jump[2] = {
>  		   .type = NET_FLOW_FIELD_REF_ATTR_TYPE_U16,
>  		   .value_u16 = 0x0800,
>  		},
> -		.node = 4,
> +		.node = HEADER_INSTANCE_IP,
>  	},
>  	{
>  		.field = {0},
> @@ -443,7 +443,7 @@ struct net_flow_jump_table ixgbe_ip_jump[2] = {
>  		   .type = NET_FLOW_FIELD_REF_ATTR_TYPE_U8,
>  		   .value_u8 = 0x06,
>  		},
> -		.node = 5,
> +		.node = HEADER_INSTANCE_TCP,
>  	},
>  	{
>  		.field = {0},
> 
>> There is also the table graph layout which I wanted to tweak a bit. At
>> the moment I have hardware that can run tables in parallel and some
>> that executes tables in sequence. It might not be clear from the code
>> (which is why I need the cleanup) but the source id is being used to
>> indicate if the tables are executed in parallel or not.
> 
> Thanks, that was not clear to me.
> 
>>> * Will del and update flows require flows to already exist?
>>>   And similarly, will add flow require flows with the same match to not
>>>   already exist?  If so, the error handling seems tricky if more than one
>>>   flow is to be deleted/updated. IIRC there was some discussion of that
>>>   kind of issue at the (double) round table discussion on the last day of
>>>   LPC14 in Düsseldorf.
>>
>> I would expect del/updates for flows that don't exist should fail.
> 
> Am I right in thinking that del and update flow NDOs may take
> a list of flows? If so I think some consideration needs
> to be made for handling failure of e.g. the last element of
> the list when the previous elements succeeded.

Yes, the lists of adds/deletes are needed for user space to push down
bulk commands. At init time, when management software resets, and
likely in other cases, we get large sets of rules being pushed down;
doing this in bulk is much better than one-offing the flow commands.

> 
> I suppose that user-space could dump the flow table on an error and
> adjust its state accordingly. But that seems somewhat onerous.

Right, we talked about this briefly at LPC if I recall correctly. I see
a couple of ways to do this. Either we do a best effort and return the
first flow that hit an error, so user space can learn where the failure
occurred. Another option is to have the driver roll back and remove any
flows that were added, leaving the driver in the same state it was in
before the add-flow command started. Or we could support both and let
user space tell us whether to use best-effort or all-or-nothing semantics.

Any thoughts?

> 
>> I didn't intend to add any checks in the kernel to verify the matches
>> are unique. My opinion on this is that user space shouldn't add new
>> duplicate flows. And if it does, hardware resources will be wasted.
> 
> I don't have any strong opinions on that at this time.
> But it does seem reasonable so long as it's clear that is the case.

When I get around to documenting this I'll call it out explicitly. I
think we should have something in ./Documentation/networking/ for this
API.

> 
>>> * Should the .node_count value of ixgbe_table_node_l2 be 3?
>>>   ixgbe_table_graph_nodes has three elements but perhaps you
>>>   are intentionally excluding the last element ixgbe_table_node_nil?
>>>
>>
>> Actually I could just drop the node_count at the moment because I've
>> been null terminating the arrays with null items.
>>
>> I should either add a count field to all the structures or null terminate
>> the arrays. For now I mostly null terminate the arrays when I use
>> them. For example, matches is null terminated, same for actions.
> 
> It looks like you have moved towards the null termination option.
> No objections here.
> 

Yep, the code looked nicer, to me at least, with this approach.

Thanks for looking it over,
John
