Message-ID: <20171228162314.GA1983@nanopsycho>
Date:   Thu, 28 Dec 2017 17:23:14 +0100
From:   Jiri Pirko <jiri@...nulli.us>
To:     David Ahern <dsa@...ulusnetworks.com>
Cc:     Yuval Mintz <yuvalm@...lanox.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "davem@...emloft.net" <davem@...emloft.net>,
        Arkadi Sharshevsky <arkadis@...lanox.com>,
        mlxsw <mlxsw@...lanox.com>, "andrew@...n.ch" <andrew@...n.ch>,
        "vivien.didelot@...oirfairelinux.com" 
        <vivien.didelot@...oirfairelinux.com>,
        "f.fainelli@...il.com" <f.fainelli@...il.com>,
        "michael.chan@...adcom.com" <michael.chan@...adcom.com>,
        "ganeshgr@...lsio.com" <ganeshgr@...lsio.com>,
        Saeed Mahameed <saeedm@...lanox.com>,
        Matan Barak <matanb@...lanox.com>,
        Leon Romanovsky <leonro@...lanox.com>,
        Ido Schimmel <idosch@...lanox.com>,
        "jakub.kicinski@...ronome.com" <jakub.kicinski@...ronome.com>,
        "ast@...nel.org" <ast@...nel.org>,
        "daniel@...earbox.net" <daniel@...earbox.net>,
        "simon.horman@...ronome.com" <simon.horman@...ronome.com>,
        "pieter.jansenvanvuuren@...ronome.com" 
        <pieter.jansenvanvuuren@...ronome.com>,
        "john.hurley@...ronome.com" <john.hurley@...ronome.com>,
        "alexander.h.duyck@...el.com" <alexander.h.duyck@...el.com>,
        "linville@...driver.com" <linville@...driver.com>,
        "gospo@...adcom.com" <gospo@...adcom.com>,
        "steven.lin1@...adcom.com" <steven.lin1@...adcom.com>,
        Or Gerlitz <ogerlitz@...lanox.com>,
        "roopa@...ulusnetworks.com" <roopa@...ulusnetworks.com>,
        Shrijeet Mukherjee <shm@...ulusnetworks.com>
Subject: Re: [patch net-next v2 00/10] Add support for resource abstraction

Thu, Dec 28, 2017 at 05:09:09PM CET, dsa@...ulusnetworks.com wrote:
>On 12/28/17 2:25 AM, Yuval Mintz wrote:
>>>>> Again, I have no objections to the kvd, linear, hash, etc. terms as
>>>>> they do relate to Mellanox products. But kvd/linear, for example, does
>>>>> not correlate to industry standard concepts in any way. My request is
>>>>> that the resource listing guide the user in some way, stating what
>>>>> these resources mean.
>>>>
>>>> So would the shown relation to dpipe tables be enough, or would you
>>>> still like to see some description? I don't like the description concept
>>>> here, as the relation to dpipe tables should tell the user exactly what
>>>> he needs to know.
>>>
>>> I believe it is useful to have a short, 1-line description that gives
>>> the user some memory jogger as to what the resource is used for. It does
>>> not have to be exhaustive, but the user should not have to do mental
>>> jumping jacks, running a bunch of commands, to understand the resources
>>> for vendor-specific ASICs.
>> 
>> Perhaps we can simply have the devlink utility output the dpipe
>> table[s] associated with the resource when showing the resource?
>> It would contain live information as well as prevent the need for
>> 'mental jumping jacks'.
>> 
>
>My primary contention with this static partitioning is that the proposal
>does not give the user the information they need to make decisions.
>
>As I mentioned earlier, the resource show command gives this:
>$ devlink resource show pci/0000:03:00.0
>pci/0000:03:00.0:
>  name kvd size 245760 size_valid true
>  resources:
>    name linear size 98304 occ 0
>    name hash_double size 60416
>    name hash_single size 87040
>
>the paths /kvd/linear, /kvd/hash_single and /kvd/hash_double are
>essentially random names (nothing related to industry standard names)

Of course. There is no industry standard for internal ASIC
implementations. This is the same as for dpipe: there is no industry
standard for the ASIC pipeline, and dpipe visualizes it. This resource
patchset visualizes the internal ASIC resources and their mapping to the
individual dpipe tables.
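To illustrate with the numbers from the resource show output above: the
kvd parent is partitioned by its sub-resources, so
98304 (linear) + 87040 (hash_single) + 60416 (hash_double) = 245760 (kvd).
A user who wants to trade hash_double space for hash_single space would
shrink one partition and grow the other. Assuming a "resource set"
command along the lines of what this set proposes, with the new layout
taking effect on the next reload (the sizes here are purely illustrative
and subject to driver-enforced granularity), that could look roughly
like:

$ devlink resource set pci/0000:03:00.0 path /kvd/hash_double size 53248
$ devlink resource set pci/0000:03:00.0 path /kvd/hash_single size 94208
$ devlink dev reload pci/0000:03:00.0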


>and the listed sizes are random numbers (no units)[1]. There is nothing
>there to tell a user what they can adjust or why they would want to make
>an adjustment.
>
>
>Looking at 'dpipe table show':
>
>$ devlink dpipe table show pci/0000:03:00.0
>pci/0000:03:00.0:
>  name mlxsw_erif size 1000 counters_enabled false
>  match:
>    type field_exact header mlxsw_meta field erif_port mapping ifindex
>  action:
>    type field_modify header mlxsw_meta field l3_forward
>    type field_modify header mlxsw_meta field l3_drop
>
>  resource_path /kvd/hash_single name mlxsw_host4 size 62 counters_enabled false
>  match:
>    type field_exact header mlxsw_meta field erif_port mapping ifindex
>    type field_exact header ipv4 field destination ip
>  action:
>    type field_modify header ethernet field destination mac
>
>  resource_path /kvd/hash_double name mlxsw_host6 size 0 counters_enabled false
>  match:
>    type field_exact header mlxsw_meta field erif_port mapping ifindex
>    type field_exact header ipv6 field destination ip
>  action:
>    type field_modify header ethernet field destination mac
>
>  resource_path /kvd/linear name mlxsw_adj size 0 counters_enabled false
>  match:
>    type field_exact header mlxsw_meta field adj_index
>    type field_exact header mlxsw_meta field adj_size
>    type field_exact header mlxsw_meta field adj_hash_index
>  action:
>    type field_modify header ethernet field destination mac
>    type field_modify header mlxsw_meta field erif_port mapping ifindex
>
>
>So there are 4 tables exported to userspace:
>
>1. mlxsw_erif, a table which is not in any of the kvd regions (no resource
>path is given) and has a size of 1000. Does mlxsw_erif mean a RIF, as in
>Router Interface? So the switch supports up to 1000 router interfaces.
>
>2. mlxsw_host4 in /kvd/hash_single with a size of 62. Based on the

Size tells you the actual size; it cannot give you a maximum size. The
reason is simple: the resources are shared among multiple tables. That
is exactly what this resource patchset makes visible.
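For example, with the output above, mlxsw_host4 (size 62) lives in
/kvd/hash_single, whose partition is 87040 entries. There is no fixed
per-table maximum: roughly speaking, mlxsw_host4 can grow until it and
every other table mapped to /kvd/hash_single together exhaust those
87040 entries, so the headroom at this snapshot is about
87040 - 62 = 86978 entries (ignoring other consumers and allocation
granularity).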


>fields mlxsw_host4 means IPv4 host entries (neighbor entries). Why is
>the size set at 62? Seems really low.
>
>3. mlxsw_host6 in /kvd/hash_double with a size of 0. Based on the fields
>mlxsw_host6 means IPv6 host entries (neighbor entries). The size of 0 is
>concerning. I guess the switch is not configured to do IPv6?
>
>4. mlxsw_adj in /kvd/linear with a size of 0. Based on the fields, I am
>going to guess it is an FDB entry?
