Message-ID: <CANn89i+4-OP8dvmikt3-=HVou+=z00ijF6JoKDM=krQcGW2p8Q@mail.gmail.com>
Date: Wed, 22 Nov 2023 11:27:22 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, netdev@...r.kernel.org, pabeni@...hat.com,
almasrymina@...gle.com, hawk@...nel.org, ilias.apalodimas@...aro.org,
dsahern@...il.com, dtatulea@...dia.com, willemb@...gle.com
Subject: Re: [PATCH net-next v3 11/13] net: page_pool: expose page pool stats
via netlink
On Wed, Nov 22, 2023 at 4:44 AM Jakub Kicinski <kuba@...nel.org> wrote:
>
> Dump the stats into netlink. More clever approaches,
> like dumping the stats for each CPU individually
> to see where the packets get consumed, can be implemented
> in the future.
>
> A trimmed example from a real (but recently booted) system:
>
> $ ./cli.py --no-schema --spec netlink/specs/netdev.yaml \
> --dump page-pool-stats-get
> [{'info': {'id': 19, 'ifindex': 2},
> 'alloc-empty': 48,
> 'alloc-fast': 3024,
> 'alloc-refill': 0,
> 'alloc-slow': 48,
> 'alloc-slow-high-order': 0,
> 'alloc-waive': 0,
> 'recycle-cache-full': 0,
> 'recycle-cached': 0,
> 'recycle-released-refcnt': 0,
> 'recycle-ring': 0,
> 'recycle-ring-full': 0},
> {'info': {'id': 18, 'ifindex': 2},
> 'alloc-empty': 66,
> 'alloc-fast': 11811,
> 'alloc-refill': 35,
> 'alloc-slow': 66,
> 'alloc-slow-high-order': 0,
> 'alloc-waive': 0,
> 'recycle-cache-full': 1145,
> 'recycle-cached': 6541,
> 'recycle-released-refcnt': 0,
> 'recycle-ring': 1275,
> 'recycle-ring-full': 0},
> {'info': {'id': 17, 'ifindex': 2},
> 'alloc-empty': 73,
> 'alloc-fast': 62099,
> 'alloc-refill': 413,
> ...
>
> Signed-off-by: Jakub Kicinski <kuba@...nel.org>
> ---
> Documentation/netlink/specs/netdev.yaml | 78 ++++++++++++++++++
> Documentation/networking/page_pool.rst | 10 ++-
> include/net/page_pool/helpers.h | 8 +-
> include/uapi/linux/netdev.h | 19 +++++
> net/core/netdev-genl-gen.c | 32 ++++++++
> net/core/netdev-genl-gen.h | 7 ++
> net/core/page_pool.c | 2 +-
> net/core/page_pool_user.c | 103 ++++++++++++++++++++++++
> 8 files changed, 250 insertions(+), 9 deletions(-)
>
> diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
> index 695e0e4e0d8b..77d991738a17 100644
> --- a/Documentation/netlink/specs/netdev.yaml
> +++ b/Documentation/netlink/specs/netdev.yaml
> @@ -137,6 +137,59 @@ name: netdev
> "re-attached", they are just waiting to disappear.
> Attribute is absent if Page Pool has not been detached, and
> can still be used to allocate new memory.
> + -
> + name: page-pool-info
> + subset-of: page-pool
> + attributes:
> + -
> + name: id
> + -
> + name: ifindex
> + -
> + name: page-pool-stats
> + doc: |
> + Page pool statistics, see docs for struct page_pool_stats
> + for information about individual statistics.
> + attributes:
> + -
> + name: info
> + doc: Page pool identifying information.
> + type: nest
> + nested-attributes: page-pool-info
> + -
> + name: alloc-fast
> + type: uint
> + value: 8 # reserve some attr ids in case we need more metadata later
> + -
> + name: alloc-slow
> + type: uint
Same remark as before: all these fields are u64 (see the sketch after the quoted code below).
> + -
> + name: alloc-slow-high-order
> + type: uint
> + -
> + name: alloc-empty
> + type: uint
> + -
> + name: alloc-refill
> + type: uint
> + -
> + name: alloc-waive
> + type: uint
> + -
> + name: recycle-cached
> + type: uint
> + -
> + name: recycle-cache-full
> + type: uint
> + -
> + name: recycle-ring
> + type: uint
> + -
> + name: recycle-ring-full
> + type: uint
> + -
> + name: recycle-released-refcnt
> + type: uint
>
> operations:
> list:
> @@ -210,6 +263,31 @@ name: netdev
> notify: page-pool-get
> mcgrp: page-pool
> config-cond: page-pool
> +
> + if (nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_ALLOC_FAST,
> + stats.alloc_stats.fast) ||
> + nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_ALLOC_SLOW,
> + stats.alloc_stats.slow) ||
> + nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_ALLOC_SLOW_HIGH_ORDER,
> + stats.alloc_stats.slow_high_order) ||
> + nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_ALLOC_EMPTY,
> + stats.alloc_stats.empty) ||
> + nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_ALLOC_REFILL,
> + stats.alloc_stats.refill) ||
> + nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_ALLOC_WAIVE,
> + stats.alloc_stats.waive) ||
> + nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_RECYCLE_CACHED,
> + stats.recycle_stats.cached) ||
> + nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_RECYCLE_CACHE_FULL,
> + stats.recycle_stats.cache_full) ||
> + nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_RECYCLE_RING,
> + stats.recycle_stats.ring) ||
> + nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_RECYCLE_RING_FULL,
> + stats.recycle_stats.ring_full) ||
> + nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_RECYCLE_RELEASED_REFCNT,
> + stats.recycle_stats.released_refcnt))
> + goto err_cancel_msg;
Therefore, should we use nla_put_u64_64bit() instead?
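
For reference, a minimal sketch of the difference in question, paraphrasing
the nla_put_uint() helper from include/net/netlink.h (the exact in-tree body
may differ by kernel version): it picks a 32-bit or 64-bit wire encoding at
runtime depending on the value, whereas nla_put_u64_64bit() always emits an
8-byte, 64-bit-aligned attribute.

/*
 * Sketch, not the literal in-tree code: nla_put_uint() chooses the
 * attribute width at runtime, so a counter below 2^32 goes out as a
 * 4-byte attribute and only grows to 8 bytes once it overflows u32.
 */
static inline int nla_put_uint(struct sk_buff *skb, int attrtype, u64 value)
{
	u64 tmp64 = value;
	u32 tmp32 = value;

	if (tmp64 == tmp32)
		return nla_put_u32(skb, attrtype, tmp32);
	return nla_put(skb, attrtype, sizeof(u64), &tmp64);
}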
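
If the attributes were instead kept as fixed 64-bit on the wire, the fill
code might look like the sketch below. Note that
NETDEV_A_PAGE_POOL_STATS_PAD is hypothetical, it is not part of this patch;
a pad attribute would first have to be reserved in the spec, since
nla_put_u64_64bit() inserts it to keep the u64 payload 8-byte aligned.

	/* Hypothetical sketch: NETDEV_A_PAGE_POOL_STATS_PAD would need to
	 * be added to the family; nla_put_u64_64bit() emits it as padding
	 * so the 8-byte payload stays aligned on 64-bit boundaries.
	 */
	if (nla_put_u64_64bit(rsp, NETDEV_A_PAGE_POOL_STATS_ALLOC_FAST,
			      stats.alloc_stats.fast,
			      NETDEV_A_PAGE_POOL_STATS_PAD) ||
	    nla_put_u64_64bit(rsp, NETDEV_A_PAGE_POOL_STATS_ALLOC_SLOW,
			      stats.alloc_stats.slow,
			      NETDEV_A_PAGE_POOL_STATS_PAD))
		goto err_cancel_msg;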