Message-ID: <CANn89iK2BjePynZsM7pPMNc9jWJY716k_YT=bZ9wKE5aUhuZ-A@mail.gmail.com>
Date: Wed, 22 Nov 2023 11:21:11 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, netdev@...r.kernel.org, pabeni@...hat.com, 
	almasrymina@...gle.com, hawk@...nel.org, ilias.apalodimas@...aro.org, 
	dsahern@...il.com, dtatulea@...dia.com, willemb@...gle.com
Subject: Re: [PATCH net-next v3 10/13] net: page_pool: report when page pool was destroyed

On Wed, Nov 22, 2023 at 4:44 AM Jakub Kicinski <kuba@...nel.org> wrote:
>
> Report when page pool was destroyed. Together with the inflight
> / memory use reporting this can serve as a replacement for the
> warning about leaked page pools we currently print to dmesg.
>
> Example output for a fake leaked page pool using some hacks
> in netdevsim (one "live" pool, and one "leaked" on the same dev):
>
> $ ./cli.py --no-schema --spec netlink/specs/netdev.yaml \
>            --dump page-pool-get
> [{'id': 2, 'ifindex': 3},
>  {'id': 1, 'ifindex': 3, 'destroyed': 133, 'inflight': 1}]
>
> Tested-by: Dragos Tatulea <dtatulea@...dia.com>
> Signed-off-by: Jakub Kicinski <kuba@...nel.org>
> ---
>  Documentation/netlink/specs/netdev.yaml | 13 +++++++++++++
>  include/net/page_pool/types.h           |  1 +
>  include/uapi/linux/netdev.h             |  1 +
>  net/core/page_pool.c                    |  1 +
>  net/core/page_pool_priv.h               |  1 +
>  net/core/page_pool_user.c               | 12 ++++++++++++
>  6 files changed, 29 insertions(+)
>
> diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
> index 85209e19dca9..695e0e4e0d8b 100644
> --- a/Documentation/netlink/specs/netdev.yaml
> +++ b/Documentation/netlink/specs/netdev.yaml
> @@ -125,6 +125,18 @@ name: netdev
>          type: uint
>          doc: |
>            Amount of memory held by inflight pages.
> +      -
> +        name: detach-time
> +        type: uint
> +        doc: |
> +          Seconds in CLOCK_BOOTTIME of when Page Pool was detached by
> +          the driver. Once detached Page Pool can no longer be used to
> +          allocate memory.
> +          Page Pools wait for all the memory allocated from them to be freed
> +          before truly disappearing. "Detached" Page Pools cannot be
> +          "re-attached", they are just waiting to disappear.
> +          Attribute is absent if Page Pool has not been detached, and
> +          can still be used to allocate new memory.
>
>  operations:
>    list:
> @@ -176,6 +188,7 @@ name: netdev
>              - napi-id
>              - inflight
>              - inflight-mem
> +            - detach-time
>        dump:
>          reply: *pp-reply
>        config-cond: page-pool
> diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
> index 7e47d7bb2c1e..ac286ea8ce2d 100644
> --- a/include/net/page_pool/types.h
> +++ b/include/net/page_pool/types.h
> @@ -193,6 +193,7 @@ struct page_pool {
>         /* User-facing fields, protected by page_pools_lock */
>         struct {
>                 struct hlist_node list;
> +               u64 detach_time;
>                 u32 napi_id;
>                 u32 id;
>         } user;
> diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
> index 26ae5bdd3187..756410274120 100644
> --- a/include/uapi/linux/netdev.h
> +++ b/include/uapi/linux/netdev.h
> @@ -70,6 +70,7 @@ enum {
>         NETDEV_A_PAGE_POOL_NAPI_ID,
>         NETDEV_A_PAGE_POOL_INFLIGHT,
>         NETDEV_A_PAGE_POOL_INFLIGHT_MEM,
> +       NETDEV_A_PAGE_POOL_DETACH_TIME,
>
>         __NETDEV_A_PAGE_POOL_MAX,
>         NETDEV_A_PAGE_POOL_MAX = (__NETDEV_A_PAGE_POOL_MAX - 1)
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 566390759294..a821fb5fe054 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -953,6 +953,7 @@ void page_pool_destroy(struct page_pool *pool)
>         if (!page_pool_release(pool))
>                 return;
>
> +       page_pool_detached(pool);
>         pool->defer_start = jiffies;
>         pool->defer_warn  = jiffies + DEFER_WARN_INTERVAL;
>
> diff --git a/net/core/page_pool_priv.h b/net/core/page_pool_priv.h
> index 72fb21ea1ddc..90665d40f1eb 100644
> --- a/net/core/page_pool_priv.h
> +++ b/net/core/page_pool_priv.h
> @@ -6,6 +6,7 @@
>  s32 page_pool_inflight(const struct page_pool *pool, bool strict);
>
>  int page_pool_list(struct page_pool *pool);
> +void page_pool_detached(struct page_pool *pool);
>  void page_pool_unlist(struct page_pool *pool);
>
>  #endif
> diff --git a/net/core/page_pool_user.c b/net/core/page_pool_user.c
> index d889b347f8f4..f28ad2179f53 100644
> --- a/net/core/page_pool_user.c
> +++ b/net/core/page_pool_user.c
> @@ -134,6 +134,10 @@ page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
>             nla_put_uint(rsp, NETDEV_A_PAGE_POOL_INFLIGHT_MEM,
>                          inflight * refsz))
>                 goto err_cancel;
> +       if (pool->user.detach_time &&
> +           nla_put_uint(rsp, NETDEV_A_PAGE_POOL_DETACH_TIME,

You need nla_put_u64_64bit() here
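
For context, `pool->user.detach_time` is a `u64`, and 64-bit netlink payloads must go through `nla_put_u64_64bit()` so they can be padded to 8-byte alignment. A sketch of what the corrected put might look like, assuming a pad attribute is added to the spec (the `NETDEV_A_PAGE_POOL_PAD` name here is illustrative, not from the patch):

```c
	/* u64 attributes need a pad attr for 8-byte alignment on
	 * architectures that require it; pad attr name is illustrative.
	 */
	if (pool->user.detach_time &&
	    nla_put_u64_64bit(rsp, NETDEV_A_PAGE_POOL_DETACH_TIME,
			      pool->user.detach_time,
			      NETDEV_A_PAGE_POOL_PAD))
		goto err_cancel;
```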

Reviewed-by: Eric Dumazet <edumazet@...gle.com>
