Message-ID: <CANpmjNNoJQoWzODAbc4naq--b+LOfK76TCbx9MpL8+4x9=LTiw@mail.gmail.com>
Date: Tue, 24 Oct 2023 15:13:54 +0200
From: Marco Elver <elver@...gle.com>
To: andrey.konovalov@...ux.dev
Cc: Alexander Potapenko <glider@...gle.com>,
Andrey Konovalov <andreyknvl@...il.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>, kasan-dev@...glegroups.com,
Evgenii Stepanov <eugenis@...gle.com>,
Oscar Salvador <osalvador@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Andrey Konovalov <andreyknvl@...gle.com>
Subject: Re: [PATCH v3 00/19] stackdepot: allow evicting stack traces
On Mon, 23 Oct 2023 at 18:22, <andrey.konovalov@...ux.dev> wrote:
[...]
> ---
>
> Changes v2->v3:
> - Fix null-ptr-deref by using the proper number of entries for
> initializing the stack table when alloc_large_system_hash()
> auto-calculates the number (see patch #12).
> - Keep STACKDEPOT/STACKDEPOT_ALWAYS_INIT Kconfig options not configurable
> by users.
> - Use lockdep_assert_held_read annotation in depot_fetch_stack.
> - WARN_ON invalid flags in stack_depot_save_flags.
> - Move the "../slab.h" include in mm/kasan/report_tags.c to the right patch.
> - Various comment fixes.
>
> Changes v1->v2:
> - Rework API to stack_depot_save_flags(STACK_DEPOT_FLAG_GET) +
> stack_depot_put.
> - Add CONFIG_STACKDEPOT_MAX_FRAMES Kconfig option.
> - Switch stack depot to using list_head's.
> - Assorted minor changes, see the commit message for each patch.
>
> Andrey Konovalov (19):
> lib/stackdepot: check disabled flag when fetching
> lib/stackdepot: simplify __stack_depot_save
> lib/stackdepot: drop valid bit from handles
> lib/stackdepot: add depot_fetch_stack helper
> lib/stackdepot: use fixed-sized slots for stack records
1. I know fixed-sized slots are needed for eviction to work, but have
you evaluated whether this causes excessive memory waste now? Or is it
negligible?
If it turns out to be a problem, one way out would be to partition the
freelist into stack size classes, e.g. one each for stack traces of up
to 8, 16, 32, and 64 frames (roughly sketched below).
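
Something along these lines is what I have in mind; note that the
struct stack_record layout and the depot_freelists[] /
depot_size_class() / depot_pop_free() names are made up for
illustration only, not what the series implements:

        #include <linux/list.h>
        #include <linux/types.h>

        /* Minimal stand-in for the series' struct stack_record; illustrative only. */
        struct stack_record {
                struct list_head list;
                unsigned int size;              /* number of frames actually stored */
                unsigned long entries[];
        };

        #define DEPOT_NR_SIZE_CLASSES 4

        /* One freelist per class of stack traces of up to 8, 16, 32, 64 frames. */
        static const unsigned int depot_class_frames[DEPOT_NR_SIZE_CLASSES] = { 8, 16, 32, 64 };
        static struct list_head depot_freelists[DEPOT_NR_SIZE_CLASSES];

        static void depot_init_freelists(void)
        {
                unsigned int i;

                for (i = 0; i < DEPOT_NR_SIZE_CLASSES; i++)
                        INIT_LIST_HEAD(&depot_freelists[i]);
        }

        static unsigned int depot_size_class(unsigned int nr_entries)
        {
                unsigned int i;

                for (i = 0; i < DEPOT_NR_SIZE_CLASSES - 1; i++) {
                        if (nr_entries <= depot_class_frames[i])
                                break;
                }
                /* Larger traces get truncated into the biggest class. */
                return i;
        }

        /* Allocation pops a record from the matching class's freelist. */
        static struct stack_record *depot_pop_free(unsigned int nr_entries)
        {
                struct list_head *head = &depot_freelists[depot_size_class(nr_entries)];

                if (list_empty(head))
                        return NULL;
                return list_first_entry(head, struct stack_record, list);
        }

The point being that a small trace would then only ever consume a small
slot, instead of every record occupying the maximum fixed size.
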
> lib/stackdepot: fix and clean-up atomic annotations
> lib/stackdepot: rework helpers for depot_alloc_stack
> lib/stackdepot: rename next_pool_required to new_pool_required
> lib/stackdepot: store next pool pointer in new_pool
> lib/stackdepot: store free stack records in a freelist
> lib/stackdepot: use read/write lock
2. I still think switching to the percpu_rwsem right away is the right
thing, and not actually a downside. I mentioned this before, but you
promised a follow-up patch, so I trust that this will happen. ;-)
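
For reference, the switch could look roughly like the sketch below;
pool_rwsem and the example helpers are placeholders, not the actual
stackdepot code:

        #include <linux/list.h>
        #include <linux/percpu-rwsem.h>

        /* Placeholder lock and freelist, standing in for the stackdepot state. */
        DEFINE_STATIC_PERCPU_RWSEM(pool_rwsem);
        static LIST_HEAD(free_stacks);

        /*
         * Fetch path (the hot one): readers take the per-CPU read lock,
         * which is nearly free as long as no writer is around.
         */
        static void depot_fetch_example(void)
        {
                percpu_down_read(&pool_rwsem);
                /* ... look up and copy out the stack record here ... */
                percpu_up_read(&pool_rwsem);
        }

        /*
         * Allocation/eviction path (the rare one): writers take the
         * expensive write lock, which synchronizes against all readers.
         */
        static void depot_evict_example(struct list_head *record)
        {
                percpu_down_write(&pool_rwsem);
                list_add(record, &free_stacks);
                percpu_up_write(&pool_rwsem);
        }

The upside is that the common fetch path pays almost nothing, while the
rare allocation/eviction paths absorb the cost of write-side
synchronization.
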
> lib/stackdepot: use list_head for stack record links
> kmsan: use stack_depot_save instead of __stack_depot_save
> lib/stackdepot, kasan: add flags to __stack_depot_save and rename
> lib/stackdepot: add refcount for records
> lib/stackdepot: allow users to evict stack traces
> kasan: remove atomic accesses to stack ring entries
> kasan: check object_size in kasan_complete_mode_report_info
> kasan: use stack_depot_put for tag-based modes
>
> include/linux/stackdepot.h | 59 ++++--
> lib/Kconfig | 10 +
> lib/stackdepot.c | 418 ++++++++++++++++++++++++-------------
> mm/kasan/common.c | 7 +-
> mm/kasan/generic.c | 9 +-
> mm/kasan/kasan.h | 2 +-
> mm/kasan/report_tags.c | 27 +--
> mm/kasan/tags.c | 24 ++-
> mm/kmsan/core.c | 7 +-
> 9 files changed, 365 insertions(+), 198 deletions(-)
Acked-by: Marco Elver <elver@...gle.com>
The series looks good in its current state. However, see my 2
higher-level comments above.