Message-ID: <CAG_fn=VBAN+JPtqRRacd69DOK9rZ-RMpzn+QDJTsZgQ68sOS=Q@mail.gmail.com>
Date: Mon, 9 Oct 2023 11:45:19 +0200
From: Alexander Potapenko <glider@...gle.com>
To: andrey.konovalov@...ux.dev
Cc: Marco Elver <elver@...gle.com>,
Andrey Konovalov <andreyknvl@...il.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>, kasan-dev@...glegroups.com,
Evgenii Stepanov <eugenis@...gle.com>,
Oscar Salvador <osalvador@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Andrey Konovalov <andreyknvl@...gle.com>
Subject: Re: [PATCH v2 11/19] lib/stackdepot: use read/write lock
On Wed, Sep 13, 2023 at 7:16 PM <andrey.konovalov@...ux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@...gle.com>
>
> Currently, stack depot uses the following locking scheme:
>
> 1. Lock-free accesses when looking up a stack record, which allows
> multiple users to look up records in parallel;
> 2. Spinlock for protecting the stack depot pools and the hash table
> when adding a new record.
>
> To implement the eviction of stack traces from the stack depot, the
> lock-free approach will no longer work, as we will also need to be
> able to remove records from the hash table.
>
> Convert the spinlock into a read/write lock, and drop the atomic accesses,
> as they are no longer required.
>
> Looking up stack traces is now protected by the read lock, and adding new
> records by the write lock. One of the following patches will add a new
> function for evicting stack records, which will also be protected by the
> write lock.
>
> With this change, multiple users can still look up records in parallel.
>
> This is a preparatory patch for implementing the eviction of stack records
> from the stack depot.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@...gle.com>
Reviewed-by: Alexander Potapenko <glider@...gle.com>
(but see the comment below)
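
For readers skimming the thread, the scheme described above boils down to
roughly the following. This is a minimal sketch only, not the actual
lib/stackdepot.c code: pool_rwlock is named as in the patch, and
find_stack_sketch()/link_stack_sketch() are hypothetical stand-ins for the
hash-table code.

/*
 * Minimal sketch only, not the actual lib/stackdepot.c code.
 * pool_rwlock is named as in the patch; find_stack_sketch() and
 * link_stack_sketch() are hypothetical stand-ins for the hash-table code.
 */
#include <linux/spinlock.h>
#include <linux/types.h>

struct stack_record;                                 /* opaque here */
struct stack_record *find_stack_sketch(u32 hash);    /* hypothetical */
void link_stack_sketch(struct stack_record *stack);  /* hypothetical */

static DEFINE_RWLOCK(pool_rwlock);

/* Lookup path: read lock, so multiple readers can still run in parallel. */
static struct stack_record *depot_lookup_sketch(u32 hash)
{
	struct stack_record *stack;

	read_lock(&pool_rwlock);
	stack = find_stack_sketch(hash);
	read_unlock(&pool_rwlock);

	return stack;
}

/* Insert path (and, in a later patch, eviction): write lock. */
static void depot_insert_sketch(struct stack_record *stack)
{
	write_lock(&pool_rwlock);
	link_stack_sketch(stack);
	write_unlock(&pool_rwlock);
}

Since readers only take the lock shared, lookups from multiple contexts can
still proceed in parallel; only insertion and the upcoming eviction need
exclusive access.
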
> static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
> {
> union handle_parts parts = { .handle = handle };
> - /*
> - * READ_ONCE pairs with potential concurrent write in
> - * depot_init_pool.
> - */
> - int pools_num_cached = READ_ONCE(pools_num);
> void *pool;
> size_t offset = parts.offset << DEPOT_STACK_ALIGN;
> struct stack_record *stack;
>
> - if (parts.pool_index > pools_num_cached) {
> + lockdep_assert_held(&pool_rwlock);
Shouldn't it be lockdep_assert_held_read()?
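
To spell out what I mean, roughly (sketch only, not a proposed hunk; these
are the generic lockdep helpers from <linux/lockdep.h>, with pool_rwlock
named as in the patch):

/* Sketch only, not a proposed hunk for lib/stackdepot.c. */
#include <linux/lockdep.h>
#include <linux/spinlock.h>

static DEFINE_RWLOCK(pool_rwlock);

static void lookup_path_sketch(void)
{
	read_lock(&pool_rwlock);
	/* Intended to assert the read-side context of the lookup path. */
	lockdep_assert_held_read(&pool_rwlock);
	read_unlock(&pool_rwlock);
}

static void modify_path_sketch(void)
{
	write_lock(&pool_rwlock);
	/* Asserts exclusive (write) ownership. */
	lockdep_assert_held_write(&pool_rwlock);
	/* lockdep_assert_held() only checks that the lock is held in some mode. */
	lockdep_assert_held(&pool_rwlock);
	write_unlock(&pool_rwlock);
}

The plain lockdep_assert_held() only requires that the lock is held in some
mode, so the _read() variant would document the lookup path's expectations a
bit more precisely.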