Message-ID: <YqjCGWmM2cGG1OOF@arm.com>
Date: Tue, 14 Jun 2022 18:15:05 +0100
From: Catalin Marinas <catalin.marinas@....com>
To: Waiman Long <longman@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] mm/kmemleak: Prevent soft lockup in first object
iteration loop of kmemleak_scan()
On Sun, Jun 12, 2022 at 02:33:01PM -0400, Waiman Long wrote:
> @@ -1437,10 +1440,25 @@ static void kmemleak_scan(void)
> #endif
> /* reset the reference count (whiten the object) */
> object->count = 0;
> - if (color_gray(object) && get_object(object))
> + if (color_gray(object) && get_object(object)) {
> list_add_tail(&object->gray_list, &gray_list);
> + gray_list_cnt++;
> + object_pinned = true;
> + }
>
> raw_spin_unlock_irq(&object->lock);
> +
> + /*
> + * With object pinned by a positive reference count, it
> + * won't go away and we can safely release the RCU read
> + * lock and do a cond_resched() to avoid soft lockup every
> + * 64k objects.
> + */
> + if (object_pinned && !(gray_list_cnt & 0xffff)) {
> + rcu_read_unlock();
> + cond_resched();
> + rcu_read_lock();
> + }
I'm not sure this gains much. The cond_resched() here only fires once
per 64k gray objects, and there should be very few gray objects
initially (those passed to kmemleak_not_leak() for example); the
majority should be white objects, for which this loop still never
reschedules.
If we drop the fine-grained object->lock, we could instead take
kmemleak_lock outside the loop and do a cond_resched_lock(&kmemleak_lock)
within it. With the big lock held around the whole traversal, I think
we can get away with no rcu_read_lock() at all for the list walk.
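
Roughly, something like the untested sketch below (the helper name is
made up for illustration; note that in mainline kmemleak_lock is a
raw_spinlock_t taken with IRQs disabled, so the real patch would need
to rework that before cond_resched_lock() can be used):

	/* Whiten all objects under the big lock: no RCU, no object->lock. */
	static void kmemleak_whiten_objects(void)
	{
		struct kmemleak_object *object;

		spin_lock(&kmemleak_lock);
		list_for_each_entry(object, &object_list, object_list) {
			/* reset the reference count (whiten the object) */
			object->count = 0;
			if (color_gray(object) && get_object(object))
				list_add_tail(&object->gray_list, &gray_list);
			/*
			 * Drop and re-take kmemleak_lock when a
			 * reschedule is due. Whether the current object
			 * can go away in that window is the one detail
			 * left to check.
			 */
			cond_resched_lock(&kmemleak_lock);
		}
		spin_unlock(&kmemleak_lock);
	}
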
The reason I added RCU in the first kmemleak incarnation was to defer
kmemleak_object freeing, as freeing the objects directly caused a
re-entrant call into the slab allocator. I later went for fine-grained
locking and RCU list traversal, but I may have overdone it ;).
--
Catalin