Message-ID: <87wqkovvzs.fsf@tassilo.jf.intel.com>
Date: Mon, 04 Nov 2013 10:53:59 -0800
From: Andi Kleen <andi@...stfloor.org>
To: Anatol Pomozov <anatol.pomozov@...il.com>
Cc: LKML <linux-kernel@...r.kernel.org>
Subject: Re: Solving M produces N consumers scalability problem
Anatol Pomozov <anatol.pomozov@...il.com> writes:
>
> One idea is not to use the spin_lock. It is the 'fair spin_lock' that
> has scalability problems
> http://pdos.csail.mit.edu/papers/linux:lock.pdf Maybe lockless
> datastructures can help here?
The standard spin lock has already been improved.
But better locks only buy you a small advantage; they don't
solve the real scaling problem.
>
> Another idea is to avoid global data structures, but I have a few
> questions here. Let's say we want to use per-CPU lists. But the
> problem is that producers/consumers are not distributed across all
> CPUs. Some CPU might have too many producers, some other might not
> have consumers at all. So we need some kind of migration from hot CPU
> to the cold one. What is the best way to achieve it? Are there any
> examples how to do this? Any other ideas?
Per-CPU is the standard approach, but it is usually overkill. It also
requires complex code to drain the queues, etc.
Some older patches use per-node data instead, but that works very poorly
these days (nodes have grown far too big).
One way I like is to simply use a global (allocated) array of queues,
sized based on the total number of possible CPUs but significantly
smaller than that, and use the CPU number as a hash into the array.
-Andi
--
ak@...ux.intel.com -- Speaking for myself only
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/