Message-Id: <1217512907.8157.91.camel@twins>
Date: Thu, 31 Jul 2008 16:01:47 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: John Kacur <jkacur@...il.com>
Cc: Sebastien Dugue <sebastien.dugue@...l.net>,
Chirag Jog <chirag@...ux.vnet.ibm.com>,
Jürgen Mell <j.mell@...nline.de>,
Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>,
rt-users <linux-rt-users@...r.kernel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Clark Williams <williams@...hat.com>,
Josh Triplett <josht@...ux.vnet.ibm.com>,
"Timothy R. Chavez" <tim.chavez@...ux.vnet.ibm.com>
Subject: Re: [PATCH] Fix Bug messages
On Thu, 2008-07-31 at 15:49 +0200, John Kacur wrote:
> Signed-off-by: John Kacur <jkacur@...il.com>
> Index: linux-2.6.26-rt1/net/core/sock.c
> ===================================================================
> --- linux-2.6.26-rt1.orig/net/core/sock.c
> +++ linux-2.6.26-rt1/net/core/sock.c
> @@ -1986,11 +1986,12 @@ static __init int net_inuse_init(void)
>
> core_initcall(net_inuse_init);
> #else
> -static DEFINE_PER_CPU(struct prot_inuse, prot_inuse);
> +static DEFINE_PER_CPU_LOCKED(struct prot_inuse, prot_inuse);
>
> void sock_prot_inuse_add(struct net *net, struct proto *prot, int val)
> {
> - __get_cpu_var(prot_inuse).val[prot->inuse_idx] += val;
> + int cpu = 0;
> + __get_cpu_var_locked(prot_inuse, cpu).val[prot->inuse_idx] += val;
> }
> EXPORT_SYMBOL_GPL(sock_prot_inuse_add);
>
> @@ -2000,7 +2001,7 @@ int sock_prot_inuse_get(struct net *net,
> int res = 0;
>
> for_each_possible_cpu(cpu)
> - res += per_cpu(prot_inuse, cpu).val[idx];
> + res += per_cpu_var_locked(prot_inuse, cpu).val[idx];
>
> return res >= 0 ? res : 0;
> }
This doesn't look good. You declare the variable as PER_CPU_LOCKED, but
then never use the extra lock to synchronize the data.
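Using the lock consistently would look something like the below on the
write side (just a sketch; I'm assuming the -rt
get_cpu_var_locked()/put_cpu_var_locked() helpers here):

void sock_prot_inuse_add(struct net *net, struct proto *prot, int val)
{
	int cpu;

	/*
	 * Samples the current cpu and takes that cpu's lock; migrating
	 * away afterwards is harmless, the lock protects the slot.
	 */
	get_cpu_var_locked(prot_inuse, &cpu).val[prot->inuse_idx] += val;
	put_cpu_var_locked(prot_inuse, cpu);
}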
Given that sock_prot_inuse_get() is a racy read anyway, the 'right' fix
would be to do something like:
diff --git a/net/core/sock.c b/net/core/sock.c
index 91f8bbc..5a8ace4 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1941,8 +1941,9 @@ static DECLARE_BITMAP(proto_inuse_idx, PROTO_INUSE_NR);
#ifdef CONFIG_NET_NS
void sock_prot_inuse_add(struct net *net, struct proto *prot, int val)
{
- int cpu = smp_processor_id();
+ int cpu = get_cpu();
per_cpu_ptr(net->core.inuse, cpu)->val[prot->inuse_idx] += val;
+ put_cpu();
}
EXPORT_SYMBOL_GPL(sock_prot_inuse_add);
@@ -1988,7 +1989,9 @@ static DEFINE_PER_CPU(struct prot_inuse, prot_inuse);
void sock_prot_inuse_add(struct net *net, struct proto *prot, int val)
{
- __get_cpu_var(prot_inuse).val[prot->inuse_idx] += val;
+ int cpu = get_cpu();
+ per_cpu(prot_inuse, cpu).val[prot->inuse_idx] += val;
+ put_cpu();
}
EXPORT_SYMBOL_GPL(sock_prot_inuse_add);
This disables preemption, but only for a very short time, so it doesn't
hurt preemption latency.
The alternative is to take a lock, do the inc, and drop the lock again,
which is much more expensive.
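For completeness, that locked variant would be the
get_cpu_var_locked() sketch above on the write side, plus a read side
that takes every cpu's lock, something like (again a sketch;
per_cpu_lock() is my guess at the -rt accessor for the per-cpu lock):

int sock_prot_inuse_get(struct net *net, struct proto *prot)
{
	int cpu, idx = prot->inuse_idx;
	int res = 0;

	for_each_possible_cpu(cpu) {
		/* serialize against the locked writers on each cpu */
		spin_lock(&per_cpu_lock(prot_inuse, cpu));
		res += per_cpu_var_locked(prot_inuse, cpu).val[idx];
		spin_unlock(&per_cpu_lock(prot_inuse, cpu));
	}

	return res >= 0 ? res : 0;
}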