Message-ID: <m139biesze.fsf@fess.ebiederm.org>
Date: Fri, 13 Jan 2012 21:46:45 -0800
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Ben Greear <greearb@...delatech.com>
Cc: Eric Dumazet <eric.dumazet@...il.com>,
Francesco Ruggeri <fruggeri@...stanetworks.com>,
netdev@...r.kernel.org, Stephen Hemminger <shemminger@...tta.com>
Subject: Re: Race condition in ipv6 code
Ben Greear <greearb@...delatech.com> writes:
> On 01/12/2012 11:40 PM, Eric W. Biederman wrote:
>
>> So I really think the best solution to avoid the locking craziness is to
>> have a wrapper that gets the value from userspace and calls
>> schedule_work to get another thread to actually process the change. I
>> don't see any problems with writing a helper function for that. The
>> only downside with using schedule_work is that we return to userspace
>> before the change has been fully installed in the kernel. I don't
>> expect that would be a problem but stranger things have happened.
>
> That sounds a bit risky to me. If something sets a value, and then
> queries it, it should always show the proper result for the previous
> calls.
Which is easy to do if you keep two values: one integer for the
userspace control and another integer for the internal kernel state.
The problem is that we have exactly one integer currently.
> If the queries also went through the same sched-work queue
> then maybe it would be OK.
We can't wait for anything that has to take the rtnl_lock. That would
be the same as taking the rtnl_lock from a locking perspective.
I expect I would use something like:
struct rtnl_protected_knob {
        struct work_struct work;
        int userspace_value;
        int *kernel_var;
        void (*func)(int new_value, int *kernel_var);
};
userspace_value would be what userspace sees, and kernel_var would be a
pointer to the value that we manipulate in the kernel.
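Roughly, it could look something like the sketch below (untested, and
the names rtnl_knob_work/rtnl_knob_set are just placeholders): the
sysctl path only stores the new value and schedules the work, and the
work handler applies it to the kernel state under the rtnl_lock.

static void rtnl_knob_work(struct work_struct *work)
{
        struct rtnl_protected_knob *knob =
                container_of(work, struct rtnl_protected_knob, work);

        /* Apply the userspace-visible value to the kernel state
         * while holding the rtnl_lock. */
        rtnl_lock();
        knob->func(knob->userspace_value, knob->kernel_var);
        rtnl_unlock();
}

/* Called from the sysctl handler once the new value has been parsed. */
static void rtnl_knob_set(struct rtnl_protected_knob *knob, int new_value)
{
        knob->userspace_value = new_value;
        schedule_work(&knob->work);
}

The work member would of course need to be set up with
INIT_WORK(&knob->work, rtnl_knob_work) when the knob is registered.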
Eric