Message-ID: <20070816053741.GA32442@gondor.apana.org.au>
Date: Thu, 16 Aug 2007 13:37:41 +0800
From: Herbert Xu <herbert@...dor.apana.org.au>
To: Paul Mackerras <paulus@...ba.org>
Cc: Christoph Lameter <clameter@....com>,
Satyam Sharma <satyam@...radead.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Stefan Richter <stefanr@...6.in-berlin.de>,
Chris Snook <csnook@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-arch@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
netdev@...r.kernel.org, Andrew Morton <akpm@...ux-foundation.org>,
ak@...e.de, heiko.carstens@...ibm.com, davem@...emloft.net,
schwidefsky@...ibm.com, wensong@...ux-vs.org, horms@...ge.net.au,
wjiang@...ilience.com, cfriesen@...tel.com, zlynx@....org,
rpjday@...dspring.com, jesper.juhl@...il.com,
segher@...nel.crashing.org
Subject: Re: [PATCH 0/24] make atomic_read() behave consistently across all architectures
On Thu, Aug 16, 2007 at 02:34:25PM +1000, Paul Mackerras wrote:
>
> I'm talking about this situation:
>
> CPU 0 comes into __sk_stream_mem_reclaim, reads memory_allocated, but
> then before it can do the store to *memory_pressure, CPUs 1-1023 all
> go through sk_stream_mem_schedule, collectively increase
> memory_allocated to more than sysctl_mem[2] and set *memory_pressure.
> Finally CPU 0 gets to do its store and it sets *memory_pressure back
> to 0, but by this stage memory_allocated is way larger than
> sysctl_mem[2].
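To make the interleaving concrete, here is a simplified userspace
sketch of the pattern being described.  The names memory_allocated,
memory_pressure and sysctl_mem just mirror the fields under
discussion; this is not the actual net/core/stream.c code, and the
thresholds are made up:

#include <stdatomic.h>

static atomic_long memory_allocated;
static int memory_pressure;
static const long sysctl_mem[3] = { 100, 200, 300 };

/* CPU 0: reclaim path -- read the counter, then clear the flag. */
static void reclaim_sketch(void)
{
	if (atomic_load(&memory_allocated) < sysctl_mem[0]) {
		/*
		 * Window: other CPUs can push memory_allocated past
		 * sysctl_mem[2] and set memory_pressure right here.
		 */
		memory_pressure = 0;	/* stale clear */
	}
}

/* CPUs 1..N: allocation path -- bump the counter, set the flag. */
static void schedule_sketch(long amt)
{
	if (atomic_fetch_add(&memory_allocated, amt) + amt > sysctl_mem[2])
		memory_pressure = 1;
}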
It doesn't matter.  The memory pressure flag is an *advisory*
flag.  If we get it wrong, the worst that can happen is that
we waste some time doing work that'll be thrown away.
Please look at the places where it's used before jumping to
conclusions.
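For illustration, a typical consumer of an advisory flag looks
something like the hypothetical snippet below (continuing the sketch
above, not a quote of the real users):

/*
 * A stale flag only biases a heuristic.  Wrongly 0: we skip trimming
 * until the next allocation trips the threshold again.  Wrongly 1: we
 * trim a bit more eagerly than strictly necessary.  Neither outcome
 * corrupts any state.
 */
static int should_trim_queues(void)
{
	return memory_pressure;
}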
> Now, maybe it's the case that it doesn't really matter whether
> *->memory_pressure is 0 or 1. But if so, why bother computing it at
> all?
As long as we get it right most of the time (and I think you
would agree that we do get it right most of the time), the
flag has achieved its purpose.
> People seem to think that using atomic_t means they don't need to use
> a spinlock. That's fine if there is only one variable involved, but
> as soon as there's more than one, there's the possibility of a race,
> whether or not you use atomic_t, and whether or not atomic_read has
> "volatile" behaviour.
In any case, this actually illustrates why the addition of
volatile is completely pointless.  Even if this code were
broken, which it definitely is not, having the volatile
there wouldn't have helped at all.
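Concretely: a volatile-qualified read only forces the compiler to
reload the value; it does nothing about other CPUs changing it between
the load and the store, so the window from the first sketch stays open
(same made-up names as above):

static long memory_allocated_plain;
static int memory_pressure_plain;

static void reclaim_volatile_sketch(void)
{
	/* Fresh load, but the race window below is unchanged. */
	if (*(volatile long *)&memory_allocated_plain < sysctl_mem[0])
		memory_pressure_plain = 0;
}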
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt