Message-ID: <p73velxju18.fsf@verdi.suse.de>
Date: 03 Nov 2006 00:41:39 +0100
From: Andi Kleen <ak@...e.de>
To: Mikulas Patocka <mikulas@...ax.karlin.mff.cuni.cz>
Cc: linux-kernel@...r.kernel.org
Subject: Re: New filesystem for Linux
Mikulas Patocka <mikulas@...ax.karlin.mff.cuni.cz> writes:
> new method to keep data consistent in case of crashes (instead
> of journaling),
What is that method?
> * There is a rw semaphore that is locked for read for nearly all
Depending on the length of the critical section, rw locks are often
not faster than non-rw locks, because the read case still has to bounce
the lock's cache line around anyway, and the rw variants are actually
a little more expensive.
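
To illustrate the point (a minimal user-space sketch, not the kernel's
rwsem code; all names are made up): even the read path of a
reader/writer lock has to do an atomic update on a shared word, so
every "read-only" acquisition still pulls that cache line over to the
acquiring CPU in exclusive state, just like a plain lock would.

	/* Toy reader/writer lock sketch (C11 atomics, user space).
	 * Initialize count to 0 before use. */
	#include <stdatomic.h>
	#include <sched.h>

	struct toy_rwlock {
		/* >= 0: number of readers, -1: writer holds it */
		atomic_int count;
	};

	static void toy_read_lock(struct toy_rwlock *l)
	{
		int c;

		for (;;) {
			c = atomic_load_explicit(&l->count,
						 memory_order_relaxed);
			if (c >= 0 &&
			    atomic_compare_exchange_weak_explicit(
					&l->count, &c, c + 1,
					memory_order_acquire,
					memory_order_relaxed))
				return;	/* this CAS is the cache-line bounce */
			sched_yield();
		}
	}

	static void toy_read_unlock(struct toy_rwlock *l)
	{
		/* another atomic RMW on the same shared line */
		atomic_fetch_sub_explicit(&l->count, 1, memory_order_release);
	}

So with many readers on short critical sections the shared counter is
written just as often as a normal lock's word would be.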
> * This leads to another observation --- on i386 locking a semaphore is
> 2 instructions, on x86_64 it is a call to two nested functions. Has it
The second call should be a tail call, i.e. just a jump.
The first call isn't needed on a non-debug kernel, but doing the
two unconditional jumps should be basically free on a modern OOO CPU.

The actual implementation on x86-64 is spinlock based, versus the
atomic based one on i386. This was because at some point nobody could
benchmark a difference between the two, and the spinlock based version
is much easier to verify and to understand. If you can demonstrate a
difference that could be reevaluated.
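
Roughly, the spinlock-based scheme looks like this (a user-space
sketch only, loosely modeled on the idea, not the actual
lib/rwsem-spinlock.c code; names are made up and the sleeping slow
path is omitted): the counter is a plain int protected by a spinlock,
so the fast path is lock, bump, unlock.

	#include <pthread.h>

	struct sketch_rwsem {
		pthread_spinlock_t wait_lock;	/* protects activity */
		int activity;			/* >0: readers, -1: writer */
	};

	static void sketch_init(struct sketch_rwsem *sem)
	{
		pthread_spin_init(&sem->wait_lock, PTHREAD_PROCESS_PRIVATE);
		sem->activity = 0;
	}

	/* Uncontended read-acquire fast path: lock, bump, unlock.
	 * A real rwsem would queue and sleep when a writer is active. */
	static int sketch_down_read_trylock(struct sketch_rwsem *sem)
	{
		int got = 0;

		pthread_spin_lock(&sem->wait_lock);
		if (sem->activity >= 0) {
			sem->activity++;
			got = 1;
		}
		pthread_spin_unlock(&sem->wait_lock);
		return got;
	}

	static void sketch_up_read(struct sketch_rwsem *sem)
	{
		pthread_spin_lock(&sem->wait_lock);
		sem->activity--;
		pthread_spin_unlock(&sem->wait_lock);
	}

That trades one atomic operation for two, but the state transitions
are trivial to audit, which was the point.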
> some reason or was it just implementator's laziness? Given the fact
> that locked instruction takes 16 ticks on Opteron (and can overlap
> about 2 ticks with other instructions), it would make sense to have
> optimized semaphores too.
Lately we're going more for saved icache and better use of the branch
predictors (which are happier with fewer branch locations to cache).
I would expect the call/ret to be executed mostly in parallel with the
other code anyway.
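
If you want numbers rather than hand-waving, a crude x86 microbenchmark
along these lines would show the relative cost of a locked RMW versus a
plain call/ret pair (a sketch only, all names made up; the cycle counts
are merely indicative since the CPU overlaps both with surrounding work):

	#include <stdio.h>
	#include <stdint.h>

	static inline uint64_t rdtsc(void)
	{
		uint32_t lo, hi;
		__asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
		return ((uint64_t)hi << 32) | lo;
	}

	static int counter;

	static void __attribute__((noinline)) empty_call(void)
	{
		__asm__ __volatile__("");	/* keep the call from being elided */
	}

	int main(void)
	{
		enum { N = 1000000 };
		uint64_t t0, t1, t2;
		int i;

		t0 = rdtsc();
		for (i = 0; i < N; i++)		/* locked read-modify-write */
			__atomic_fetch_add(&counter, 1, __ATOMIC_SEQ_CST);
		t1 = rdtsc();
		for (i = 0; i < N; i++)		/* call + ret, no lock */
			empty_call();
		t2 = rdtsc();

		printf("locked add: %.1f cycles/op, call/ret: %.1f cycles/op\n",
		       (double)(t1 - t0) / N, (double)(t2 - t1) / N);
		return 0;
	}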
-Andi