Message-ID: <CANN689G8f2QuROecapFcbcNUggGWv9bTuHSV+k4KBLj=_E7uFg@mail.gmail.com>
Date: Mon, 11 Feb 2013 03:03:30 -0800
From: Michel Lespinasse <walken@...gle.com>
To: Jan Kara <jack@...e.cz>
Cc: LKML <linux-kernel@...r.kernel.org>, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH 1/6] lib: Implement range locks
On Mon, Feb 11, 2013 at 2:27 AM, Jan Kara <jack@...e.cz> wrote:
> On Sun 10-02-13 21:42:32, Michel Lespinasse wrote:
>> On Thu, Jan 31, 2013 at 1:49 PM, Jan Kara <jack@...e.cz> wrote:
>> > +void range_lock_init(struct range_lock *lock, unsigned long start,
>> > +                     unsigned long end);
>> > +void range_lock(struct range_lock_tree *tree, struct range_lock *lock);
>> > +void range_unlock(struct range_lock_tree *tree, struct range_lock *lock);
>>
>> Is there a point to separating the init and lock stages? Maybe the API
>> could be:
>> void range_lock(struct range_lock_tree *tree, struct range_lock *lock,
>>                 unsigned long start, unsigned long last);
>> void range_unlock(struct range_lock_tree *tree, struct range_lock *lock);
> I was thinking about this as well. Currently I don't have a caller where
> separating _init and _lock would be beneficial, but I can imagine such uses
> (where you don't want to pass the interval information down the stack and
> it's easier to pass the whole lock structure). Also it looks a bit
> confusing to pass (tree, lock, start, last) to the locking function. So I
> left the split in place.
>
> OTOH I had to change the API somewhat: the locking phase is now split into
> a "lock_prep" phase, which inserts the node into the tree and counts
> blocking ranges, and a "wait" phase, which waits for those blocking ranges
> to unlock. The reason for this split is that while "lock_prep" needs to
> happen under some lock synchronizing operations on the tree, the "wait"
> phase can easily be lockless. This allows me to keep the knowledge of how
> tree operations are synchronized out of the range locking code itself,
> which in turn allowed me to use mapping->tree_lock for synchronization and
> reduce the cost of mapping range locking to nearly zero for buffered IO
> (just a single tree lookup in the fast path).
Ah yes, being able to externalize the lock is good.
I think in this case, though, it makes the most sense for the lock_prep
phase to also initialize the lock node.
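Roughly what I have in mind -- just a sketch, guessing at the struct layout
from your description; range_tree_insert() and range_tree_count_overlaps()
are placeholders for whatever helpers the patch actually uses internally:

#include <linux/interval_tree.h>
#include <linux/sched.h>

struct range_lock_tree;

struct range_lock {
        struct interval_tree_node node; /* [start, last] being locked */
        struct task_struct *waiter;     /* task to wake at unlock time */
        unsigned int blocking_ranges;   /* overlapping ranges ahead of us */
};

/* Placeholders for the patch's internal tree helpers. */
extern unsigned int range_tree_count_overlaps(struct range_lock_tree *tree,
                                              unsigned long start,
                                              unsigned long last);
extern void range_tree_insert(struct range_lock_tree *tree,
                              struct interval_tree_node *node);

/*
 * Called under whatever lock protects the tree (mapping->tree_lock in your
 * buffered IO case).  Filling in the node here means callers never need a
 * separate range_lock_init() step.
 */
static inline void range_lock_prep(struct range_lock_tree *tree,
                                   struct range_lock *lock,
                                   unsigned long start, unsigned long last)
{
        lock->node.start = start;
        lock->node.last = last;
        lock->waiter = current;
        lock->blocking_ranges = range_tree_count_overlaps(tree, start, last);
        range_tree_insert(tree, &lock->node);
}

/* Lockless wait phase: sleep until all blocking ranges have unlocked. */
void range_lock_wait(struct range_lock *lock);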
>> Reviewed-by: Michel Lespinasse <walken@...gle.com>
> I actually didn't add this because there are some differences in the
> current version...
Did I miss another posting of yours, or is that coming up?
Cheers,
--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.