Message-ID: <1fe6c7900901251825m9c19371v946b08d5d2928d86@mail.gmail.com>
Date: Sun, 25 Jan 2009 18:25:04 -0800
From: Mandeep Baines <msb@...gle.com>
To: Frederic Weisbecker <fweisbec@...il.com>
Cc: Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
rientjes@...gle.com, mbligh@...gle.com, thockin@...gle.com
Subject: Re: [PATCH v3] softlockup: remove hung_task_check_count
Hi Frédéric,
Yeah, that might work. I think you can even embed waiters_count inside
the lock itself, with minimal changes and no overhead added to the lock
functions. Maybe use bits 7-0 for waiters_count and move the readers
count up 8 bits.
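Something like this, just as a user-space sketch of the layout (READER_BIAS,
WAITERS_MASK and writers_waiting() are names I'm making up here, not existing
kernel symbols):

/* Hypothetical layout of the 32-bit lock word:
 *   bits 31-8: RW_LOCK_BIAS minus 0x100 per reader (minus the full
 *              bias for a writer) -- same scheme as today, shifted up
 *   bits  7-0: waiters_count, writers queued on the lock
 */
#define RW_LOCK_BIAS    0x01000000
#define READER_BIAS     0x00000100      /* a reader now costs 0x100, not 1 */
#define WAITERS_MASK    0x000000ff

/* The check the hung-task code (or a reader) could make: */
static inline int writers_waiting(unsigned int lock_word)
{
        return lock_word & WAITERS_MASK;
}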
To make room for waiters_count I'd change read_lock from:
static inline void __raw_read_lock(raw_rwlock_t *rw)
{
        asm volatile(LOCK_PREFIX " subl $1,(%0)\n\t"
                     "jns 1f\n"
                     "call __read_lock_failed\n\t"
                     "1:\n"
                     ::LOCK_PTR_REG (rw) : "memory");
}
To:
static inline void __raw_read_lock(raw_rwlock_t *rw)
{
        asm volatile(LOCK_PREFIX " subl $0x100,(%0)\n\t"
                     "jns 1f\n"
                     "call __read_lock_failed\n\t"
                     "1:\n"
                     ::LOCK_PTR_REG (rw) : "memory");
}
I think that's all you'd need to change for read_lock. For write_lock
you'd have to subtract (RW_LOCK_BIAS - 1) instead of just RW_LOCK_BIAS
during lock and add it back for unlock. You'd also have to change the
tests a little to ignore bits 7-0.
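In user-space C, roughly (the model_* names and RW_WAITERS_MASK are made up
for illustration; the real change would be in the LOCK-prefixed asm just like
the read_lock above):

#include <stdatomic.h>

#define RW_LOCK_BIAS    0x01000000
#define RW_WAITERS_MASK 0x000000ff      /* bits 7-0: waiters_count */

static int model_write_trylock(_Atomic unsigned int *word)
{
        /* Subtracting RW_LOCK_BIAS - 1 claims the writer bias and, in
         * the same atomic op, adds 1 to the low-byte waiters_count. */
        unsigned int val = atomic_fetch_sub(word, RW_LOCK_BIAS - 1)
                           - (RW_LOCK_BIAS - 1);

        /* The old "result is zero" test becomes "ignore bits 7-0":
         * writers parked in the low byte don't mean the lock is held. */
        if ((val & ~RW_WAITERS_MASK) == 0)
                return 1;       /* got it; our +1 stays visible while held */

        /* Failed: undo. A real version would fall into something like
         * __write_lock_failed here, using the same masked test. */
        atomic_fetch_add(word, RW_LOCK_BIAS - 1);
        return 0;
}

static void model_write_unlock(_Atomic unsigned int *word)
{
        /* Put the bias back and drop our entry from waiters_count. */
        atomic_fetch_add(word, RW_LOCK_BIAS - 1);
}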
Regards,
Mandeep
On Sat, Jan 24, 2009 at 7:52 AM, Frederic Weisbecker <fweisbec@...il.com> wrote:
> On Fri, Jan 23, 2009 at 05:55:14PM -0800, Mandeep Singh Baines wrote:
>> Frédéric Weisbecker (fweisbec@...il.com) wrote:
>> > 2009/1/23 Ingo Molnar <mingo@...e.hu>:
>> > >
>> > > not sure i like the whole idea of removing the max iterations check. In
>> > > theory if there's a _ton_ of tasks, we could spend a lot of time looping
>> > > there. So it always looked prudent to limit it somewhat.
>> > >
>> >
>> > Which means we can lose several of them. Would it hurt to iterate over
>> > as much of the task list as possible, taking some care about writer
>> > starvation and latency?
>> > BTW I thought about the slow work framework, but I can't find it
>> > again.... But this thread already has a low priority anyway.
>> >
>> > Would it be interesting to provide a way for rwlocks to know if there
>> > is a writer waiting for the lock?
>>
>> Would be cool if that API existed. You could release the CPU and/or lock as
>> soon as either was contended for. You'd have the benefits of fine-grained
>> locking without the overhead of locking and unlocking multiple times.
>>
>> Currently, there is no bit that can tell you there is a writer waiting. You'd
>> probably need to change the write_lock() implementation at a minimum. Maybe
>> if the first writer left the RW_LOCK_BIAS bit clear and then waited for the
>> readers to leave instead of re-trying? That would actually make write_lock()
>> more efficient for the 1-writer case since you'd only need to spin doing
>> a read in the failure case instead of an atomic_dec and atomic_inc.
>>
>
>
> This is already what is done in the slow path (in x86):
>
> /* rdi: pointer to rwlock_t */
> ENTRY(__write_lock_failed)
>         CFI_STARTPROC
>         LOCK_PREFIX
>         addl    $RW_LOCK_BIAS,(%rdi)
> 1:      rep
>         nop
>         cmpl    $RW_LOCK_BIAS,(%rdi)
>         jne     1b
>         LOCK_PREFIX
>         subl    $RW_LOCK_BIAS,(%rdi)
>         jnz     __write_lock_failed
>         ret
>         CFI_ENDPROC
> END(__write_lock_failed)
>
> It spins watching the lock value, and only once neither writers nor readers
> own the lock does it retry its atomic_sub (and the atomic_add again if that
> fails).
>
> And if an implementation of writers_waiting_for_lock() is needed, I guess this
> is the perfect place. One atomic_add on a "waiters_count" on entry and an atomic_sub
> on it on exit.
>
> Since this is the slow path, I guess that wouldn't really impact performance....
>
>
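
FWIW, here is roughly how I read that slow path in C, with the two counter
updates you describe marked (struct model_rwlock and its separate waiters
field are purely illustrative; today's rwlock_t only has the single counter):

#include <stdatomic.h>

#define RW_LOCK_BIAS    0x01000000

struct model_rwlock {
        _Atomic int counter;    /* starts at RW_LOCK_BIAS */
        _Atomic int waiters;    /* hypothetical: writers spinning below */
};

static void model_write_lock_failed(struct model_rwlock *rw)
{
        atomic_fetch_add(&rw->waiters, 1);      /* entry: one more writer waiting */

        do {
                /* Undo the failed subtraction, then spin reading until
                 * the lock looks completely free... */
                atomic_fetch_add(&rw->counter, RW_LOCK_BIAS);
                while (atomic_load(&rw->counter) != RW_LOCK_BIAS)
                        ;       /* cpu_relax() in the real thing */
                /* ...and only then retry the atomic subtraction. */
        } while (atomic_fetch_sub(&rw->counter, RW_LOCK_BIAS) != RW_LOCK_BIAS);

        atomic_fetch_sub(&rw->waiters, 1);      /* exit: we own the lock now */
}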