Message-ID: <20061107231456.GB7796@elf.ucw.cz>
Date: Wed, 8 Nov 2006 00:14:56 +0100
From: Pavel Machek <pavel@....cz>
To: Mikulas Patocka <mikulas@...ax.karlin.mff.cuni.cz>
Cc: Albert Cahalan <acahalan@...il.com>, linux-kernel@...r.kernel.org
Subject: Re: 2048 CPUs [was: Re: New filesystem for Linux]
Hi!
> >Let's say time-spent-outside-spinlock == time-spent-in-spinlock and
> >number-of-cpus == 2.
> >
> >1 < 2, so it should livelock according to you...
>
> There is an off-by-one bug in the condition. It should be:
> (time_spent_in_spinlock + time_spent_outside_spinlock) /
> time_spent_in_spinlock < number_of_cpus
>
> ... or, equivalently, expanding the division by time_spent_in_spinlock:
> time_spent_outside_spinlock / time_spent_in_spinlock + 1 < number_of_cpus
>
> >...but afaict this should work okay. Even if spinlocks are very
> >unfair, as long as time-outside and time-inside come in big chunks,
> >it should work.
> >
> >If you are unlucky, one cpu may stall for a while, but... I see no
> >livelock.
>
> If some rogue threads (and it may not even be intentional) keep
> calling the same syscall, stressing the one spinlock all the time,
> other syscalls needing the same spinlock may stall.
Fortunately, they'll unstall with probability 1... so no, I do not
think this is a real problem.
If someone takes a semaphore in a syscall (we do), the same problem
may happen, right...? Without any need for 2048 CPUs. Maybe
semaphores/mutexes are fair (or mostly fair) these days, but rwlocks
may not be, or something.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/