Message-ID: <Pine.LNX.4.64.0611101606090.20654@artax.karlin.mff.cuni.cz>
Date:	Fri, 10 Nov 2006 16:20:38 +0100 (CET)
From:	Mikulas Patocka <mikulas@...ax.karlin.mff.cuni.cz>
To:	Pavel Machek <pavel@....cz>
Cc:	Albert Cahalan <acahalan@...il.com>, linux-kernel@...r.kernel.org
Subject: Re: 2048 CPUs [was: Re: New filesystem for Linux]

Hi!

>>>> If some rogue threads (and it may not even be intentional) call the same
>>>> syscall stressing the one spinlock all the time, other syscalls needing
>>>> the same spinlock may stall.
>>>
>>> Fortunately, they'll unstall with probability 1... so no, I do not
>>> think this is a real problem.
>>
>> You can't tell that CPUs behave exactly probabilistically --- it may
>> happen that one gets out of the wait loop always too late.
>
> Well,  I don't need them to be _exactly_ probabilistical.
>
> Anyway, if you have 2048 CPUs... you can perhaps get some non-broken
> ones.

No Intel document guarantees that if multiple CPUs simultaneously execute 
a locked cmpxchg in a loop, each CPU will see a compare success in finite 
time. In fact, CPUs can't guarantee this at all, because they don't know 
that they're executing a spinlock --- for them it's just an instruction 
stream like anything else.

Intel only guarantees that cmpxchg (or any other instruction) completes in 
finite time, but it doesn't say anything about its result.

>>> If someone takes semaphore in syscall (we do), same problem may
>>> happen, right...? Without need for 2048 cpus. Maybe semaphores/mutexes
>>> are fair (or mostly fair) these days, but rwlocks may not be or
>>> something.
>>
>> Scheduler increases priority of sleeping process, so starving process
>> should be woken up first. But if there are so many processes, that
>> process
>
> I do not think this is how Linux scheduler works.
> 								Pavel

The <= 2.4 scheduler worked exactly like this. 2.6 is more complicated, 
but does a similar thing. You are right, though, that starvation on a 
semaphore can happen: if a process has too high a nice value, it will 
never be raised above the other processes.

Mikulas
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
