Date:	Fri, 21 Mar 2014 07:54:46 -0700
From:	Motohiro Kosaki <>
Subject: RE: Bug 71331 - mlock yields processor to lower priority process

> Mike,
> There are several problem domains where you protect critical sections by assigning multiple threads to a single CPU and use priorities
> and SCHED_FIFO to ensure data integrity.
> In this kind of design you don't make many syscalls.  The ones you do make, have to be clearly understood
> if they block.
> So, yes, I expect that a SCHED_FIFO task, that uses a subset of syscalls known to be non-blocking, will not block.
> If it is not 'unstoppable', then there is a defect in the OS.
> In the past, a call to mlock() was known to be OK.  It would not block.  It might take a while, but it would run to completion.  It does not
> do that any more.

False. mlock() has been able to block since the day it was introduced.
mlock() and mlockall() need to allocate memory by definition, and that allocation can trigger VM activity, which may block. At least on Linux.

lru_add_drain_all() is not the only place that can wait. Even if we removed it, mlock() could still block. I don't think this discussion makes sense.

> If mlock() is now a blocking call, then fine.  It only needs to be called on occasion, and this can be accounted for in the application

Now? I have not seen any recent change in this behavior.

Note: I'm not sure whether Artem's use case is good or bad. I am only saying that a false assumption doesn't make for a good discussion.

To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to
More majordomo info at
Please read the FAQ at
