Message-ID: <5333734E.2020600@mit.edu>
Date: Wed, 26 Mar 2014 17:39:42 -0700
From: Andy Lutomirski <luto@...capital.net>
To: jimmie.davis@...com.com, umgwanakikbuti@...il.com
CC: oneukum@...e.de, artem_fetishev@...m.com, peterz@...radead.org,
kosaki.motohiro@...fujitsu.com, linux-kernel@...r.kernel.org
Subject: Re: Bug 71331 - mlock yields processor to lower priority process

On 03/21/2014 07:50 AM, jimmie.davis@...com.com wrote:
>
> ________________________________________
> From: Mike Galbraith [umgwanakikbuti@...il.com]
> Sent: Friday, March 21, 2014 9:41 AM
> To: Davis, Bud @ SSG - Link
> Cc: oneukum@...e.de; artem_fetishev@...m.com; peterz@...radead.org; kosaki.motohiro@...fujitsu.com; linux-kernel@...r.kernel.org
> Subject: RE: Bug 71331 - mlock yields processor to lower priority process
>
> On Fri, 2014-03-21 at 14:01 +0000, jimmie.davis@...com.com wrote:
>
>> If you call mlock () from a SCHED_FIFO task, you expect it to return
>> when done. You don't expect it to block, and your task to be
>> pre-empted.
>
> Say some of your pages are sitting in an nfs swapfile orbiting Neptune,
> how do they get home, and what should we do meanwhile?
>
> -Mike
>
> Two options.
>
> #1. Return with a status value of EAGAIN.
>
> or
>
> #2. Don't return until you can do it.
>
> If SCHED_FIFO is used, and mlock() is called, the intention of the user is very clear. Run this task until
> it is completed or it blocks (and until a bit ago, mlock() did not block).
>
> SCHED_FIFO users don't care about fairness. They want the system to do what it is told.

I use mlock in real-time processes, but I do it in a separate thread.
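
A minimal sketch of that pattern, assuming a helper thread at normal
priority and a pthread barrier for the handshake (the names and the RT
priority value are illustrative, not from any real project):

#include <errno.h>
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static pthread_barrier_t locked_barrier;

/* Runs at normal priority, so the blocking I/O done by mlockall()
 * never stalls the real-time thread. */
static void *lock_helper(void *arg)
{
	if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
		fprintf(stderr, "mlockall: %s\n", strerror(errno));
	pthread_barrier_wait(&locked_barrier);
	return NULL;
}

static void *rt_worker(void *arg)
{
	struct sched_param sp = { .sched_priority = 50 };

	/* Go real-time only after all pages are resident and locked.
	 * (Needs CAP_SYS_NICE or a suitable RLIMIT_RTPRIO.) */
	pthread_barrier_wait(&locked_barrier);
	if (pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp) != 0)
		fprintf(stderr, "pthread_setschedparam failed\n");

	/* ... time-critical loop, no page faults expected from here on ... */
	return NULL;
}

int main(void)
{
	pthread_t helper, rt;

	pthread_barrier_init(&locked_barrier, NULL, 2);
	pthread_create(&helper, NULL, lock_helper, NULL);
	pthread_create(&rt, NULL, rt_worker, NULL);
	pthread_join(helper, NULL);
	pthread_join(rt, NULL);
	return 0;
}

Whichever call you use (mlockall() up front or mlock() on specific
regions), the point is the same: do the part that can block before the
thread that cannot tolerate blocking switches to SCHED_FIFO.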

Seriously, though, what do you expect the kernel to do?  When you call
mlock on a page that isn't present, the kernel will *read* that page.
mlock will, therefore, block until the IO finishes.
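
You can watch that fault-in happen from userspace.  A minimal sketch,
assuming a file path in argv[1] and using mincore() only to report how
many pages are resident before and after the lock:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Count how many pages of [addr, addr+len) are currently resident. */
static size_t resident_pages(void *addr, size_t len)
{
	long page = sysconf(_SC_PAGESIZE);
	size_t npages = (len + page - 1) / page;
	unsigned char *vec = malloc(npages);
	size_t i, n = 0;

	if (vec && mincore(addr, len, vec) == 0)
		for (i = 0; i < npages; i++)
			n += vec[i] & 1;
	free(vec);
	return n;
}

int main(int argc, char **argv)
{
	struct stat st;
	void *p;
	int fd;

	if (argc < 2)
		return 1;
	fd = open(argv[1], O_RDONLY);		/* any large-ish file */
	if (fd < 0 || fstat(fd, &st) < 0)
		return 1;

	p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	printf("resident before mlock: %zu pages\n",
	       resident_pages(p, st.st_size));

	/* This is where the read happens: mlock() must bring every
	 * page into memory and blocks until that I/O completes. */
	if (mlock(p, st.st_size) != 0)
		perror("mlock");

	printf("resident after mlock:  %zu pages\n",
	       resident_pages(p, st.st_size));

	munlock(p, st.st_size);
	munmap(p, st.st_size);
	close(fd);
	return 0;
}

On a file that is not already in the page cache, the second count should
cover the whole mapping, and the mlock() call itself is where the time
goes; that is the blocking this bug report is about.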

Some time around 3.9 the behavior changed a little: IIRC mlock used to
hold mmap_sem while sleeping, or maybe it was only mmap with
mlockall(MCL_FUTURE) in effect that did that.  In any case, the mlock
code is less lock-happy than it used to be.  Is it possible that you
have two threads, and that the non-mlock-calling thread used to get
blocked behind mlock, so the old behavior only looked better?

--Andy