Message-ID: <CAOJsxLGcndOEEDzeKJaEiLrwV779R+hv2dPvqBrxbr0FzczpUg@mail.gmail.com>
Date: Wed, 20 Jun 2012 10:02:29 +0300
From: Pekka Enberg <penberg@...nel.org>
To: KOSAKI Motohiro <kosaki.motohiro@...il.com>
Cc: David Mackey <tdmackey@...tter.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, rientjes@...gle.com,
Andi Kleen <ak@...ux.intel.com>, cl@...ux.com
Subject: Re: [PATCH v5] slab/mempolicy: always use local policy from interrupt context
On Mon, Jun 18, 2012 at 11:20 AM, KOSAKI Motohiro
<kosaki.motohiro@...il.com> wrote:
> (6/9/12 5:40 AM), David Mackey wrote:
>> From: Andi Kleen <ak@...ux.intel.com>
>>
>> slab_node() could access current->mempolicy from interrupt context.
>> However, there's a race condition during task exit where the mempolicy
>> is first freed and only then is the pointer zeroed.
>>
>> Using this from interrupts seems bogus anyway. The interrupt
>> will interrupt a random process and therefore see a random
>> mempolicy. Many times this will be the idle task's, which no one
>> can change.
>>
>> Just disable this here and always use the local policy for slab
>> allocations from interrupts (sketched below). I also cleaned up
>> the callers of slab_node(), which all passed the same argument.
>>
>> I believe the original mempolicy code in fact behaved this way,
>> so this is likely a regression.
>>
>> v2: send version with correct logic
>> v3: simplify. fix typo.
>> Reported-by: Arun Sharma <asharma@...com>
>> Cc: penberg@...nel.org
>> Cc: cl@...ux.com
>> Signed-off-by: Andi Kleen <ak@...ux.intel.com>
>> [tdmackey@...tter.com: Rework control flow based on feedback from
>> cl@...ux.com, fix logic, and cleanup current task_struct reference]
>> Signed-off-by: David Mackey <tdmackey@...tter.com>
>
> Acked-by: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Applied, thanks!
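
For reference, here is a minimal sketch of the approach the patch
describes, not the literal committed diff. It assumes the mempolicy
internals of that era (MPOL_F_LOCAL, the policy->v union, and the
interleave_nodes() helper), and the MPOL_BIND case is simplified to
first_node() instead of the real code's zonelist walk:

	/*
	 * Sketch only. The key addition is the in_interrupt() check:
	 * an interrupt fires in an arbitrary task, so current->mempolicy
	 * is meaningless there and may even be freed under us while the
	 * task exits. Fall back to the local node in that case.
	 */
	unsigned slab_node(void)
	{
		struct mempolicy *policy;

		if (in_interrupt())
			return numa_node_id();

		policy = current->mempolicy;
		if (!policy || policy->flags & MPOL_F_LOCAL)
			return numa_node_id();

		switch (policy->mode) {
		case MPOL_PREFERRED:
			return policy->v.preferred_node;
		case MPOL_INTERLEAVE:
			return interleave_nodes(policy);
		case MPOL_BIND:
			/* Simplified; the real code walks the bind zonelist. */
			return first_node(policy->v.nodes);
		default:
			return numa_node_id();
		}
	}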
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/