Message-ID: <20091117074739.4abaef85@tlielax.poochiereds.net>
Date: Tue, 17 Nov 2009 07:47:39 -0500
From: Jeff Layton <jlayton@...hat.com>
To: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
samba-technical@...ts.samba.org, Steve French <sfrench@...ba.org>,
linux-mm <linux-mm@...ck.org>, kosaki.motohiro@...fujitsu.com,
Andrew Morton <akpm@...ux-foundation.org>,
linux-cifs-client@...ts.samba.org
Subject: Re: [PATCH 6/7] cifs: Don't use PF_MEMALLOC
On Tue, 17 Nov 2009 16:22:32 +0900 (JST)
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com> wrote:
>
> Subsystems outside of MM must not use PF_MEMALLOC. Memory reclaim needs
> a small amount of memory to make progress, and nothing must be allowed
> to deplete that reserve. Otherwise the system can suffer mysterious
> hang-ups and/or OOM killer invocations.
>
> Cc: Steve French <sfrench@...ba.org>
> Cc: linux-cifs-client@...ts.samba.org
> Cc: samba-technical@...ts.samba.org
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
> ---
> fs/cifs/connect.c | 1 -
> 1 files changed, 0 insertions(+), 1 deletions(-)
>
> diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
> index 63ea83f..f9b1553 100644
> --- a/fs/cifs/connect.c
> +++ b/fs/cifs/connect.c
> @@ -337,7 +337,6 @@ cifs_demultiplex_thread(struct TCP_Server_Info *server)
> bool isMultiRsp;
> int reconnect;
>
> - current->flags |= PF_MEMALLOC;
> cFYI(1, ("Demultiplex PID: %d", task_pid_nr(current)));
>
> length = atomic_inc_return(&tcpSesAllocCount);
This patch appears to be safe for CIFS. I believe the demultiplex
thread currently only does mempool allocations. The only other place
it allocated memory was recently changed by the conversion of the
oplock break code to slow_work.
Barring anything I've missed...
Acked-by: Jeff Layton <jlayton@...hat.com>