Message-ID: <20150831120525.GA31015@redhat.com>
Date: Mon, 31 Aug 2015 14:05:25 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Al Viro <viro@...iv.linux.org.uk>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
Maciej Żenczykowski <maze@...gle.com>
Subject: change filp_close() to use __fput_sync() ? (Was: [PATCH]
task_work: remove fifo ordering guarantee)
On 08/29, Eric Dumazet wrote:
>
> On Sat, 2015-08-29 at 14:49 +0200, Oleg Nesterov wrote:
> > On 08/28, Eric Dumazet wrote:
> > >
> > > From: Eric Dumazet <edumazet@...gle.com>
> > >
> > > In commit f341861fb0b ("task_work: add a scheduling point in
> > > task_work_run()") I fixed a latency problem adding a cond_resched()
> > > call.
> > >
> > > Later, commit ac3d0da8f329 added yet another loop to reverse a list,
> > > bringing back the latency spike :
> > >
> > > I've seen in some cases this loop taking 275 ms, if for example a
> > > process with 2,000,000 files is killed.
> > >
> > > We could add yet another cond_resched() in the reverse loop,
> >
> > Can't we do this?
>
> Well, I stated in the changelog we could. Obviously we can.
>
> Adding 275 ms of pure overhead to perform this list reversal for files
> to be closed is quite unfortunate.
Well, if the first loop takes 275 ms, then the next one, which actually
does all the __fput's, probably takes much, much more time, so latency
aside these 275 ms may not be very noticeable.

But of course this is not good, I agree. Please see the patch below.
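To be clear, the two loops we are talking about look roughly like this
(a simplified sketch, not the exact task_work_run() body):

	/* Sketch only: 'work' is the lifo list taken from task->task_works */
	static void run_works_sketch(struct callback_head *work)
	{
		struct callback_head *head = NULL, *next;

		/* loop 1 (ac3d0da8f329): reverse the lifo list into fifo order */
		while (work) {
			next = work->next;
			work->next = head;
			head = work;
			work = next;
			/* Eric's "yet another cond_resched()" would go here */
		}

		/* loop 2: actually run the works, e.g. ____fput() for each file */
		while (head) {
			next = head->next;
			head->func(head);
			head = next;
			cond_resched();	/* the scheduling point from f341861fb0b */
		}
	}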
> > Fifo just looks more sane to me.
>
> Well, files are closed in a random order. These are the main user of
> this stuff.
This is the heaviest user, but task_work is a generic API.
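For illustration only, a made-up user of the generic API (2015-era
signatures with the bool "notify" argument, untested):

	#include <linux/task_work.h>
	#include <linux/slab.h>

	struct my_work {
		struct callback_head cb;
		/* ... user's payload ... */
	};

	/* runs in the target task's context, on return to user mode
	   or from task_work_run() when the task exits */
	static void my_func(struct callback_head *cb)
	{
		struct my_work *w = container_of(cb, struct my_work, cb);
		kfree(w);
	}

	static int queue_my_work(struct task_struct *task, struct my_work *w)
	{
		init_task_work(&w->cb, my_func);
		return task_work_add(task, &w->cb, true); /* true: kick the task */
	}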
> Now we also could question why we needed commit
> 4a9d4b024a3102fc083c925c242d98ac27b1c5f6 ("switch fput to task_work_add
> ") since it seems quite an overhead at task exit with 10^6 of files to
> close.
How about the patch below? I haven't tested it yet, but since
filp_close() already does ->flush(), I think __fput_sync() should be
safe here.
Al, what do you think?
Oleg.
--- x/fs/file_table.c
+++ x/fs/file_table.c
@@ -292,11 +292,8 @@ void fput(struct file *file)
*/
void __fput_sync(struct file *file)
{
- if (atomic_long_dec_and_test(&file->f_count)) {
- struct task_struct *task = current;
- BUG_ON(!(task->flags & PF_KTHREAD));
+ if (atomic_long_dec_and_test(&file->f_count))
__fput(file);
- }
}
EXPORT_SYMBOL(fput);
--- x/fs/open.c
+++ x/fs/open.c
@@ -1074,7 +1074,7 @@ int filp_close(struct file *filp, fl_owner_t id)
dnotify_flush(filp, id);
locks_remove_posix(filp, id);
}
- fput(filp);
+ __fput_sync(filp);
return retval;
}
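In other words (simplified sketches; the real fput() also handles
interrupt context, kernel threads and the delayed_fput() workqueue
fallback when task_work_add() fails), the patch switches filp_close()
from the deferred path to the synchronous one:

	/* deferred: the final __fput() is queued as task_work and runs when
	   the task returns to user mode, or from task_work_run() at exit;
	   that is where the 2,000,000-files spike comes from */
	void fput(struct file *file)
	{
		if (atomic_long_dec_and_test(&file->f_count)) {
			init_task_work(&file->f_u.fu_rcuhead, ____fput);
			task_work_add(current, &file->f_u.fu_rcuhead, true);
		}
	}

	/* synchronous: with the patch above, the last reference is dropped
	   and __fput() runs right here, in the caller's context */
	void __fput_sync(struct file *file)
	{
		if (atomic_long_dec_and_test(&file->f_count))
			__fput(file);
	}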