Date:   Sat,  6 Jun 2020 14:32:19 +0800
From:   Tiezhu Yang <yangtiezhu@...ngson.cn>
To:     Alexander Viro <viro@...iv.linux.org.uk>,
        Jonathan Corbet <corbet@....net>
Cc:     linux-fsdevel@...r.kernel.org, linux-doc@...r.kernel.org,
        linux-kernel@...r.kernel.org, Xuefeng Li <lixuefeng@...ngson.cn>
Subject: [PATCH 2/3] fs: Introduce cmdline argument exceed_file_max_panic

It is important to ensure that files which are opened always get closed;
failing to close them results in file descriptor leaks. One common answer
to this problem is to raise the limit on open file handles and then
restart the server every day or every few hours, but that is not a good
idea for long-lived servers when there are no leaks.

If file descriptors do leak, then once the file-max limit is reached the
system stops working properly, and at worst the user can do nothing at
all: it is even impossible to execute the reboot command, because there
are too many open files in the system. To allow the system to reboot
automatically and recover to a normal state, introduce a new cmdline
argument, exceed_file_max_panic, so the user can control whether to call
panic() in this case.
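For reference, a boot parameter like this would typically be enabled by
appending it to the kernel command line, e.g. through GRUB (a sketch
only; the exact file location and regeneration command depend on the
distribution):

```shell
# /etc/default/grub (Debian/Fedora-style layout; adjust for your distro)
GRUB_CMDLINE_LINUX="... exceed_file_max_panic"
# Then regenerate the GRUB configuration, e.g.:
#   update-grub                              (Debian/Ubuntu)
#   grub2-mkconfig -o /boot/grub2/grub.cfg   (Fedora/RHEL)
```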

We can reproduce this problem with the following simple test:

[yangtiezhu@...ux ~]$ cat exceed_file_max_test.c
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

int main(void)
{
	int fd;

	/* Leak descriptors on purpose: open in a loop and never close. */
	while (1) {
		fd = open("/usr/include/stdio.h", O_RDONLY);
		if (fd == -1)
			fprintf(stderr, "%s\n", "open failed");
	}

	return 0;
}
[yangtiezhu@...ux ~]$ cat exceed_file_max_test.sh
#!/bin/bash

gcc exceed_file_max_test.c -o exceed_file_max_test.bin -Wall

while true
do
	./exceed_file_max_test.bin >/dev/null 2>&1 &
done
[yangtiezhu@...ux ~]$ sh exceed_file_max_test.sh &
[yangtiezhu@...ux ~]$ reboot
bash: start pipeline: pgrp pipe: Too many open files in system
bash: /usr/sbin/reboot: Too many open files in system
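
While the reproducer runs, the pressure on the global file table can be
watched from userspace via /proc (not part of this patch, just a way to
observe the limit being approached):

```shell
#!/bin/bash
# /proc/sys/fs/file-nr reports three fields: allocated file handles,
# unused-but-allocated handles (0 on modern kernels), and the file-max
# limit (the same value printed by the "VFS: file-max limit" message).
read allocated unused maximum < /proc/sys/fs/file-nr
echo "allocated=$allocated max=$maximum"
```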

Signed-off-by: Tiezhu Yang <yangtiezhu@...ngson.cn>
---
 fs/file_table.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/fs/file_table.c b/fs/file_table.c
index 26516d0..6943945 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -121,6 +121,17 @@ static struct file *__alloc_file(int flags, const struct cred *cred)
 	return f;
 }
 
+static bool exceed_file_max_panic;
+static int __init exceed_file_max_panic_setup(char *str)
+{
+	pr_info("Call panic when exceed file-max limit\n");
+	exceed_file_max_panic = true;
+
+	return 1;
+}
+
+__setup("exceed_file_max_panic", exceed_file_max_panic_setup);
+
 /* Find an unused file structure and return a pointer to it.
  * Returns an error pointer if some error happend e.g. we over file
  * structures limit, run out of memory or operation is not permitted.
@@ -159,6 +170,9 @@ struct file *alloc_empty_file(int flags, const struct cred *cred)
 	if (get_nr_files() > old_max) {
 		pr_info("VFS: file-max limit %lu reached\n", get_max_files());
 		old_max = get_nr_files();
+
+		if (exceed_file_max_panic)
+			panic("VFS: Too many open files in system\n");
 	}
 	return ERR_PTR(-ENFILE);
 }
-- 
2.1.0
