Message-ID: <f6d16e13-cdcb-2b6b-bfb4-1e02d4aed61d@suse.com>
Date: Thu, 25 Apr 2019 17:58:15 +0200
From: Matthias Brugger <mbrugger@...e.com>
To: Albert Vaca Cintora <albertvaka@...il.com>,
akpm@...ux-foundation.org, rdunlap@...radead.org, mingo@...nel.org,
Jan Kara <jack@...e.cz>, ebiederm@...ssion.com,
Nicolas Saenz Julienne <nsaenzjulienne@...e.de>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] kernel/ucounts: expose count of inotify watches in use
On 22/02/2019 18:58, Albert Vaca Cintora wrote:
> On Fri, Feb 1, 2019 at 9:42 PM Albert Vaca Cintora <albertvaka@...il.com> wrote:
>>
>> Add a read-only 'current_inotify_watches' entry to the user sysctl table.
>> The handler for this entry is a custom function that ends up calling
>> proc_dointvec(). The same sysctl table already contains
>> 'max_inotify_watches' and is mounted under /proc/sys/user/.
>>
>> Inotify watches are a finite per-user resource, much like open file
>> descriptors. The motivation for this patch is to be able to set up
>> monitoring and alerting before an application starts failing because it
>> runs out of inotify watches.
>>
>> Signed-off-by: Albert Vaca Cintora <albertvaka@...il.com>
>> Acked-by: Jan Kara <jack@...e.cz>
>> Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@...e.de>
>
> Friendly ping. Any comments on this?
>
Any comments on this? Just to make it clear: Albert found this problem while
working on monitoring software, so the patch fixes a real problem out there.
Regards,
Matthias
>> ---
>>  kernel/ucount.c | 29 +++++++++++++++++++++++++++++
>>  1 file changed, 29 insertions(+)
>>
>> diff --git a/kernel/ucount.c b/kernel/ucount.c
>> index f48d1b6376a4..d8b11e53f098 100644
>> --- a/kernel/ucount.c
>> +++ b/kernel/ucount.c
>> @@ -57,6 +57,11 @@ static struct ctl_table_root set_root = {
>>  	.permissions = set_permissions,
>>  };
>>
>> +#ifdef CONFIG_INOTIFY_USER
>> +int proc_read_inotify_watches(struct ctl_table *table, int write,
>> +			void __user *buffer, size_t *lenp, loff_t *ppos);
>> +#endif
>> +
>>  static int zero = 0;
>>  static int int_max = INT_MAX;
>>  #define UCOUNT_ENTRY(name) \
>> @@ -79,6 +84,12 @@ static struct ctl_table user_table[] = {
>>  #ifdef CONFIG_INOTIFY_USER
>>  	UCOUNT_ENTRY("max_inotify_instances"),
>>  	UCOUNT_ENTRY("max_inotify_watches"),
>> +	{
>> +		.procname = "current_inotify_watches",
>> +		.maxlen = sizeof(int),
>> +		.mode = 0444,
>> +		.proc_handler = proc_read_inotify_watches,
>> +	},
>>  #endif
>>  	{ }
>>  };
>> @@ -226,6 +237,24 @@ void dec_ucount(struct ucounts *ucounts, enum ucount_type type)
>>  	put_ucounts(ucounts);
>>  }
>>
>> +#ifdef CONFIG_INOTIFY_USER
>> +int proc_read_inotify_watches(struct ctl_table *table, int write,
>> +			void __user *buffer, size_t *lenp, loff_t *ppos)
>> +{
>> +	struct ucounts *ucounts;
>> +	struct ctl_table fake_table;
>> +	int count;
>> +
>> +	ucounts = get_ucounts(current_user_ns(), current_euid());
>> +	count = atomic_read(&ucounts->ucount[UCOUNT_INOTIFY_WATCHES]);
>> +	put_ucounts(ucounts);
>> +
>> +	fake_table.data = &count;
>> +	fake_table.maxlen = sizeof(count);
>> +	return proc_dointvec(&fake_table, write, buffer, lenp, ppos);
>> +}
>> +#endif
>> +
>>  static __init int user_namespace_sysctl_init(void)
>>  {
>>  #ifdef CONFIG_SYSCTL
>> --
>> 2.20.1
>>
>
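An aside on the handler in the quoted patch, as a sketch rather than as the
posted code: get_ucounts() can return NULL when the per-user structure cannot
be allocated, and fake_table leaves every field other than .data and .maxlen
uninitialized (harmless today, since proc_dointvec() touches nothing else, but
cheap to make explicit). A defensive variant could look like the following;
the write guard, the NULL check, and the zero-initializer are illustrative
additions, not part of the patch as posted:

#ifdef CONFIG_INOTIFY_USER
int proc_read_inotify_watches(struct ctl_table *table, int write,
			void __user *buffer, size_t *lenp, loff_t *ppos)
{
	struct ucounts *ucounts;
	struct ctl_table fake_table = { };	/* zero the fields proc_dointvec() ignores */
	int count;

	/* The entry is 0444, so writes should never get here; fail hard if they do. */
	if (write)
		return -EPERM;

	/* get_ucounts() allocates the per-user struct on first use and can fail. */
	ucounts = get_ucounts(current_user_ns(), current_euid());
	if (!ucounts)
		return -ENOMEM;

	count = atomic_read(&ucounts->ucount[UCOUNT_INOTIFY_WATCHES]);
	put_ucounts(ucounts);

	/* Reuse the stock integer formatter on a stack-local table. */
	fake_table.data = &count;
	fake_table.maxlen = sizeof(count);
	return proc_dointvec(&fake_table, write, buffer, lenp, ppos);
}
#endif

The stack-local ctl_table is the interesting trick here: proc_dointvec() only
dereferences .data and .maxlen, so a throwaway two-field table is enough to
reuse the stock integer formatting and copy-out logic for a value that is
computed on the fly.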
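For completeness, this is roughly how monitoring software would consume the
new entry once the patch is applied. The path comes straight from the
changelog; the program itself is only an illustrative userspace sketch:

/* Hypothetical monitor: read the proposed sysctl and report the count. */
#include <stdio.h>

int main(void)
{
	int watches;
	FILE *f = fopen("/proc/sys/user/current_inotify_watches", "r");

	if (!f) {
		perror("fopen");	/* ENOENT if the kernel lacks the patch */
		return 1;
	}
	if (fscanf(f, "%d", &watches) == 1)
		printf("inotify watches in use: %d\n", watches);
	fclose(f);
	return 0;
}

Comparing that value against max_inotify_watches in the same directory gives
exactly the alerting signal the changelog describes.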