Message-ID: <YGVy0WUG1OEFfjhx@dhcp22.suse.cz>
Date: Thu, 1 Apr 2021 09:14:25 +0200
From: Michal Hocko <mhocko@...e.com>
To: Kees Cook <keescook@...omium.org>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Alexey Dobriyan <adobriyan@...il.com>,
Lee Duncan <lduncan@...e.com>, Chris Leech <cleech@...hat.com>,
Adam Nichols <adam@...mm-co.com>,
linux-fsdevel@...r.kernel.org, linux-hardening@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] sysfs: Unconditionally use vmalloc for buffer
On Wed 31-03-21 19:21:45, Kees Cook wrote:
> The sysfs interface to seq_file continues to be rather fragile
> (seq_get_buf() should not be used outside of seq_file), as seen with
> some recent exploits[1]. Move the seq_file buffer to the vmap area
> (while retaining the accounting flag), since it has guard pages that
> will catch and stop linear overflows.
I thought the previous discussion had led to the conclusion that the
preferred way is to disallow direct use of the seq_file buffer
altogether. But this is obviously up to the sysfs maintainers. I am
happy you no longer want to spread this out to all seq_file users.
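To illustrate what I mean (an untested sketch, with a made-up foo
attribute): a show() callback that goes through the bounds-checked
seq_* helpers instead of scribbling into the raw buffer would look
something like

	static int foo_seq_show(struct seq_file *sf, void *v)
	{
		struct foo *foo = sf->private;

		/* seq_printf() tracks the remaining buffer space itself */
		seq_printf(sf, "%d\n", foo->value);

		return 0;
	}

rather than getting the raw buffer via seq_get_buf() and trusting
every callback to stay within PAGE_SIZE.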
> This seems justified given that
> sysfs's use of seq_file already uses kvmalloc(), is almost always using
> a PAGE_SIZE or larger allocation, has normally short-lived allocations,
> and is not normally on a performance critical path.
Let me clarify this, because it is not quite right. kvmalloc and
vmalloc (both with GFP_KERNEL) behave quite differently for a
PAGE_SIZE allocation. The former will almost always end up using
kmalloc, because the page allocator almost never fails requests of
that size.
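Roughly (a simplified sketch of the mm/util.c logic, not the exact
code):

	static void *kvmalloc_sketch(size_t size, gfp_t flags)
	{
		/* try the page allocator first ... */
		void *p = kmalloc(size, flags | __GFP_NOWARN);

		if (p)
			return p;

		/* ... and fall back to vmalloc only when that fails */
		return __vmalloc(size, flags);
	}

So for a PAGE_SIZE request kvmalloc stays on the kmalloc fast path,
while your patch forces every read through __vmalloc() and its page
table setup.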
> Once seq_get_buf() has been removed (and all sysfs callbacks using
> seq_file directly), this change can also be removed.
>
> [1] https://blog.grimm-co.com/2021/03/new-old-bugs-in-linux-kernel.html
>
> Signed-off-by: Kees Cook <keescook@...omium.org>
> ---
> v3:
> - Limit to only sysfs (instead of all of seq_file).
> v2: https://lore.kernel.org/lkml/20210315174851.622228-1-keescook@chromium.org/
> v1: https://lore.kernel.org/lkml/20210312205558.2947488-1-keescook@chromium.org/
> ---
> fs/sysfs/file.c | 23 +++++++++++++++++++++++
> 1 file changed, 23 insertions(+)
>
> diff --git a/fs/sysfs/file.c b/fs/sysfs/file.c
> index 9aefa7779b29..70e7a450e5d1 100644
> --- a/fs/sysfs/file.c
> +++ b/fs/sysfs/file.c
> @@ -16,6 +16,7 @@
> #include <linux/mutex.h>
> #include <linux/seq_file.h>
> #include <linux/mm.h>
> +#include <linux/vmalloc.h>
>
> #include "sysfs.h"
>
> @@ -32,6 +33,25 @@ static const struct sysfs_ops *sysfs_file_ops(struct kernfs_node *kn)
> return kobj->ktype ? kobj->ktype->sysfs_ops : NULL;
> }
>
> +/*
> + * To be proactively defensive against sysfs show() handlers that do not
> + * correctly stay within their PAGE_SIZE buffer, use the vmap area to gain
> + * the trailing guard page which will stop linear buffer overflows.
> + */
> +static void *sysfs_kf_seq_start(struct seq_file *sf, loff_t *ppos)
> +{
> + struct kernfs_open_file *of = sf->private;
> + struct kernfs_node *kn = of->kn;
> +
> + WARN_ON_ONCE(sf->buf);
> + sf->buf = __vmalloc(kn->attr.size, GFP_KERNEL_ACCOUNT);
> + if (!sf->buf)
> + return ERR_PTR(-ENOMEM);
> + sf->size = kn->attr.size;
> +
> + return NULL + !*ppos;
> +}
> +
> /*
> * Reads on sysfs are handled through seq_file, which takes care of hairy
> * details like buffering and seeking. The following function pipes
> @@ -206,14 +226,17 @@ static const struct kernfs_ops sysfs_file_kfops_empty = {
> };
>
> static const struct kernfs_ops sysfs_file_kfops_ro = {
> + .seq_start = sysfs_kf_seq_start,
> .seq_show = sysfs_kf_seq_show,
> };
>
> static const struct kernfs_ops sysfs_file_kfops_wo = {
> + .seq_start = sysfs_kf_seq_start,
> .write = sysfs_kf_write,
> };
>
> static const struct kernfs_ops sysfs_file_kfops_rw = {
> + .seq_start = sysfs_kf_seq_start,
> .seq_show = sysfs_kf_seq_show,
> .write = sysfs_kf_write,
> };
> --
> 2.25.1
--
Michal Hocko
SUSE Labs