Open Source and information security mailing list archives
Date: Mon, 13 May 2019 09:52:41 +0800
From: Dave Young <dyoung@...hat.com>
To: Kairui Song <kasong@...hat.com>
Cc: linux-kernel@...r.kernel.org, Rahul Lakkireddy <rahul.lakkireddy@...lsio.com>,
	Ganesh Goudar <ganeshgr@...lsio.com>, "David S . Miller" <davem@...emloft.net>,
	Eric Biederman <ebiederm@...ssion.com>, Alexey Dobriyan <adobriyan@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"kexec@...ts.infradead.org" <kexec@...ts.infradead.org>
Subject: Re: [RFC PATCH] vmcore: Add a kernel cmdline device_dump_limit

On 05/10/19 at 06:20pm, Kairui Song wrote:
> Device dump allows drivers to add device-related dump data to vmcore as
> they want. This has a potential issue: the data is stored in memory, and
> drivers may append too much data and use too much memory. The vmcore is
> typically used in a kdump kernel, which runs in a small pre-reserved
> chunk of memory. As a result, this can make kdump unusable entirely due
> to OOM issues.
>
> So introduce a new device_dump_limit= kernel parameter, with the
> default limit set to 0, so device dump is not enabled unless the user
> specifies the acceptable maximum memory usage for device dump data. This
> way the user also has the chance to adjust the kdump reserved memory
> accordingly.

The device dump is only effective in the kdump 2nd kernel, so adding the
limit does not seem useful. It is hard to know the correct size unless
one does some crash testing. If one did the testing and wants to enable
the device dump, he needs to increase the crashkernel= size in the 1st
kernel and add the limit param in the 2nd kernel.
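[As a sizing note: the proposed device_dump_limit= value is parsed with the kernel's memparse() (see parse_vmcoredd_limit() in the patch), so values may carry K/M/G suffixes. A minimal userspace approximation of that suffix handling, for illustration only -- the real implementation lives in lib/cmdline.c and also accepts T/P/E:]

```c
#include <stdlib.h>

/* Userspace sketch of the kernel's memparse() (lib/cmdline.c):
 * parse a number with an optional K/M/G binary suffix.
 * Illustrative only; this is not the kernel code itself. */
static unsigned long long memparse_sketch(const char *ptr, char **retptr)
{
	char *endptr;
	unsigned long long ret = strtoull(ptr, &endptr, 0);

	switch (*endptr) {
	case 'G': case 'g':
		ret <<= 10;
		/* fall through */
	case 'M': case 'm':
		ret <<= 10;
		/* fall through */
	case 'K': case 'k':
		ret <<= 10;
		endptr++;
		break;
	default:
		break;
	}
	if (retptr)
		*retptr = endptr;
	return ret;
}
```

[So device_dump_limit=64M would yield 64 << 20 bytes, and the `end > arg` test in the patch then rejects strings where nothing was consumed.]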
So a global on/off param sounds easier and better, something like
vmcore_device_dump=on (default is off)

>
> Signed-off-by: Kairui Song <kasong@...hat.com>
> ---
>  fs/proc/vmcore.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
> index 3fe90443c1bb..e28695ef2439 100644
> --- a/fs/proc/vmcore.c
> +++ b/fs/proc/vmcore.c
> @@ -53,6 +53,9 @@ static struct proc_dir_entry *proc_vmcore;
>  /* Device Dump list and mutex to synchronize access to list */
>  static LIST_HEAD(vmcoredd_list);
>  static DEFINE_MUTEX(vmcoredd_mutex);
> +
> +/* Device Dump Limit */
> +static size_t vmcoredd_limit;
>  #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
>
>  /* Device Dump Size */
> @@ -1465,6 +1468,11 @@ int vmcore_add_device_dump(struct vmcoredd_data *data)
>  	data_size = roundup(sizeof(struct vmcoredd_header) + data->size,
>  			    PAGE_SIZE);
>
> +	if (vmcoredd_orig_sz + data_size >= vmcoredd_limit) {
> +		ret = -ENOMEM;
> +		goto out_err;
> +	}
> +
>  	/* Allocate buffer for driver's to write their dumps */
>  	buf = vmcore_alloc_buf(data_size);
>  	if (!buf) {
> @@ -1502,6 +1510,18 @@ int vmcore_add_device_dump(struct vmcoredd_data *data)
>  	return ret;
>  }
>  EXPORT_SYMBOL(vmcore_add_device_dump);
> +
> +static int __init parse_vmcoredd_limit(char *arg)
> +{
> +	char *end;
> +
> +	if (!arg)
> +		return -EINVAL;
> +	vmcoredd_limit = memparse(arg, &end);
> +	return end > arg ? 0 : -EINVAL;
> +}
> +__setup("device_dump_limit=", parse_vmcoredd_limit);
>  #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
>
>  /* Free all dumps in vmcore device dump list */
> --
> 2.20.1
>

Thanks
Dave
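[Dave's suggested on/off switch could be sketched roughly as below. Everything here is hypothetical: vmcore_device_dump= is only a proposal in this thread, and the helper is plain userspace C so the parsing logic can be exercised outside the kernel. An in-kernel version would register via __setup() and could simply use kstrtobool():]

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical parser for the proposed vmcore_device_dump=on|off
 * switch; names are illustrative, not merged kernel code. */
static bool vmcoredd_enabled;	/* default: off, as Dave suggests */

static int parse_vmcoredd_switch(const char *arg)
{
	if (!arg)
		return -1;	/* kernel code would return -EINVAL */
	if (strcmp(arg, "on") == 0) {
		vmcoredd_enabled = true;
		return 0;
	}
	if (strcmp(arg, "off") == 0) {
		vmcoredd_enabled = false;
		return 0;
	}
	return -1;
}
```

[With such a switch, vmcore_add_device_dump() would bail out early when vmcoredd_enabled is false, instead of accounting each dump against a byte limit the user cannot easily predict.]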