Message-Id: <20120117135921.3f68701e.akpm@linux-foundation.org>
Date: Tue, 17 Jan 2012 13:59:21 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Kees Cook <keescook@...omium.org>
Cc: Tony Luck <tony.luck@...el.com>,
Marco Stornelli <marco.stornelli@...il.com>,
Arnd Bergmann <arnd@...db.de>,
Greg Kroah-Hartman <greg@...ah.com>,
Randy Dunlap <rdunlap@...otime.net>, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4] ramoops: use pstore interface
On Sat, 7 Jan 2012 09:15:16 -0800
Kees Cook <keescook@...omium.org> wrote:
> Instead of using /dev/mem directly, use the common pstore infrastructure
> to handle Oops gathering and extraction.
um, why? This changelog provides no reason for anyone to apply the
patch!
>
> ...
>
> -static void ramoops_do_dump(struct kmsg_dumper *dumper,
> - enum kmsg_dump_reason reason, const char *s1, unsigned long l1,
> - const char *s2, unsigned long l2)
> +static int ramoops_pstore_open(struct pstore_info *psi)
> +{
> + struct ramoops_context *cxt = (struct ramoops_context *)psi->data;
Unneeded and undesirable cast of void*. Multiple instances of this.
(A cast-free sketch follows this hunk.)
> + cxt->read_count = 0;
> + return 0;
> +}
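In C, a void * converts implicitly to any object pointer type, so the
cast buys nothing and can hide real type errors later. A minimal sketch
of the cast-free form:

	static int ramoops_pstore_open(struct pstore_info *psi)
	{
		struct ramoops_context *cxt = psi->data; /* no cast needed */

		cxt->read_count = 0;
		return 0;
	}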
> +
> +static ssize_t ramoops_pstore_read(u64 *id, enum pstore_type_id *type,
> + struct timespec *time,
> + char **buf,
> + struct pstore_info *psi)
> +{
> + ssize_t size;
> + char *rambuf;
> + struct ramoops_context *cxt = (struct ramoops_context *)psi->data;
> +
> + if (cxt->read_count >= cxt->max_count)
> + return -EINVAL;
> + *id = cxt->read_count++;
> + /* Only supports dmesg output so far. */
> + *type = PSTORE_TYPE_DMESG;
> + /* TODO(kees): Bogus time for the moment. */
Is this hard to fix now? (One possible approach is sketched after this
hunk.)
> + time->tv_sec = 0;
> + time->tv_nsec = 0;
> +
> + rambuf = cxt->virt_addr + (*id * cxt->record_size);
> + size = strnlen(rambuf, cxt->record_size);
> + *buf = kmalloc(size, GFP_KERNEL);
> + if (*buf == NULL)
> + return -ENOMEM;
> + memcpy(*buf, rambuf, size);
> +
> + return size;
> +}
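On the bogus-time TODO: assuming each record really does start with
RAMOOPS_KERNMSG_HDR followed by "seconds.microseconds", as the
sprintf/do_gettimeofday sequence in the write path below suggests,
->read() could parse the timestamp back out instead of returning
zeroes. A sketch, to run after rambuf has been computed:

	unsigned long sec, usec;

	/* Sketch only: assumes the record begins with
	 * RAMOOPS_KERNMSG_HDR "secs.usecs", as written by the
	 * dump path. */
	if (sscanf(rambuf, RAMOOPS_KERNMSG_HDR "%lu.%lu", &sec, &usec) == 2) {
		time->tv_sec = sec;
		time->tv_nsec = usec * NSEC_PER_USEC;
	} else {
		time->tv_sec = 0;
		time->tv_nsec = 0;
	}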
Note that pstore_get_records() will treat the negative errno returns
from ->read() in the same manner as EOF. IOW, your error codes will be
dropped on the floor. This appears to be a bug in pstore_get_records().
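To illustrate: the enumeration loop in pstore is shaped roughly like
this (a paraphrase, not the exact fs/pstore source), so a negative
return terminates it exactly like a zero/EOF return and the errno is
never reported anywhere:

	/* hypothetical paraphrase of pstore_get_records() */
	while ((size = psi->read(&id, &type, &time, &buf, psi)) > 0) {
		pstore_mkfile(type, psi->name, id, buf, size, time, psi);
		kfree(buf);
	}
	/* size < 0 falls out of the loop silently here */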
> +static int ramoops_pstore_write(enum pstore_type_id type,
> + enum kmsg_dump_reason reason,
> + u64 *id,
> + unsigned int part,
> + size_t size, struct pstore_info *psi)
> {
> - struct ramoops_context *cxt = container_of(dumper,
> - struct ramoops_context, dump);
> - unsigned long s1_start, s2_start;
> - unsigned long l1_cpy, l2_cpy;
> - int res, hdr_size;
> - char *buf, *buf_orig;
> + char *buf;
> + size_t res;
> struct timeval timestamp;
> + struct ramoops_context *cxt = (struct ramoops_context *)psi->data;
> + size_t available = cxt->record_size;
>
> + /* Only store dmesg dumps. */
> + if (type != PSTORE_TYPE_DMESG)
> + return -EINVAL;
> +
> + /* Only store crash dumps. */
> if (reason != KMSG_DUMP_OOPS &&
> - reason != KMSG_DUMP_PANIC &&
> - reason != KMSG_DUMP_KEXEC)
> - return;
> + reason != KMSG_DUMP_PANIC)
> + return -EINVAL;
>
> /* Only dump oopses if dump_oops is set */
> if (reason == KMSG_DUMP_OOPS && !cxt->dump_oops)
The above three comments describe what the code does, which is
obvious. They fail to describe why it does this, which is
unobvious. Sigh.
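Something along these lines (hypothetical wording) would actually tell
the reader something:

	/*
	 * pstore may hand us other record types (e.g. MCE), but
	 * ramoops only knows how to lay out dmesg records in its
	 * buffer, so reject everything else.
	 */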
> - return;
> + return -EINVAL;
> +
> + /* Explicitly only take the first part of any new crash.
> + * If our buffer is larger than kmsg_bytes, this can never happen,
> + * and if our buffer is smaller than kmsg_bytes, we don't want the
> + * report split across multiple records. */
> + if (part != 1)
> + return -ENOSPC;
>
> buf = cxt->virt_addr + (cxt->count * cxt->record_size);
> - buf_orig = buf;
>
> - memset(buf, '\0', cxt->record_size);
> res = sprintf(buf, "%s", RAMOOPS_KERNMSG_HDR);
> buf += res;
> + available -= res;
> +
> do_gettimeofday(&timestamp);
>
> ...
>