Message-ID: <CAPDyKFqBWEdAzz0hjk7LhqX1D8qmOomHSS=Be+_vU=upxMr0aA@mail.gmail.com>
Date:   Mon, 23 Nov 2020 13:18:38 +0100
From:   Ulf Hansson <ulf.hansson@...aro.org>
To:     Bhaskara Budiredla <bbudiredla@...vell.com>
Cc:     Kees Cook <keescook@...omium.org>,
        Colin Cross <ccross@...roid.com>,
        Tony Luck <tony.luck@...el.com>,
        Sunil Kovvuri Goutham <sgoutham@...vell.com>,
        "linux-mmc@...r.kernel.org" <linux-mmc@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [EXT] Re: [PATCH v1 1/2] mmc: Support kmsg dumper based on pstore/blk

[...]

> >
> >As I said above, I would like to avoid host specific deployments from being
> >needed. Is there a way we can avoid this?
> >
>
> I don't see an alternative.

Well, if not, can you please explain why?

[...]

> >> +
> >> +void mmcpstore_card_set(struct mmc_card *card, const char *disk_name)
> >> +{
> >> +       struct mmcpstore_context *cxt = &oops_cxt;
> >> +       struct pstore_blk_config *conf = &cxt->conf;
> >> +       struct pstore_device_info *dev = &cxt->dev;
> >> +       struct block_device *bdev;
> >> +       struct mmc_command *stop;
> >> +       struct mmc_command *cmd;
> >> +       struct mmc_request *mrq;
> >> +       struct mmc_data *data;
> >> +       int ret;
> >> +
> >> +       if (!conf->device[0])
> >> +               return;
> >> +
> >> +       /* Multiple backend devices not allowed */
> >> +       if (cxt->dev_name[0])
> >> +               return;
> >> +
> >> +       bdev = mmcpstore_open_backend(conf->device);
> >> +       if (IS_ERR(bdev)) {
> >> +               pr_err("%s failed to open with %ld\n",
> >> +                               conf->device, PTR_ERR(bdev));
> >> +               return;
> >> +       }
> >> +
> >> +       bdevname(bdev, cxt->dev_name);
> >> +       cxt->partno = bdev->bd_part->partno;
> >> +       mmcpstore_close_backend(bdev);
> >> +
> >> +       if (strncmp(cxt->dev_name, disk_name, strlen(disk_name)))
> >> +               return;
> >> +
> >> +       cxt->start_sect = mmc_blk_get_part(card, cxt->partno, &cxt->size);
> >> +       if (!cxt->start_sect) {
> >> +               pr_err("Non-existent partition %d selected\n", cxt->partno);
> >> +               return;
> >> +       }
> >> +
> >> +       /* Check for host mmc panic write polling function definitions */
> >> +       if (!card->host->ops->req_cleanup_pending ||
> >> +                       !card->host->ops->req_completion_poll)
> >> +               return;
> >> +
> >> +       cxt->card = card;
> >> +
> >> +       cxt->sub = kmalloc(conf->kmsg_size, GFP_KERNEL);
> >> +       if (!cxt->sub)
> >> +               goto out;
> >> +
> >> +       mrq = kzalloc(sizeof(struct mmc_request), GFP_KERNEL);
> >> +       if (!mrq)
> >> +               goto free_sub;
> >> +
> >> +       cmd = kzalloc(sizeof(struct mmc_command), GFP_KERNEL);
> >> +       if (!cmd)
> >> +               goto free_mrq;
> >> +
> >> +       stop = kzalloc(sizeof(struct mmc_command), GFP_KERNEL);
> >> +       if (!stop)
> >> +               goto free_cmd;
> >> +
> >> +       data = kzalloc(sizeof(struct mmc_data), GFP_KERNEL);
> >> +       if (!data)
> >> +               goto free_stop;
> >> +
> >> +       mrq->cmd = cmd;
> >> +       mrq->data = data;
> >> +       mrq->stop = stop;
> >> +       cxt->mrq = mrq;
> >> +
> >> +       dev->total_size = cxt->size;
> >> +       dev->flags = PSTORE_FLAGS_DMESG;
> >> +       dev->read = mmcpstore_read;
> >> +       dev->write = mmcpstore_write;
> >> +       dev->erase = NULL;
> >> +       dev->panic_write = mmcpstore_panic_write;
> >> +
> >> +       ret = register_pstore_device(&cxt->dev);
> >
> >By looking at all of the code above, lots are duplicated from the mmc block
> >device implementation. Isn't there a way to make the pstore block device to
> >push a request through the regular blk-mq path instead?
> >
> The regular path has pre- and post-processing and locking semantics that
> are not suitable for the panic write scenario. Further, the locking mechanisms
> are implemented in the host drivers. This path is preferred so that the write
> can complete quickly before the kernel dies.

I am sorry, but this doesn't make sense to me.

When it comes to completing the data write, the regular block I/O path
is supposed to be optimized. If there is a problem with this path,
then we should fix it, rather than adding a new path along the side
(unless there are very good reasons not to).
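
A minimal sketch of what routing the non-panic pstore writes through the
regular block layer could look like, assuming the backend already holds an
opened block_device and a buffer as in the quoted mmcpstore_card_set();
pstore_regular_write() is an illustrative name, not code from the patch or a
concrete proposal from this thread:

/*
 * Illustration only: a non-panic pstore write submitted as an ordinary
 * bio, instead of hand-building struct mmc_request in the backend.
 * "bdev", "buf" and "start_sect" are assumed to come from the backend
 * context (e.g. the device opened in mmcpstore_card_set()).
 */
#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/mm.h>

static int pstore_regular_write(struct block_device *bdev, void *buf,
				size_t bytes, sector_t start_sect)
{
	struct bio *bio;
	int ret;

	/* Single-segment write; "buf" must not cross a page boundary. */
	bio = bio_alloc(GFP_KERNEL, 1);
	if (!bio)
		return -ENOMEM;

	bio_set_dev(bio, bdev);
	bio->bi_iter.bi_sector = start_sect;
	bio->bi_opf = REQ_OP_WRITE | REQ_SYNC;

	if (bio_add_page(bio, virt_to_page(buf), bytes,
			 offset_in_page(buf)) != bytes) {
		bio_put(bio);
		return -EIO;
	}

	/* Goes through blk-mq and the mmc block driver like any other I/O. */
	ret = submit_bio_wait(bio);
	bio_put(bio);
	return ret;
}

This only covers the normal read/write case; the panic-time write, which
cannot sleep or take the usual locks, is the part being debated above.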

>
> >That said, I wonder why you don't call register_pstore_blk(), as I thought that
> >was the interface to be used for regular block devices, no?
> >
> register_pstore_blk() is for arbitrary block devices for which best effort is not defined.

Exactly why isn't "best effort" good enough for mmc?

As there are no other users of register_pstore_blk(), it makes me wonder
when it should be used, then?
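
Loosely speaking, a "best effort" backend boils down to ordinary, sleeping
kernel file I/O on the block device node, with no dedicated panic_write path.
The sketch below only illustrates that idea; the helper name and the device
path are made up, and this is not the pstore/blk implementation:

/*
 * Illustration only: "best effort" style I/O on a block device node.
 * The device path is hypothetical.
 */
#include <linux/fs.h>

static ssize_t best_effort_write(const void *buf, size_t bytes, loff_t off)
{
	struct file *filp;
	ssize_t ret;

	filp = filp_open("/dev/mmcblk0p7", O_WRONLY | O_DSYNC, 0);
	if (IS_ERR(filp))
		return PTR_ERR(filp);

	/* Regular, sleeping I/O - of no use once the kernel is panicking. */
	ret = kernel_write(filp, buf, bytes, &off);
	filp_close(filp, NULL);
	return ret;
}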

[...]

> >> +
> >> +static void __exit mmcpstore_exit(void)
> >> +{
> >> +       struct mmcpstore_context *cxt = &oops_cxt;
> >> +
> >> +       unregister_pstore_device(&cxt->dev);
> >> +       kfree(cxt->mrq->data);
> >> +       kfree(cxt->mrq->stop);
> >> +       kfree(cxt->mrq->cmd);
> >> +       kfree(cxt->mrq);
> >> +       kfree(cxt->sub);
> >> +       cxt->card = NULL;
> >
> >Can we do this via mmc_blk_remove() instead?
> >
> The unregistration and freeing here are related to mmcpstore, nothing specific to the card.

I am not sure I understand. If a card that has been registered for
pstore is removed, then what should we do?

At least, it looks like a card removal will trigger a life cycle issue
for the allocated data structures. No?
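
One way to address that lifecycle concern would be a teardown hook that
mirrors the quoted mmcpstore_exit() and is invoked from the card-removal
path (e.g. from mmc_blk_remove(), as suggested above). The name
mmcpstore_card_clear() and its exact call site are assumptions, not part of
the posted patch:

/*
 * Sketch only: teardown on card removal, mirroring mmcpstore_exit().
 */
void mmcpstore_card_clear(struct mmc_card *card)
{
	struct mmcpstore_context *cxt = &oops_cxt;

	if (cxt->card != card)
		return;

	unregister_pstore_device(&cxt->dev);
	kfree(cxt->mrq->data);
	kfree(cxt->mrq->stop);
	kfree(cxt->mrq->cmd);
	kfree(cxt->mrq);
	cxt->mrq = NULL;
	kfree(cxt->sub);
	cxt->sub = NULL;
	cxt->card = NULL;
}

With the pstore device unregistered and the pointers cleared before the card
goes away, a later module unload or re-registration would not touch freed
memory.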

[...]

Kind regards
Uffe
