Message-ID: <CAE=gft4_M=9TsbHFEVQO-3BRrQsDia0Mwt0C9OpA6fMcnaDsrw@mail.gmail.com>
Date: Wed, 3 Aug 2022 13:48:08 -0700
From: Evan Green <evgreen@...omium.org>
To: Eric Biggers <ebiggers@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Matthew Garrett <mgarrett@...ora.tech>,
Daniil Lunev <dlunev@...gle.com>, zohar@...ux.ibm.com,
"James E.J. Bottomley" <jejb@...ux.ibm.com>,
linux-integrity@...r.kernel.org, Jonathan Corbet <corbet@....net>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Gwendal Grignou <gwendal@...omium.org>,
Jarkko Sakkinen <jarkko@...nel.org>,
Linux PM <linux-pm@...r.kernel.org>,
Len Brown <len.brown@...el.com>, Pavel Machek <pavel@....cz>,
"Rafael J. Wysocki" <rafael@...nel.org>
Subject: Re: [PATCH 08/10] PM: hibernate: Mix user key in encrypted hibernate
On Tue, Aug 2, 2022 at 3:48 PM Eric Biggers <ebiggers@...nel.org> wrote:
>
> On Wed, May 04, 2022 at 04:21:00PM -0700, Evan Green wrote:
> > +/*
> > + * Allow user mode to fold in key material for the data portion of the hibernate
> > + * image.
> > + */
> > +struct uswsusp_user_key {
> > + /* Kernel returns the metadata size. */
> > + __kernel_loff_t meta_size;
> > + __u32 key_len;
> > + __u8 key[16];
> > + __u32 pad;
> > +};
>
> Shouldn't the key field be 32 bytes?
>
Short answer: yes, it should; will fix. Long answer: I had hardcoded
the AEAD algorithm to "gcm(aes)" and was envisioning AES-128, but
widening the field to 32 bytes now, before the UAPI gets set in
stone, is the better idea.
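
Concretely, something like this is what I have in mind (rough sketch;
I still need to double-check the padding/layout before resending):

/*
 * Allow user mode to fold in key material for the data portion of the
 * hibernate image. Now sized for a 256-bit key.
 */
struct uswsusp_user_key {
        /* Kernel returns the metadata size. */
        __kernel_loff_t meta_size;
        __u32 key_len;
        __u8 key[32];
        __u32 pad;
};
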
> > +/* Derive a key from the kernel and user keys for data encryption. */
> > +static int snapshot_use_user_key(struct snapshot_data *data)
> > +{
> > + struct shash_desc *desc;
> > + u8 digest[SHA256_DIGEST_SIZE];
> > + struct trusted_key_payload *payload;
> > + struct crypto_shash *tfm;
> > + int ret;
> > +
> > + tfm = crypto_alloc_shash("sha256", 0, 0);
> > + if (IS_ERR(tfm)) {
> > + ret = -EINVAL;
> > + goto err_rel;
> > + }
> > +
> > + desc = kmalloc(sizeof(struct shash_desc) +
> > + crypto_shash_descsize(tfm), GFP_KERNEL);
> > + if (!desc) {
> > + ret = -ENOMEM;
> > + goto err_rel;
> > + }
> > +
> > + desc->tfm = tfm;
> > + ret = crypto_shash_init(desc);
> > + if (ret != 0)
> > + goto err_free;
> > +
> > + /*
> > + * Hash the kernel key and the user key together. This folds in the user
> > + * key, but not in a way that gives the user mode predictable control
> > + * over the key bits. Hash in all 32 bytes of the key even though only 16
> > + * are in active use as extra salt.
> > + */
> > + payload = data->key->payload.data[0];
> > + crypto_shash_update(desc, payload->key, MIN_KEY_SIZE);
> > + crypto_shash_update(desc, data->user_key, sizeof(data->user_key));
> > + crypto_shash_final(desc, digest);
> > + ret = crypto_aead_setkey(data->aead_tfm,
> > + digest,
> > + SNAPSHOT_ENCRYPTION_KEY_SIZE);
> > +
> > +err_free:
> > + kfree(desc);
> > +
> > +err_rel:
> > + crypto_free_shash(tfm);
> > + return ret;
> > +}
>
> Just select CRYPTO_LIB_SHA256, and you can use sha256_init/update/final which
> would be much simpler. Similarly with sha256_data() that is added by the next
> patch; you could just call sha256().
Good idea, will do. Thanks!
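
For concreteness, roughly what I have in mind (untested sketch,
reusing the names from the patch above, and assuming
CRYPTO_LIB_SHA256 is selected and <crypto/sha2.h> is included):

/* Derive a key from the kernel and user keys for data encryption. */
static int snapshot_use_user_key(struct snapshot_data *data)
{
        struct trusted_key_payload *payload = data->key->payload.data[0];
        u8 digest[SHA256_DIGEST_SIZE];
        struct sha256_state sctx;
        int ret;

        /*
         * Hash the kernel key and the user key together using the
         * sha256 library helpers; no shash tfm allocation needed.
         */
        sha256_init(&sctx);
        sha256_update(&sctx, payload->key, MIN_KEY_SIZE);
        sha256_update(&sctx, data->user_key, sizeof(data->user_key));
        sha256_final(&sctx, digest);

        ret = crypto_aead_setkey(data->aead_tfm, digest,
                                 SNAPSHOT_ENCRYPTION_KEY_SIZE);

        /* Don't leave key-derived bytes sitting on the stack. */
        memzero_explicit(digest, sizeof(digest));
        return ret;
}
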
>
> - Eric