Message-ID: <2efdad85-ba50-1246-e60b-eadbb82c88e6@quicinc.com>
Date: Wed, 22 Oct 2025 12:02:35 +0530
From: Md Sadre Alam <quic_mdalam@...cinc.com>
To: Eric Biggers <ebiggers@...nel.org>
CC: <adrian.hunter@...el.com>, <ulf.hansson@...aro.org>,
        <linux-arm-msm@...r.kernel.org>, <linux-mmc@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <quic_varada@...cinc.com>
Subject: Re: [PATCH v2] mmc: sdhci-msm: Enable ICE support for non-cmdq eMMC
 devices

Hi,

On 10/22/2025 11:14 AM, Eric Biggers wrote:
> On Wed, Oct 22, 2025 at 10:49:23AM +0530, Md Sadre Alam wrote:
>> Hi,
>>
>> On 10/17/2025 11:08 PM, Eric Biggers wrote:
>>> On Tue, Oct 14, 2025 at 03:05:03PM +0530, Md Sadre Alam wrote:
>>>> Enable Inline Crypto Engine (ICE) support for eMMC devices that operate
>>>> without Command Queue Engine (CQE). This allows hardware-accelerated
>>>> encryption and decryption for standard (non-CMDQ) requests.
>>>>
>>>> This patch:
>>>> - Adds ICE register definitions for non-CMDQ crypto configuration
>>>> - Implements a per-request crypto setup via sdhci_msm_ice_cfg()
>>>> - Hooks into the request path via mmc_host_ops.request
>>>> - Initializes ICE hardware during CQE setup for compatible platforms
>>>>
>>>> With this, non-CMDQ eMMC devices can benefit from inline encryption,
>>>> improving performance for encrypted I/O while maintaining compatibility
>>>> with existing CQE crypto support.
>>>>
>>>> Signed-off-by: Md Sadre Alam <quic_mdalam@...cinc.com>
>>>
>>> How was this tested?
>> I tested this using fscrypt on a Phison eMMC device. However, since that
>> particular eMMC does not support CMDQ, inline encryption (ICE) was bypassed
>> during testing.
> 
> What do you mean by "inline encryption (ICE) was bypassed during
> testing"?
By "inline encryption (ICE) was bypassed during testing," I meant that 
encryption was not working because ICE was only being enabled in the CQE 
request path (cqhci_request). For eMMC devices that do not support CMDQ, 
the mmc core sends requests via the legacy path (sdhci_request), where 
ICE was not being configured.
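
To make the intent concrete, here is a minimal sketch of the hook in the
legacy path, assuming sdhci_msm_ice_cfg() programs the ICE registers from the
request's crypto context (shown with the unused 'slot' parameter dropped, as
discussed below); the error handling is illustrative only:

static void sdhci_msm_request(struct mmc_host *mmc, struct mmc_request *mrq)
{
	struct sdhci_host *host = mmc_priv(mmc);

	/* Program the ICE crypto context for this request, if any */
	if (sdhci_msm_ice_cfg(host, mrq)) {
		mrq->cmd->error = -EILSEQ;
		mmc_request_done(mmc, mrq);
		return;
	}

	/* Hand the request off to the standard non-CMDQ path */
	sdhci_request(mmc, mrq);
}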
> 
>> +static int sdhci_msm_ice_cfg(struct sdhci_host *host, struct mmc_request *mrq,
>> +			     u32 slot)
> 
> Could you also remove the unused 'slot' parameter from this function?
Ok
> 
>>>> @@ -2185,6 +2241,18 @@ static int sdhci_msm_cqe_add_host(struct sdhci_host *host,
>>>>    	if (ret)
>>>>    		goto cleanup;
>>>> +	/* Initialize ICE for non-CMDQ eMMC devices */
>>>> +	config = sdhci_readl(host, HC_VENDOR_SPECIFIC_FUNC4);
>>>> +	config &= ~DISABLE_CRYPTO;
>>>> +	sdhci_writel(host, config, HC_VENDOR_SPECIFIC_FUNC4);
>>>> +	ice_cap = cqhci_readl(cq_host, CQHCI_CAP);
>>>> +	if (ice_cap & ICE_HCI_SUPPORT) {
>>>> +		config = cqhci_readl(cq_host, CQHCI_CFG);
>>>> +		config |= CRYPTO_GENERAL_ENABLE;
>>>> +		cqhci_writel(cq_host, config, CQHCI_CFG);
>>>> +	}
>>>> +	sdhci_msm_ice_enable(msm_host);
>>>
>>> This is after __sdhci_add_host() was called, which is probably too late.
>> Ok, I'll move the ICE initialization earlier in the probe flow, ideally
>> before __sdhci_add_host() is called.
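To make that concrete, the quoted hunk above would roughly move into a helper
that runs before __sdhci_add_host(); the helper name below is only a
placeholder, and the register writes mirror the hunk:

static void sdhci_msm_ice_init_noncq(struct sdhci_host *host,
				     struct cqhci_host *cq_host,
				     struct sdhci_msm_host *msm_host)
{
	u32 config, ice_cap;

	/* Clear the vendor-specific crypto-disable bit */
	config = sdhci_readl(host, HC_VENDOR_SPECIFIC_FUNC4);
	config &= ~DISABLE_CRYPTO;
	sdhci_writel(host, config, HC_VENDOR_SPECIFIC_FUNC4);

	/* Set the general crypto enable if ICE-HCI support is advertised */
	ice_cap = cqhci_readl(cq_host, CQHCI_CAP);
	if (ice_cap & ICE_HCI_SUPPORT) {
		config = cqhci_readl(cq_host, CQHCI_CFG);
		config |= CRYPTO_GENERAL_ENABLE;
		cqhci_writel(cq_host, config, CQHCI_CFG);
	}

	sdhci_msm_ice_enable(msm_host);
}

The call would then sit just before the existing __sdhci_add_host(host) in
sdhci_msm_cqe_add_host().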
>>>
>>>> +#ifdef CONFIG_MMC_CRYPTO
>>>> +	host->mmc_host_ops.request = sdhci_msm_request;
>>>> +#endif
>>>>    	/* Set the timeout value to max possible */
>>>>    	host->max_timeout_count = 0xF;
>>>
>>> A lot of the code in this patch also seems to actually run on
>>> CQE-capable hosts.  Can you explain?  Why is it needed?  Is there any
>>> change in behavior on them?
>> Thanks for raising this. You're right that some parts of the patch interact
>> with CQE-related structures, such as cqhci_host, even on CQE-capable hosts.
>> However, the intent is to reuse existing CQE infrastructure (like register
>> access helpers and memory-mapped regions) to configure ICE for non-CMDQ
>> requests.
>>
>> Importantly, actual CQE functionality is only enabled if the eMMC device
>> advertises CMDQ support. For devices without CMDQ, the CQE engine remains
>> disabled, and the request path falls back to standard non-CMDQ flow. In this
>> case, we're simply leveraging the CQE register space to program ICE
>> parameters.
>>
>> So while the code runs on CQE-capable hosts, there's no change in behavior
>> for CMDQ-enabled devices — the patch does not interfere with CQE operation.
>> It only enables ICE for non-CMDQ requests when supported by the platform.
> 
> So, we're dealing only with hosts that do support a command queue, but
> support eMMC devices either with or without using it?
There are two cases where ICE support is needed without CMDQ:

1) The eMMC device does not support CMDQ, but we still want to use ICE
   for encryption/decryption.

2) The host intentionally disables CMDQ, even if the eMMC device
   supports it, and wants to use ICE in the legacy (non-CMDQ) path.

This patch addresses the first case — enabling ICE for devices that lack 
CMDQ support. I'm currently working on the host-side logic to support 
the second case, and will submit that separately.

> 
> Could you explain why sdhci_msm_ice_enable() is called twice: once from
> sdhci_msm_cqe_add_host() and once from sdhci_msm_cqe_enable()?
Thanks for pointing this out. sdhci_msm_ice_enable() is called twice 
only when the eMMC device supports CMDQ — once during 
sdhci_msm_cqe_add_host() and again in sdhci_msm_cqe_enable(). For 
non-CMDQ devices, it is called only once.

Since the function primarily performs register configuration, the second 
call effectively reprograms the same values and has no functional side 
effects. That said, I’ll look into adding a condition to avoid redundant 
configuration when ICE is already enabled, to make the flow cleaner.
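
One possible shape for that, assuming a new (hypothetical) 'ice_configured'
flag in struct sdhci_msm_host, would be:

static void sdhci_msm_ice_enable_once(struct sdhci_msm_host *msm_host)
{
	/* Skip reprogramming if the ICE registers were already set up */
	if (msm_host->ice_configured)
		return;

	sdhci_msm_ice_enable(msm_host);
	msm_host->ice_configured = true;
}

The flag would also need clearing wherever the ICE configuration is lost
(e.g. across reset or resume), so treat this only as a sketch.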

Thanks,
Alam.
