Open Source and information security mailing list archives
 
Date:   Tue, 16 Jan 2018 15:00:57 -0500
From:   "Zi Yan" <zi.yan@...rutgers.edu>
To:     "Vinod Koul" <vinod.koul@...el.com>
Cc:     "Dan Williams" <dan.j.williams@...el.com>,
        dmaengine@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] dmaengine: avoid map_cnt overflow with
 CONFIG_DMA_ENGINE_RAID

On 12 Jan 2018, at 11:56, Vinod Koul wrote:

> On Mon, Jan 08, 2018 at 10:50:50AM -0500, Zi Yan wrote:
>> From: Zi Yan <zi.yan@...rutgers.edu>
>>
>> When CONFIG_DMA_ENGINE_RAID is enabled, the unmap pool size can reach
>> 256. But in struct dmaengine_unmap_data, map_cnt is only a u8, so it
>> wraps to 0 when the unmap pool is fully used. This triggers BUG() when
>> struct dmaengine_unmap_data is freed. Use u16 to fix the problem.
>>
>> Signed-off-by: Zi Yan <zi.yan@...rutgers.edu>
>> ---
>>  include/linux/dmaengine.h | 4 ++++
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
>> index f838764993eb..861be5cab1df 100644
>> --- a/include/linux/dmaengine.h
>> +++ b/include/linux/dmaengine.h
>> @@ -470,7 +470,11 @@ typedef void (*dma_async_tx_callback_result)(void *dma_async_param,
>>  				const struct dmaengine_result *result);
>>
>>  struct dmaengine_unmap_data {
>> +#if IS_ENABLED(CONFIG_DMA_ENGINE_RAID)
>> +	u16 map_cnt;
>> +#else
>>  	u8 map_cnt;
>> +#endif
>>  	u8 to_cnt;
>>  	u8 from_cnt;
>>  	u8 bidi_cnt;
>
> Would that cause adverse performance? The data structure is not aligned
> anymore. Dan, was that a consideration while adding this?
>

It will only add two more cache misses (one for mapping the data, the
other for unmapping it) per DMA engine operation, regardless of the data
size. And there is no impact on the actual DMA transfers. So the impact
should be minimal.

—
Best Regards,
Yan Zi
