Date:   Mon, 12 Jul 2021 19:10:14 +0800
From:   Leo Yan <leo.yan@...aro.org>
To:     Suzuki K Poulose <suzuki.poulose@....com>
Cc:     Mathieu Poirier <mathieu.poirier@...aro.org>,
        Mike Leach <mike.leach@...aro.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        coresight@...ts.linaro.org, linux-arm-kernel@...ts.infradead.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] coresight: tmc-etr: Correct memory sync ranges in SG
 mode

On Mon, Jul 12, 2021 at 11:25:54AM +0100, Suzuki Kuruppassery Poulose wrote:
> Hi Leo,
> 
> On 10/07/2021 08:02, Leo Yan wrote:
> > The current code syncs the buffer range [offset, offset+len); it
> > doesn't consider the case where the trace data has wrapped around,
> > in which case 'offset+len' is bigger than 'etr_buf->size'.  Thus it
> > syncs memory beyond the end of the buffer, and it also misses
> > syncing the pages at the start of the buffer.
> > 
> 
> I doubt this claim is valid. We do the sync properly, taking the page
> corresponding to the "offset" and wrapping it around via the page "index".
> 
> Here is the code :
> 
> 
> 
> void tmc_sg_table_sync_data_range(struct tmc_sg_table *table,
>                                   u64 offset, u64 size)
> {
>         int i, index, start;
>         int npages = DIV_ROUND_UP(size, PAGE_SIZE);
>         struct device *real_dev = table->dev->parent;
>         struct tmc_pages *data = &table->data_pages;
> 
>         start = offset >> PAGE_SHIFT;
>         for (i = start; i < (start + npages); i++) {
>                 index = i % data->nr_pages;
>                 dma_sync_single_for_cpu(real_dev, data->daddrs[index],
>                                         PAGE_SIZE, DMA_FROM_DEVICE);
>         }
> }
> 
> 
> See that npages accounts for the "size" requested, and we wrap
> "index" by the total number of pages in the buffer, so we always pick
> the right page.
> 
> So, I think this fix is not needed.

Ouch, you are right :)  Let's drop these two patches.
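
Just to double check my own understanding, here is a tiny standalone
sketch (made-up page count, offsets and helper names, not the driver
code) of why the modulo wrap already covers the wrapped case: a range
that starts near the end of the buffer comes back around to the head
pages.

/*
 * Hypothetical example, not kernel code: mimics the "i % nr_pages"
 * wrap in tmc_sg_table_sync_data_range() to show which page indices
 * get synced when offset + size runs past the end of the buffer.
 */
#include <stdio.h>

#define EX_PAGE_SHIFT	12			/* assume 4K pages */
#define EX_PAGE_SIZE	(1UL << EX_PAGE_SHIFT)

static void sync_range_indices(unsigned long nr_pages,
			       unsigned long offset, unsigned long size)
{
	unsigned long npages = (size + EX_PAGE_SIZE - 1) / EX_PAGE_SIZE;
	unsigned long start = offset >> EX_PAGE_SHIFT;
	unsigned long i;

	for (i = start; i < start + npages; i++)
		printf("sync page index %lu\n", i % nr_pages);
}

int main(void)
{
	/* 8-page buffer, 3-page range starting in page 6: wraps around */
	sync_range_indices(8, 6 * EX_PAGE_SIZE, 3 * EX_PAGE_SIZE);
	return 0;
}

This prints indices 6, 7 and 0, which matches the wrapped range.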

Thanks,
Leo
