Message-ID: <DB9PR04MB9284A45033B3E24F44C5AA3987F2A@DB9PR04MB9284.eurprd04.prod.outlook.com>
Date: Mon, 11 Sep 2023 06:13:09 +0000
From: Hui Fang <hui.fang@....com>
To: Tomasz Figa <tfiga@...omium.org>
CC: Anle Pan <anle.pan@....com>,
"m.szyprowski@...sung.com" <m.szyprowski@...sung.com>,
"mchehab@...nel.org" <mchehab@...nel.org>,
"linux-media@...r.kernel.org" <linux-media@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Jindong Yue <jindong.yue@....com>,
Xuegang Liu <xuegang.liu@....com>
Subject: RE: [EXT] Re: [PATCH] media: videobuf2-dma-sg: limit the sg segment size
On Wed, Sep 6, 2023 at 18:28 Tomasz Figa <tfiga@...omium.org> wrote:
> That all makes sense, but it still doesn't answer the real question on why
> swiotlb ends up being used. I think you may want to trace what happens in
> the DMA mapping ops implementation on your system causing it to use
> swiotlb.
I added logging and deliberately fed invalid data into the bounce buffer;
this confirms that swiotlb is actually used.
The log shows:
"[ 846.570271][ T138] software IO TLB: ==== swiotlb_bounce: DMA_TO_DEVICE,
dst 000000004589fa38, src 00000000c6d7e8d8, srcPhy 5504139264, size 4096".
"srcPhy 5504139264" is above 4 GiB (the 8mp board has over 5 GiB of DRAM),
and the kernel config has "CONFIG_ZONE_DMA32=y", so the static swiotlb is used.
Also, the host (Win10) side cannot get a valid image; since the corruption was
injected only on the swiotlb bounce path, this confirms the buffer really goes
through swiotlb.
The debug patch is below.
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
index 7f83a86e6810..de03704ce695 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
@@ -98,6 +98,7 @@ static int vb2_dma_sg_alloc_compacted(struct vb2_dma_sg_buf *buf,
return 0;
}
+bool g_v4l2 = false;
static void *vb2_dma_sg_alloc(struct vb2_buffer *vb, struct device *dev,
unsigned long size)
{
@@ -144,6 +145,7 @@ static void *vb2_dma_sg_alloc(struct vb2_buffer *vb, struct device *dev,
if (ret)
goto fail_table_alloc;
+ g_v4l2 = true;
+	pr_info("==== vb2_dma_sg_alloc, call sg_alloc_table_from_pages_segment, size %d, max_segment %d\n",
+		(int)size, (int)max_segment);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index dac01ace03a0..a2cda646a02f 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -523,6 +523,7 @@ static unsigned int swiotlb_align_offset(struct device *dev, u64 addr)
return addr & dma_get_min_align_mask(dev) & (IO_TLB_SIZE - 1);
}
+extern bool g_v4l2;
/*
* Bounce: copy the swiotlb buffer from or back to the original dma location
*/
@@ -591,8 +592,19 @@ static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size
}
} else if (dir == DMA_TO_DEVICE) {
memcpy(vaddr, phys_to_virt(orig_addr), size);
+ if (g_v4l2) {
+ static unsigned char val;
+ val++;
+ memset(vaddr, val, size);
+
+			pr_info("====xx %s: DMA_TO_DEVICE, dst %p, src %p, srcPhy %llu, size %zu\n",
+				__func__, vaddr, phys_to_virt(orig_addr),
+				(unsigned long long)orig_addr, size);
+ }
} else {
memcpy(phys_to_virt(orig_addr), vaddr, size);
}
}
BRs,
Fang Hui