Message-Id: <1270672170-31833-7-git-send-email-konrad.wilk@oracle.com>
Date: Wed, 7 Apr 2010 16:29:30 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To: FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
linux-kernel@...r.kernel.org, iommu@...ts.linux-foundation.org,
albert_herranz@...oo.es
Cc: linux@...elenboom.it, chrisw@...s-sol.org,
Ian.Campbell@...citrix.com, jeremy@...p.org, dwmw2@...radead.org,
alex.williamson@...com,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Subject: [PATCH 6/6] swiotlb: EXPORT_SYMBOL_GPL functions + variables that are defined in the header file.
Export the functions and variables that are now declared in the swiotlb.h
header file, so that GPL-licensed modules can link against them.
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
---
lib/swiotlb.c | 8 ++++++++
1 files changed, 8 insertions(+), 0 deletions(-)
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 5443ad5..48f1e94 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -58,12 +58,14 @@ int swiotlb_force;
*/
char *swiotlb_tbl_start;
static char *io_tlb_end;
+EXPORT_SYMBOL_GPL(swiotlb_tbl_start);
/*
* The number of IO TLB blocks (in groups of 64) between swiotlb_tbl_start and
* io_tlb_end. This is command line adjustable via setup_io_tlb_npages.
*/
unsigned long swiotlb_tbl_nslabs;
+EXPORT_SYMBOL_GPL(swiotlb_tbl_nslabs);
/*
* When the IOMMU overflows we return a fallback buffer. This sets the size.
@@ -313,6 +315,7 @@ int is_swiotlb_buffer(phys_addr_t paddr)
return paddr >= virt_to_phys(swiotlb_tbl_start) &&
paddr < virt_to_phys(io_tlb_end);
}
+EXPORT_SYMBOL_GPL(is_swiotlb_buffer);
/*
* Bounce: copy the swiotlb buffer back to the original dma location
@@ -354,6 +357,7 @@ void swiotlb_bounce(phys_addr_t phys, char *dma_addr, size_t size,
memcpy(phys_to_virt(phys), dma_addr, size);
}
}
+EXPORT_SYMBOL_GPL(swiotlb_bounce);
/*
* Allocates bounce buffer and returns its kernel virtual address.
@@ -461,6 +465,7 @@ found:
return dma_addr;
}
+EXPORT_SYMBOL_GPL(swiotlb_tbl_map_single);
/*
* dma_addr is the kernel virtual address of the bounce buffer to unmap.
@@ -506,6 +511,7 @@ swiotlb_tbl_unmap_single(struct device *hwdev, char *dma_addr, size_t size,
}
spin_unlock_irqrestore(&io_tlb_lock, flags);
}
+EXPORT_SYMBOL_GPL(swiotlb_tbl_unmap_single);
void
swiotlb_tbl_sync_single(struct device *hwdev, char *dma_addr, size_t size,
@@ -534,6 +540,7 @@ swiotlb_tbl_sync_single(struct device *hwdev, char *dma_addr, size_t size,
BUG();
}
}
+EXPORT_SYMBOL_GPL(swiotlb_tbl_sync_single);
void *
swiotlb_alloc_coherent(struct device *hwdev, size_t size,
@@ -626,6 +633,7 @@ swiotlb_full(struct device *dev, size_t size, enum dma_data_direction dir,
if (dir == DMA_TO_DEVICE)
panic("DMA: Random memory could be DMA read\n");
}
+EXPORT_SYMBOL_GPL(swiotlb_full);
/*
* Map a single buffer of the indicated size for DMA in streaming mode. The
--
1.6.2.5