Date:	Sat, 27 Feb 2010 11:59:47 -0600
From:	Robert Hancock <>
To:	David Miller <>
Subject: Re: [RFC PATCH] fix problems with NETIF_F_HIGHDMA in networking 

On Sat, Feb 27, 2010 at 3:53 AM, David Miller <> wrote:
> From: Robert Hancock <>
> Date: Fri, 26 Feb 2010 21:08:04 -0600
>> That seems like a reasonable approach to me. Only question is how to
>> implement the check for DMA_64BIT. Can we just check page_to_phys on
>> each of the pages in the skb to see if it's > 0xffffffff ? Are there
>> any architectures where it's more complicated than that?
> On almost every platform it's "more complicated than that".
> This is the whole issue.  What matters is the final DMA address and
> since we have IOMMUs and the like, it is absolutely not tenable to
> solve this by checking physical address attributes.

Yeah, physical address isn't quite right. There is a precedent for
such a check in the block layer though - look at
blk_queue_bounce_limit in block/blk-settings.c:

void blk_queue_bounce_limit(struct request_queue *q, u64 dma_mask)
{
        unsigned long b_pfn = dma_mask >> PAGE_SHIFT;
        int dma = 0;

        q->bounce_gfp = GFP_NOIO;
#if BITS_PER_LONG == 64
        /*
         * Assume anything <= 4GB can be handled by IOMMU.  Actually
         * some IOMMUs can handle everything, but I don't know of a
         * way to test this here.
         */
        if (b_pfn < (min_t(u64, 0xffffffffUL, BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
                dma = 1;
        q->limits.bounce_pfn = max_low_pfn;
#else
        if (b_pfn < blk_max_low_pfn)
                dma = 1;
        q->limits.bounce_pfn = b_pfn;
#endif
        if (dma) {
                q->bounce_gfp = GFP_NOIO | GFP_DMA;
                q->limits.bounce_pfn = b_pfn;
        }
}

and then in mm/bounce.c:

static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
                               mempool_t *pool)
{
        struct page *page;
        struct bio *bio = NULL;
        int i, rw = bio_data_dir(*bio_orig);
        struct bio_vec *to, *from;

        bio_for_each_segment(from, *bio_orig, i) {
                page = from->bv_page;

                /*
                 * is destination page below bounce pfn?
                 */
                if (page_to_pfn(page) <= queue_bounce_pfn(q))
                        continue;

Following that logic, then, it appears that page_to_pfn(page) >
(0xffffffff >> PAGE_SHIFT) should tell us what we want to know for the
DMA_64BIT flag... or am I missing something?