Message-ID: <246771217.25641247.1659424106574.JavaMail.zimbra@desy.de>
Date:   Tue, 2 Aug 2022 09:08:26 +0200 (CEST)
From:   "Petrosyan, Ludwig" <ludwig.petrosyan@...y.de>
To:     linux-kernel <linux-kernel@...r.kernel.org>,
        linux-pci <linux-pci@...r.kernel.org>
Subject: interesting effect of the get_user_pages and copy_to_user

Dear Linux kernel team

I am working on a PCIe device driver and need to transfer up to 20MB of data;
unfortunately the device has no scatter/gather controller, so I have to do it the usual way.

In the code it is done the following way (in short):

long ioctl_dma(struct file *filp, unsigned int cmd, unsigned long arg)
{
#define SG_MAX_ORDER 10
    size_t max_order_length = (2 << (SG_MAX_ORDER - 1)) * PAGE_SIZE;
    device_ioctrl_dma dma_data;
    size_t tmp_dma_size;
    unsigned long tmp_user_offset = 0;
    int nr_entries, i;
    void *pWriteBuf;
    dma_addr_t pTmpDmaHandle;

    pWriteBuf = (void *)__get_free_pages(GFP_KERNEL, SG_MAX_ORDER);
    pTmpDmaHandle = pci_map_single(pdev, pWriteBuf, max_order_length,
                                   PCI_DMA_FROMDEVICE);

    /* 1. get the user buffer and DMA parameters */
    if (copy_from_user(&dma_data, (void __user *)arg, sizeof(dma_data)))
        return -EFAULT;
    /* 2. get the DMA size from the user buffer */
    tmp_dma_size = dma_data.dma_size;
    /* 3. how many DMAs have to be done */
    nr_entries = tmp_dma_size / max_order_length;
    /* 4. transfer loop */
    for (i = 0; i < nr_entries; ++i) {
        /* make DMA: start the transfer and wait for completion */
        pci_dma_sync_single_for_cpu(pdev, pTmpDmaHandle,
                                    max_order_length, PCI_DMA_FROMDEVICE);
        copy_to_user((void __user *)(arg + tmp_user_offset),
                     pWriteBuf, max_order_length);
        pci_dma_sync_single_for_device(pdev, pTmpDmaHandle,
                                       max_order_length, PCI_DMA_FROMDEVICE);
        tmp_user_offset += max_order_length;
    }
    return 0;
}
This works fine and gives, in the user application, around 87ms for 20MB.
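For context, the user side drives this ioctl roughly as sketched below; the ioctl number, the device node, and the exact layout of device_ioctrl_dma are not shown in the snippet above, so the ones here are only placeholders:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* placeholder layout: the driver only needs dma_size at the start */
    typedef struct {
        unsigned int dma_size;          /* total transfer size in bytes */
        unsigned int dma_offset;
    } device_ioctrl_dma;

    #define DMA_SIZE      (20 * 1024 * 1024)                  /* 20MB as measured */
    #define MY_IOCTL_DMA  _IOWR('d', 1, device_ioctrl_dma)    /* placeholder */

    int main(void)
    {
        int fd = open("/dev/mypcie0", O_RDWR);                /* placeholder node */
        /* the driver copies the data back into the same buffer at
           increasing offsets, so it must hold the whole payload */
        char *buf = malloc(DMA_SIZE);
        device_ioctrl_dma *req = (device_ioctrl_dma *)buf;

        if (fd < 0 || !buf)
            return 1;
        req->dma_size = DMA_SIZE;
        if (ioctl(fd, MY_IOCTL_DMA, buf) < 0)
            perror("ioctl");
        free(buf);
        close(fd);
        return 0;
    }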
Then I just do:
get_user_pages((unsigned long)arg,     /* start of the user buffer */
               pDmaUnit->nr_pages,     /* length in pages */
               1,                      /* write: >0 --> write to user space */
               0,                      /* force: drivers should set 0 */
               pDmaUnit->pages,        /* filled with the pinned struct pages */
               NULL);                  /* vmas: not needed */

The DMA time drops to ~50ms (was 87ms).
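The pinned pages of course have to be released again once the data has been copied out; a minimal sketch, assuming pDmaUnit->pages and pDmaUnit->nr_pages are as in the call above:

    /* release pages pinned by get_user_pages() after the transfer */
    static void release_user_pages(struct page **pages, int nr_pages)
    {
        int i;

        for (i = 0; i < nr_pages; i++) {
            set_page_dirty_lock(pages[i]);  /* the pages were written to */
            put_page(pages[i]);             /* drop the get_user_pages() reference */
        }
    }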

regards

Ludwig
