Date:   Mon, 18 Sep 2017 20:40:05 +0530
From:   Shivasharan Srikanteshwara 
To:     Christoph Hellwig <>
Cc:     Kashyap Desai <>,
        Sumit Saxena <>,
        "PDL,MEGARAIDLINUX" <>
Subject: RE: [PATCH V2] megaraid: kmemleak: Track page allocation for fusion

> -----Original Message-----
> From: Christoph Hellwig []
> Sent: Friday, September 15, 2017 11:30 PM
> To:
> Cc:
> Subject: Re: [PATCH V2] megaraid: kmemleak: Track page allocation for
> fusion
>
> I think the megaraid fusion code has a deeper problem here.
> Instead of playing weird games with get_free_pages and vmalloc, the
> structure just needs to shrink by moving all the arrays of
> MAX_MSIX_QUEUES_FUSION size into a separate allocation for each, and
> then we have normal kmalloc allocations.

Hi Christoph,
We understand your suggestion: shrink fusion_context so that it can be
allocated with kmalloc.
The fusion_context structure is currently about 179 KB, contributed
almost entirely by the log_to_span array (~176 KB); the remaining
arrays do not contribute significantly to the size.
We will send a new patch that moves the log_to_span allocation out of
fusion_context.
So this patch is a NACK for now.

crash> struct -o fusion_context
struct fusion_context {
       [0] struct megasas_cmd_fusion **cmd_list;
       [8] dma_addr_t req_frames_desc_phys;
      [16] u8 *req_frames_desc;
      [24] struct dma_pool *io_request_frames_pool;
      [32] dma_addr_t io_request_frames_phys;
      [40] u8 *io_request_frames;
      [48] struct dma_pool *sg_dma_pool;
      [56] struct dma_pool *sense_dma_pool;
      [64] dma_addr_t reply_frames_desc_phys[128];
    [1088] union MPI2_REPLY_DESCRIPTORS_UNION *reply_frames_desc[128];
    [2112] struct dma_pool *reply_frames_desc_pool;
    [2120] u16 last_reply_idx[128];
    [2376] u32 reply_q_depth;
    [2380] u32 request_alloc_sz;
    [2384] u32 reply_alloc_sz;
    [2388] u32 io_frames_alloc_sz;
    [2392] struct MPI2_IOC_INIT_RDPQ_ARRAY_ENTRY *rdpq_virt;
    [2400] dma_addr_t rdpq_phys;
    [2408] u16 max_sge_in_main_msg;
    [2410] u16 max_sge_in_chain;
    [2412] u8 chain_offset_io_request;
    [2413] u8 chain_offset_mfi_pthru;
    [2416] struct MR_FW_RAID_MAP_DYNAMIC *ld_map[2];
    [2432] dma_addr_t ld_map_phys[2];
    [2448] struct MR_DRV_RAID_MAP_ALL *ld_drv_map[2];
    [2464] u32 max_map_sz;
    [2468] u32 current_map_sz;
    [2472] u32 old_map_sz;
    [2476] u32 new_map_sz;
    [2480] u32 drv_map_sz;
    [2484] u32 drv_map_pages;
    [2488] struct MR_PD_CFG_SEQ_NUM_SYNC *pd_seq_sync[2];
    [2504] dma_addr_t pd_seq_phys[2];
    [2520] u8 fast_path_io;
    [2528] struct LD_LOAD_BALANCE_INFO *load_balance_info;
    [2536] u32 load_balance_info_pages;
    [2544] LD_SPAN_INFO log_to_span[256];
  [182768] u8 adapter_type;
  [182776] struct LD_STREAM_DETECT **stream_detect_by_ld; }
SIZE: 182784

