Date:	Sat, 28 Nov 2009 10:47:38 -0500
From:	Michael Breuer <mbreuer@...jas.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Thomas Gleixner <tglx@...utronix.de>,
	Len Brown <lenb@...nel.org>,
	Arjan van de Ven <arjan@...radead.org>,
	Arnaldo Carvalho de Melo <acme@...hat.com>,
	linux-kernel@...r.kernel.org
Subject: Re: Problem? intel_iommu=off; perf top shows acpi_os_read_port as extremely busy

OK - I did the following in runlevel 3 to avoid the DMAR errors I'm
getting with nouveau & VT-d.
In theory, the system was similarly loaded (i.e., doing pretty much
nothing) for both runs.
The sample is consistent with what I've seen previously.

Perhaps there's no issue, or perhaps the issue is with my broken BIOS
and intel_iommu=on.
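
(For the record, both snapshots were captured roughly like this - the
perf top options are just the defaults; the perf stat line is the one
Ingo asked for:

	perf top                            # sampling defaults, all 8 CPUs
	perf stat -a --repeat 10 sleep 1    # system-wide, averaged over 10 runs
)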

Perf top with intel_iommu=off (snapshot) - acpi_os_read_port is often
#1, and I've seen it over 30%.
------------------------------------------------------------------------------
   PerfTop:    3957 irqs/sec  kernel:84.0% [100000 cycles],  (all, 8 CPUs)
------------------------------------------------------------------------------

             samples    pcnt   kernel function
             _______   _____   _______________

             3183.00 - 16.7% : _spin_lock
             3167.00 - 16.7% : acpi_os_read_port
             1053.00 -  5.5% : io_apic_modify_irq
              810.00 -  4.3% : hpet_next_event
              529.00 -  2.8% : _spin_lock_irqsave
              522.00 -  2.7% : io_apic_sync
              283.00 -  1.5% : tg_shares_up
              270.00 -  1.4% : acpi_idle_enter_bm
              259.00 -  1.4% : irq_to_desc
              222.00 -  1.2% : i8042_interrupt
              213.00 -  1.1% : acpi_hw_validate_io_request
              204.00 -  1.1% : ktime_get
              180.00 -  0.9% : find_busiest_group
              169.00 -  0.9% : _spin_unlock_irqrestore
              168.00 -  0.9% : sub_preempt_count

 Performance counter stats for 'sleep 1' (10 runs):

    8021.581362  task-clock-msecs         #      8.009 CPUs    ( +-   0.033% )
            607  context-switches         #      0.000 M/sec   ( +-   4.251% )
             27  CPU-migrations           #      0.000 M/sec   ( +-  11.455% )
            408  page-faults              #      0.000 M/sec   ( +-  34.557% )
      311405638  cycles                   #     38.821 M/sec   ( +-   6.887% )
       85807775  instructions             #      0.276 IPC     ( +-  13.824% )
        2300079  cache-references         #      0.287 M/sec   ( +-   6.859% )
          77314  cache-misses             #      0.010 M/sec   ( +-  11.184% )

    1.001616593  seconds time elapsed   ( +-   0.009% )

Perf top with intel_iommu=on:
------------------------------------------------------------------------------
   PerfTop:    9941 irqs/sec  kernel:81.9% [100000 cycles],  (all, 8 CPUs)
------------------------------------------------------------------------------

             samples    pcnt   kernel function
             _______   _____   _______________

            11465.00 - 20.8% : _spin_lock
             3679.00 -  6.7% : io_apic_modify_irq
             3295.00 -  6.0% : hpet_next_event
             2172.00 -  3.9% : _spin_lock_irqsave
             2111.00 -  3.8% : acpi_os_read_port
             1094.00 -  2.0% : io_apic_sync
              904.00 -  1.6% : find_busiest_group
              695.00 -  1.3% : _spin_unlock_irqrestore
              686.00 -  1.2% : tg_shares_up
              620.00 -  1.1% : acpi_idle_enter_bm
              577.00 -  1.0% : add_preempt_count
              568.00 -  1.0% : sub_preempt_count
              475.00 -  0.9% : audit_filter_syscall
              470.00 -  0.9% : schedule
              450.00 -  0.8% : tick_nohz_stop_sched_tick

 Performance counter stats for 'sleep 1' (10 runs):

    8015.967731  task-clock-msecs         #      8.003 CPUs    ( +-   0.024% )
           2628  context-switches         #      0.000 M/sec   ( +-  20.053% )
            124  CPU-migrations           #      0.000 M/sec   ( +-  20.561% )
           3014  page-faults              #      0.000 M/sec   ( +-  35.573% )
      850702031  cycles                   #    106.126 M/sec   ( +-  10.601% )
      311032631  instructions             #      0.366 IPC     ( +-  17.859% )
        8578386  cache-references         #      1.070 M/sec   ( +-  13.894% )
         333768  cache-misses             #      0.042 M/sec   ( +-  21.894% )

    1.001656333  seconds time elapsed   ( +-   0.008% )
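
(Reading the two stat blocks together - my arithmetic, for whatever
it's worth: with intel_iommu=on the nominally idle box burns
850702031 / 311405638 ~= 2.7x the cycles and does 2628 vs. 607
context switches in the same one-second window, so the "on" run
really is busier overall, not just differently sampled.)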


Ingo Molnar wrote:
> * Michael Breuer <mbreuer@...jas.com> wrote:
>
>   
>> Having given up for now on VT-d, I rebooted 2.6.32-rc8 with 
>> intel_iommu=off. Whilst my myriad of broken BIOS issues cleared, I now 
>> see in perf top acpi_os_read_port as continually the busiest function. 
>> With intel_iommu enabled, _spin_lock was always on top, and nothing 
>> else was notable.
>>
>> This seems odd to me, perhaps this will make sense to someone else.
>>
>> FWIW, I'm running on an Asus p6t deluxe v2; ht enabled; no errors or 
>> oddities in dmesg or /var/log/messages.
>>     
>
> Could you post the perf top output please?
>
> Also, could you post the output of:
>
> 	perf stat -a --repeat 10 sleep 1
>
> this will show us how idle the system is. (My guess is that your system 
> is idle and perf top shows acpi_os_read_port because the system goes to 
> idle via ACPI methods and PIO is slow. In that case all is nominal and 
> your system is fine. But it's hard to tell without more details.)
>
> Thanks,
>
> 	Ingo
>   
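
For anyone hitting this thread later: a minimal user-space sketch of
the effect Ingo describes. This is not the kernel's ACPI code - port
0x80 is just the traditional diagnostic port and the loop count is
arbitrary - but it shows how a slow PIO read soaks up profiler samples
on an otherwise idle machine:

	/* pio-stall.c - run as root on x86, then watch "perf top" in
	 * another terminal: the inb() dominates the samples even though
	 * the process does no useful work, much like acpi_os_read_port
	 * does when the idle path enters a C-state through a SYSTEM_IO
	 * register. */
	#include <stdio.h>
	#include <sys/io.h>		/* iopl(), inb(); glibc, x86 only */

	int main(void)
	{
		if (iopl(3)) {		/* need I/O privilege for inb() */
			perror("iopl");
			return 1;
		}
		for (long i = 0; i < 10 * 1000 * 1000; i++)
			(void)inb(0x80);	/* each PIO read stalls ~1us */
		return 0;
	}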
