Message-Id: <200801151057.39164.ak@suse.de>
Date:	Tue, 15 Jan 2008 10:57:39 +0100
From:	Andi Kleen <ak@...e.de>
To:	"Jan Beulich" <jbeulich@...ell.com>
Cc:	mingo@...e.hu, tglx@...utronix.de, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] [12/31] CPA: CLFLUSH support in change_page_attr()

On Tuesday 15 January 2008 09:40:27 Jan Beulich wrote:
> >-	/* clflush is still broken. Disable for now. */
> >-	if (1 || !cpu_has_clflush)
> >+	if (a->full_flush)
> > 		asm volatile("wbinvd" ::: "memory");
> >-	else list_for_each_entry(pg, l, lru) {
> >-		void *adr = page_address(pg);
> >-		clflush_cache_range(adr, PAGE_SIZE);
> >+	list_for_each_entry(f, &a->l, l) {
> >+		if (!a->full_flush)
> 
> This if() looks redundant (could also be avoided in the 32-bit variant, but
> isn't redundant there at present). Also, is there no
> wbinvd() on 64bit?

That's all done in a later patch.

The intermediate transformation steps are not always ideal, but I think
the code is fine in the end.

-Andi

For reference, here is what the series ends up with for the 32-bit flush_kernel_map():

 static void flush_kernel_map(void *arg)
 {
-       struct list_head *lh = (struct list_head *)arg;
-       struct page *p;
+       struct flush_arg *a = (struct flush_arg *)arg;
+       struct flush *f;
+       int cache_flush = a->full_flush == FLUSH_CACHE;
+
+       list_for_each_entry(f, &a->l, l) {
+               if (!a->full_flush)
+                       __flush_tlb_one(f->addr);
+               if (f->mode == FLUSH_CACHE && !cpu_has_ss) {
+                       if (cpu_has_clflush)
+                               clflush_cache_range((void *)f->addr, PAGE_SIZE);
+                       else
+                               cache_flush++;
+               }
+       }
 
-       /* High level code is not ready for clflush yet */
-       if (0 && cpu_has_clflush) {
-               list_for_each_entry (p, lh, lru)
-                       cache_flush_page(p);
-       } else if (boot_cpu_data.x86_model >= 4)
-               wbinvd();
+       if (a->full_flush)
+               __flush_tlb_all();
 
-       /* Flush all to work around Errata in early athlons regarding 
-        * large page flushing. 
+       /*
+        * RED-PEN: Intel documentation asks for a CPU synchronization step
+        * here and in the loop, but it is moot on Self-Snoop CPUs anyway.
         */
-       __flush_tlb_all();      
+
+       if (cache_flush > 0 && !cpu_has_ss && boot_cpu_data.x86_model >= 4)
+               wbinvd();
 }
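The structure definitions behind the new flush argument are not part of this
excerpt. Judging purely from the fields used above (a->full_flush, a->l,
f->addr, f->mode, f->l), they presumably look roughly like the sketch below;
any name or value not visible in the diff is a guess, not the patch's actual
definition:

	struct flush {
		struct list_head l;	/* linked into flush_arg.l */
		unsigned long addr;	/* address whose mapping was changed */
		int mode;		/* what this entry needs, e.g. FLUSH_CACHE */
	};

	struct flush_arg {
		int full_flush;		/* 0 = per-page TLB flushes; nonzero = full
					   TLB flush; FLUSH_CACHE = also flush caches */
		struct list_head l;	/* list of struct flush entries */
	};

With that layout, the cache_flush counter above simply records how many
entries wanted a cache flush but could not use clflush, so a single wbinvd()
at the end covers them all.

For the per-range path, clflush_cache_range() flushes one cache line at a
time. A rough sketch of that idea (line size taken from
boot_cpu_data.x86_clflush_size, alignment and barrier details simplified, so
not necessarily the kernel's exact implementation):

	static void clflush_cache_range(void *addr, int size)
	{
		char *p = addr;
		char *end = p + size;
		int line = boot_cpu_data.x86_clflush_size;

		mb();			/* order against earlier stores to the range */
		for (; p < end; p += line)
			clflush(p);	/* flush one cache line back to memory */
		mb();			/* make sure the flushes complete before returning */
	}

A real implementation would also round addr down to a cache-line boundary so
the first partial line is not missed.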

