[rhelv6-list] Why is clear_page_c called so many times?

Kirby Zhou kirbyzhou at sogou-inc.com
Mon May 9 04:40:34 UTC 2011


Why is clear_page_c called so many times?

vmstat shows that RHEL5 spends less than 1% of CPU time in system mode,
while RHEL6 spends about 2% running the same test case.
The oprofile reports show that RHEL6 calls clear_page_c where RHEL5
calls clear_page.
This makes RHEL6 a bit slower than RHEL5.
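For background, and to make the question concrete: clear_page / clear_page_c
is the kernel zeroing a freshly faulted anonymous page, so any workload that
keeps touching new mappings exercises this path. A minimal sketch that should
light up those symbols in oprofile (this is not the randio source; sizes and
iteration counts are arbitrary):

/* Not the randio source; a minimal sketch of a workload that ends up in
 * clear_page / clear_page_c: every first write to a fresh anonymous page
 * makes the kernel zero that page.
 * Build: gcc -O2 fault_pages.c -o fault_pages */
#define _GNU_SOURCE
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
    const size_t len = 256UL << 20;   /* 256 MiB per round, arbitrary */
    size_t off;
    int round;

    for (round = 0; round < 16; round++) {
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;
        for (off = 0; off < len; off += 4096)
            p[off] = 1;    /* first write faults the page in; the kernel zeroes it */
        munmap(p, len);    /* throw the pages away and fault a fresh set */
    }
    return 0;
}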

RHEL6:
[@djt_10_47 ~]# opreport -l 2>/dev/null| head -n 30
CPU: Core 2, speed 1995.28 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (Unhalted core cycles) count 100000
samples  %        image name               app name                 symbol name
19487513 52.3916  randio                   randio                   int* std::merge<int*, int*, int*>(int*, int*, int*, int*, int*)
5763239  15.4943  randio                   randio                   int* std::__unguarded_partition<int*, int>(int*, int*, int)
2545018   6.8422  randio                   randio                   void std::__unguarded_linear_insert<int*, int>(int*, int)
2217630   5.9620  libc-2.12.so             libc-2.12.so             _wordcopy_fwd_aligned
1660154   4.4633  libc-2.12.so             libc-2.12.so             memmove
1270987   3.4170  randio                   randio                   void std::__insertion_sort<int*>(int*, int*)
843683    2.2682  randio                   randio                   do_cpu(TestCaseDesc const&, iorequest const*, void*, void*, int)
781289    2.1005  libc-2.12.so             libc-2.12.so             memcpy
587168    1.5786  randio                   randio                   void std::__introsort_loop<int*, long>(int*, int*, long)
423389    1.1383  randio                   randio                   void std::__final_insertion_sort<int*>(int*, int*)
====229786    0.6178  vmlinux                  vmlinux                  clear_page_c
192168    0.5166  randio                   randio                   void std::__merge_sort_loop<int*, int*, long>(int*, int*, int*, long)
130771    0.3516  randio                   randio                   void std::__chunk_insertion_sort<int*, long>(int*, int*, long)
106343    0.2859  libc-2.12.so             libc-2.12.so             _wordcopy_bwd_aligned
74251     0.1996  vmlinux                  vmlinux                  rb_get_reader_page
59752     0.1606  oprofiled                oprofiled                /usr/bin/oprofiled
57802     0.1554  vmlinux                  vmlinux                  page_fault
49277     0.1325  vmlinux                  vmlinux                  __alloc_pages_nodemask
38598     0.1038  vmlinux                  vmlinux                  ring_buffer_consume
32242     0.0867  oprofile                 oprofile                 /oprofile
25325     0.0681  vmlinux                  vmlinux                  __mem_cgroup_commit_charge
23605     0.0635  vmlinux                  vmlinux                  unmap_vmas
21904     0.0589  libc-2.12.so             libc-2.12.so             _wordcopy_fwd_dest_aligned
21577     0.0580  vmlinux                  vmlinux                  __mem_cgroup_uncharge_common
19284     0.0518  vmlinux                  vmlinux                  list_del
17314     0.0465  vmlinux                  vmlinux                  get_page_from_freelist
14563     0.0392  vmlinux                  vmlinux                  copy_user_generic_string
[@djt_10_47]# (opreport -l | fgrep clear_page) 2> /dev/null 
229701    0.6180  vmlinux                  vmlinux                  clear_page_c
161      4.3e-04  vmlinux                  vmlinux                  clear_page
18       4.8e-05  vmlinux                  vmlinux                  test_clear_page_writeback
10       2.7e-05  vmlinux                  vmlinux                  clear_page_dirty_for_io
[@djt_10_47 ~]# uname -a
Linux djt_10_47 2.6.32-71.24.1.el6.x86_64 #1 SMP Sat Mar 26 16:05:19 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux


RHEL5:
[@djt_10_48 ~]# opreport -l 2>/dev/null | head -n 30
CPU: CPU with timer interrupt, speed 0 MHz (estimated)
Profiling through timer interrupt
samples  %        app name                 symbol name
20661    54.4657  randio                   int* std::merge<int*, int*, int*>(int*, int*, int*, int*, int*)
6089     16.0516  randio                   int* std::__unguarded_partition<int*, int>(int*, int*, int)
2716      7.1598  randio                   void std::__unguarded_linear_insert<int*, int>(int*, int)
1670      4.4024  libc-2.5.so              memmove
1428      3.7644  randio                   void std::__insertion_sort<int*>(int*, int*)
1372      3.6168  libc-2.5.so              _wordcopy_fwd_aligned
940       2.4780  randio                   do_cpu(TestCaseDesc const&, iorequest const*, void*, void*, int)
867       2.2855  libc-2.5.so              memcpy
646       1.7030  randio                   void std::__introsort_loop<int*, long>(int*, int*, long)
446       1.1757  randio                   void std::__final_insertion_sort<int*>(int*, int*)
214       0.5641  randio                   void std::__merge_sort_loop<int*, int*, long>(int*, int*, int*, long)
187       0.4930  randio                   .plt
130       0.3427  randio                   void std::__chunk_insertion_sort<int*, long>(int*, int*, long)
129       0.3401  libc-2.5.so              _wordcopy_bwd_aligned
====128       0.3374  vmlinux                  clear_page
31        0.0817  libc-2.5.so              _wordcopy_fwd_dest_aligned
29        0.0764  vmlinux                  do_page_fault
17        0.0448  vmlinux                  get_page_from_freelist
16        0.0422  bash                     /bin/bash
13        0.0343  vmlinux                  __handle_mm_fault
12        0.0316  bnx2                     /bnx2
11        0.0290  vmlinux                  unmap_vmas
10        0.0264  vmlinux                  __pagevec_lru_add_active
8         0.0211  ip_conntrack             /ip_conntrack
8         0.0211  vmlinux                  release_pages
6         0.0158  vmlinux                  free_hot_cold_page
5         0.0132  libc-2.5.so              _int_free
[@djt_10_48 ~]# (opreport -l | fgrep clear_page) 2> /dev/null 
128       0.3374  vmlinux                  clear_page 
[@djt_10_48 ~]# uname -a
Linux djt_10_48 2.6.18-238.9.1.el5 #1 SMP Fri Mar 18 12:42:39 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
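
For what it is worth, my reading of arch/x86/lib/clear_page_64.S in the
2.6.32 sources (an assumption on my part, not something the profiles show)
is that clear_page gets patched at boot into the rep stosq variant
clear_page_c, through the alternatives mechanism, on CPUs that advertise
rep_good in /proc/cpuinfo; that would explain why the samples land on
clear_page_c on el6 but on clear_page on el5. A userspace sketch of the two
strategies, in case someone wants to compare them on the same box (the
function names are mine, not the kernel's):

/* Userspace analogue of the two kernel page-clearing strategies.
 * clear_page_c_like() mimics clear_page_c (rep stosq);
 * clear_page_like() mimics an unrolled 64-bit store loop like clear_page.
 * x86-64 only. Build: gcc -O2 clear_bench.c -o clear_bench -lrt
 * (a newer compiler may rewrite the plain loop as a memset call; check
 * the disassembly if the numbers look suspicious) */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static char page[4096] __attribute__((aligned(4096)));

static void clear_page_c_like(void *dst)
{
    long cnt = 4096 / 8;
    /* store rax (0) to [rdi], rcx times, 8 bytes per store */
    __asm__ __volatile__("rep stosq"
                         : "+D"(dst), "+c"(cnt)
                         : "a"(0UL)
                         : "memory");
}

static void clear_page_like(void *dst)
{
    uint64_t *p = dst;
    int i;
    for (i = 0; i < 4096 / 8; i += 8) {    /* 64 bytes per iteration */
        p[i]     = 0; p[i + 1] = 0; p[i + 2] = 0; p[i + 3] = 0;
        p[i + 4] = 0; p[i + 5] = 0; p[i + 6] = 0; p[i + 7] = 0;
    }
}

static double bench(void (*clear)(void *))
{
    struct timespec t0, t1;
    int i;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < 1000000; i++)          /* ~4 GB of stores total */
        clear(page);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    printf("rep stosq (clear_page_c): %.3f s\n", bench(clear_page_c_like));
    printf("unrolled  (clear_page)  : %.3f s\n", bench(clear_page_like));
    return 0;
}

I would not be surprised if the plain store loop wins on a Core 2 for
cache-hot pages, which would match the small slowdown seen here.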

Regards,
   Kirby Zhou    
   from   SOHU-RD   +86-10-6272-8261
