Solaris and Java contiguous memory clarification needed

Background:

We have a vendor-supplied Java application that has a somewhat large Java heap. There is not much information available; the application is a black box for us, but we need to try to tune its performance and solve the problem ourselves.

The 64-bit Solaris 10 machine has 16 GB of RAM, and the only significant non-system application running on it is this application's JVM. It is a 64-bit JVM running in JBoss, which I don't believe is relevant to this discussion, with a maximum heap size of 8 GB, which I do think is relevant.

The problem we have recently been running into is various out-of-memory errors. When these errors occur, the heap is not full, and the error message asks "Out of swap space?". The vendor wants us to increase swap from 2 GB to 4 GB, on a 16 GB system with an application that is only 8 GB. We don't think that is a good idea for performance.
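On Solaris, an "Out of swap space?" error from the JVM usually means a virtual-memory *reservation* failed, not that the heap or physical RAM is exhausted: anonymous memory must be reserved against free RAM plus swap at allocation time. A back-of-the-envelope sketch of how that can happen (the kernel/file-cache figure below is an assumed illustration, not a measurement from this box):

```python
# Illustrative Solaris swap-reservation arithmetic, in GB.
# The kernel + file-cache figure is an assumption for the sketch.
ram = 16.0
swap = 2.0

kernel_and_file_cache = 10.0   # RAM currently held by kernel + ZFS ARC (assumed)
reservable = (ram - kernel_and_file_cache) + swap   # what anon mappings can still reserve

jvm_demand = 8.0 + 1.0         # 8 GB heap + ~1 GB non-heap overhead (assumed)

# The reservation fails even though the heap itself is not full:
print(reservable, jvm_demand)
assert jvm_demand > reservable
```

This is why the error can appear while `vmstat` still shows free memory: the failure is in reserving virtual memory, not in allocating physical pages.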

My question:

One thing we have found is that the file cache uses up all of the remaining available memory to improve performance. That is normally not a problem, but it apparently fragments memory. Since the HotSpot JVM requires contiguous memory space, we have learned that this memory fragmentation can lead to using the non-fragmented swap space instead.
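One way to see how much of that memory the file cache is actually holding is to query the ZFS ARC counters. These are Solaris-specific commands (shown as a sketch; they won't run elsewhere), using the kstat names exposed by the `zfs` module on Solaris 10:

```shell
# Current ZFS ARC size, in bytes:
kstat -p zfs:0:arcstats:size

# The ceiling the ARC is currently willing to grow to:
kstat -p zfs:0:arcstats:c_max
```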

However, I'm not sure I understand the relationship between fragmentation and the contiguous-memory requirement. Surely the fragmentation refers to fragmentation of physical RAM. With virtual memory, you can allocate a contiguous block of memory without it being backed by a contiguous block of physical RAM. In other words, non-contiguous blocks of physical memory appear to a running process as one contiguous block of virtual memory.
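That distinction can be demonstrated in a few lines: an anonymous `mmap` hands back one contiguous range of *virtual* addresses, and the kernel backs it with whatever physical pages it likes (lazily, as they are touched). A minimal sketch:

```python
import mmap

# One contiguous range of virtual addresses; the kernel may back it
# with scattered physical pages, or none at all until they are touched.
SIZE = 64 * 1024 * 1024
buf = mmap.mmap(-1, SIZE)

# Touch pages scattered across the range; every offset is valid because
# the *virtual* range is contiguous, whatever the physical layout is.
offsets = (0, 10 * 1024 * 1024, SIZE - 1)
for off in offsets:
    buf[off] = 0xAB

values = [buf[off] for off in offsets]
print(values)
buf.close()
```

The process never observes physical fragmentation here, which is exactly why the "contiguous memory" requirement is normally a statement about the virtual address space.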

So I suppose there isn't really a question in there, but can anyone who knows more about this topic chime in? Any links that discuss this contiguous-memory issue on 64-bit systems?

What have I found so far:

So far, every reference to the "contiguous memory" issue I have found relates to the layout of the virtual address space on 32-bit systems. Since we are running a 64-bit system (with, I believe, 48-bit addressing), there is plenty of virtual address space in which to allocate large contiguous blocks.

I have been searching the Internet for this information, but so far I haven't been able to find what I am looking for.

Update:

> For clarity: rather than trying to get an answer to why I am getting the OOM errors, I am trying to understand the relationship between possibly fragmented system RAM and the contiguous block of virtual memory that Java requires.

> prstat -Z

ZONEID    NPROC  SWAP   RSS MEMORY      TIME  CPU ZONE  
     0       75 4270M 3855M    24%  92:24:03 0.3% global

> echo "::memstat" | mdb -k

Page Summary                Pages                MB  %Tot    
------------     ----------------  ----------------  ----  
Kernel                     326177              2548   16%  
ZFS File Data              980558              7660   48%  
Anon                       561287              4385   27%  
Exec and libs               12196                95    1%  
Page cache                  17849               139    1%  
Free (cachelist)             4023                31    0%  
Free (freelist)            156064              1219    8%  

Total                     2058154             16079  
Physical                  2042090             15953
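
The categories in that table account for the whole machine, and the figures can be checked directly. A small sketch of the arithmetic (assuming the 8 KiB page size of SPARC Solaris, which is what the Pages/MB columns imply):

```python
# ::memstat page counts from the output above.
pages = {
    "Kernel":           326177,
    "ZFS File Data":    980558,
    "Anon":             561287,
    "Exec and libs":     12196,
    "Page cache":        17849,
    "Free (cachelist)":   4023,
    "Free (freelist)":  156064,
}

total = sum(pages.values())
print(total)                                  # matches the reported Total

page_kib = 8                                  # 8 KiB pages (SPARC Solaris)
mb = total * page_kib // 1024                 # matches the reported 16079 MB

zfs_pct = round(pages["ZFS File Data"] / total * 100)
print(mb, zfs_pct)                            # ARC holds ~48% of RAM
```

So nearly half the machine's RAM is held by ZFS file data (the ARC), while the free lists hold only ~8%.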

> I had thought that the ZFS file data was freely available memory. I have since learned that this is not the case, and it may well be the cause of the errors.

> vmstat 5 5

kthr      memory            page            disk          faults      cpu  
r b w   swap  free  re  mf pi po fr de sr vc vc vc --   in   sy   cs us sy id  
0 0 0 2161320 2831768 12 55 0  0  0  0  0  3  4 -0  0 1089 1320 1048  1  1 98  
0 0 0 819720 1505856 0  14  0  0  0  0  0  4  0  0  0 1189  748 1307  1  0 99  
0 0 0 819456 1505648 0   1  0  0  0  0  0  0  0  0  0 1024  729 1108  0  0 99  
0 0 0 819456 1505648 0   1  0  0  0  0  0  0  0  0  0  879  648  899  0  0 99  
0 0 0 819416 1505608 0   1  0  0  0  0  0  0  3  0  0 1000  688 1055  0  0 99

> These command outputs were taken while the application was running in a healthy state. We are now monitoring all of the above and logging it in case we see the swap-space errors again.

> The following is from after the JVM had grown to 8 GB and was then restarted. The effect of that is that the ZFS ARC has shrunk (down to 26% of RAM) until it grows back again. How do things look now?

> vmstat 5 5

kthr      memory            page            disk          faults      cpu
r b w   swap  free  re  mf pi po fr de sr vc vc -- --   in   sy   cs us sy id
0 0 0 1372568 2749528 11 41 0  0  0  0  0  2  3  0  0  713  418  539  0  0 99
0 0 0 3836576 4648888 140 228 0 0 0  0  0  0  0  0  0 1178 5344 1117  3  2 95
0 0 0 3840448 4653744 16 45 0  0  0  0  0  0  0  0  0 1070 1013  952  1  3 96
0 0 0 3839168 4652720 6 53  0  0  0  0  0  0  0  0  0  564  575  313  0  6 93
0 0 0 3840208 4653752 7 68  0  0  0  0  0  3  0  0  0 1284 1014 1264  1  1 98

> swap -s

Total: 4341344k bytes allocated + 675384k reserved = 5016728k used, 3840880k available
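
Those `swap -s` figures add up as reported, and they show why reservations matter: "used" includes space that is merely *reserved* (promised to processes but not yet touched), so virtual memory can run out while free lists still look healthy. A quick check over the numbers above:

```python
# swap -s figures, in KiB.
allocated = 4341344
reserved  =  675384

used = allocated + reserved
print(used)                     # matches the reported "used" value

available = 3840880
total_virtual = used + available
print(total_virtual / 1024**2)  # total configured virtual swap, in GiB
```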

Solution

When an error message suggests that the swap space might not be large enough, I usually trust it and increase the swap size significantly.

I suggest you do that first: go up to 4 GB or even 8 GB and see what happens. Enlarging the swap has no impact on performance; that is a common misconception. Performance suffers from a lack of RAM, not from an oversized swap area.
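For reference, a sketch of how swap is typically grown on Solaris 10 (the paths, pool and dataset names are examples, not taken from this system):

```shell
# On UFS: create a 2 GB swap file and activate it.
mkfile 2g /export/swapfile
swap -a /export/swapfile

# On a ZFS root, use a zvol instead of a swap file:
zfs create -V 2G rpool/swap2
swap -a /dev/zvol/dsk/rpool/swap2

# Confirm the new total:
swap -l
```

To make either change persistent across reboots, add the corresponding entry to /etc/vfstab.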

Only if the problem is still present after that change would I start investigating alternative tracks, such as memory fragmentation.

Edit:

From your memstat, prstat and vmstat output, it is obvious that your system is short on virtual memory. There is no need at all to investigate more exotic causes such as memory fragmentation. You have more free RAM (~1.5 GB) than free virtual memory (~800 MB), which means there are a lot of pending (not yet used) memory reservations. Again, just adding some swap space will fix it. Since you have plenty of RAM, this will not have any performance impact.

Edit (part 2):

Now that we know you are using ZFS: since your application can use up to 8 GB (or even more if we count non-heap memory), you should reduce the maximum ARC size so those 8 GB are immediately available to the JVM, rather than relying on the OS's ongoing self-tuning, which at the moment may be confused by the undersized swap. For details on how to do this, see the "Limiting the ARC Cache" chapter of the ZFS Evil Tuning Guide.
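Concretely, the ARC cap is set via `zfs_arc_max` in /etc/system, as the guide describes; the 4 GB value below is an illustrative choice for a box like this, not a recommendation from the guide, and a reboot is required for it to take effect:

```
* /etc/system fragment: cap the ZFS ARC at 4 GB (0x100000000 bytes).
* Illustrative value; size it to leave room for the JVM's full footprint.
set zfs:zfs_arc_max = 0x100000000
```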
