Best practices for dealing with memory leaks in large Java projects?
In almost all of the larger Java projects I've worked on, I've noticed that the quality of service of the application degrades as the uptime of the container grows. This is most likely due to a memory leak in the code.
The correct way to solve this problem is obviously to trace it back to its root cause and fix the leaks in the code. The quick and dirty way is to restart Tomcat (or whichever servlet container you're using).
Here are my three questions:
> 1. Suppose you chose to track down the root cause of the problem (the memory leak): how would you collect data to zero in on the problem?
> 2. Suppose you chose the quick and dirty way and simply restart the container periodically: how would you collect data to choose the best restart cycle?
> 3. Can you deploy and run a project for a long time without restarting the servlet container to reclaim memory? Or is the occasional servlet container restart something one must simply accept?
Solution
Take a heap dump using jmap and load the dump into the Eclipse Memory Analyzer. From there, you can analyze which objects are eating the most memory, which "roots" prevent other objects from being collected, and so on.
There are other heap analyzers, such as jhat, but I find the Eclipse Memory Analyzer the fastest and best (free) solution.
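From the command line, the dump is typically captured with `jmap -dump:live,format=b,file=heap.hprof <pid>`. As a sketch, the same dump can also be triggered from inside the application via the HotSpot diagnostic MXBean (HotSpot JVMs only); the file name here is just an example:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    // Writes a binary .hprof heap dump to the given path.
    // liveOnly=true forces a full GC first, so only reachable objects appear.
    public static void dump(String path, boolean liveOnly) throws Exception {
        HotSpotDiagnosticMXBean bean =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, liveOnly); // fails if the file already exists
    }

    public static void main(String[] args) throws Exception {
        dump("leak-investigation.hprof", true);
        System.out.println("dumped");
    }
}
```

The resulting `.hprof` file is exactly what the Eclipse Memory Analyzer expects to open.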
Use JMX to monitor the heap size and other heap and GC statistics.
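The same statistics you would watch over remote JMX are exposed in-process through the platform MXBeans; a minimal sketch of reading heap usage and per-collector GC counts:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapMonitor {
    public static void main(String[] args) {
        // Current heap usage: a steadily rising "used after GC" trend
        // over days of uptime is the classic signature of a leak.
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        System.out.printf("heap used=%d MB committed=%d MB max=%d MB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);

        // Cumulative GC activity per collector since JVM start.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: collections=%d time=%d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Logging these numbers periodically (or graphing them from a JMX console such as JConsole or VisualVM) also gives you the data needed to pick a restart cycle, if you go the quick-and-dirty route.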
Yes. By avoiding/fixing memory leaks.
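One common leak pattern worth knowing: a long-lived static collection that is only ever added to keeps everything in it reachable for the lifetime of the classloader, so the GC can never reclaim it. A hypothetical sketch (the `SessionRegistry` name and byte-array payloads are illustrative, not from the original answer):

```java
import java.util.HashMap;
import java.util.Map;

public class SessionRegistry {
    // Leak pattern: entries added here stay strongly reachable forever
    // unless something explicitly removes them.
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    public static void put(String key, byte[] value) {
        CACHE.put(key, value);
    }

    // Fix: pair every put with a remove on session end (or use a bounded
    // or weak-reference cache) so stale entries become collectable.
    public static void remove(String key) {
        CACHE.remove(key);
    }

    public static int size() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        put("session-1", new byte[1024]);
        remove("session-1"); // without this line, the entry lives forever
        System.out.println("entries=" + size()); // prints entries=0
    }
}
```

A heap dump of a leaking application will usually show exactly this shape: one collection dominating the retained size, with a static field as its GC root.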