I got a few reports of Dashman running out of memory on some systems, so I set out to investigate why, and I found the memory usage of Dashman growing over time. Using JProfiler, it looked like this:

In the memory lane, green is the memory the Java process has available and blue is the memory it's actually using. Whenever usage got close to the maximum, the garbage collector would run and Java would request more memory from the system. So the next time usage approached the limit, there was more memory available, and thus the amount of memory given to the Java process kept growing over time.

There were two improvements I made. One was using a thread pool instead of starting threads willy-nilly. This was actually very easy, especially since I had centralized the creation of new threads in Dashman. I rewrote this:

public static void run(Runnable runnable) {
    new Thread(runnable).start();
}

as this:

// Initialized eagerly so concurrent callers always share one pool;
// a lazy null-check here would be a race if run() is called from two threads.
private static final ExecutorService threadPool = Executors.newCachedThreadPool();

public static void run(Runnable runnable) {
    threadPool.submit(runnable);
}
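The point of the cached pool is that it hands new tasks to idle worker threads instead of creating a fresh thread each time (workers idle for 60 seconds are discarded). A small standalone sketch, not Dashman code, illustrates the reuse:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CachedPoolDemo {
    // Runs two tasks back to back and reports whether the same pool
    // thread executed both.
    static boolean sameWorker() throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();
        try {
            String first = pool.submit(() -> Thread.currentThread().getName()).get();
            Thread.sleep(100); // give the worker time to return to waiting for work
            String second = pool.submit(() -> Thread.currentThread().getName()).get();
            return first.equals(second);
        } finally {
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sameWorker()
                ? "cached pool reused its idle worker"
                : "cached pool created a second worker");
    }
}
```

With the short pause between submissions, the second task typically lands on the same worker thread as the first, which is exactly the thread-churn saving the pool provides over `new Thread(runnable).start()`.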

The second change, and the most significant, comes from knowing how my application uses memory. When creating a new screenshot, Dashman generates a large bitmap of the screen, then encrypting it generates other large blobs of memory, and finally it uploads the result. Once a screenshot is uploaded, none of that data is needed anymore, and there may be a pause in processing until the next screenshot is due. That's an excellent opportunity to run the garbage collector, so I added this simple line at that point:

System.gc();
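In context, the sender's cycle looks roughly like this. The names and sizes below are illustrative stand-ins, not Dashman's actual API:

```java
public class SenderCycleSketch {
    // Hypothetical stand-ins for Dashman's real steps; sizes are arbitrary.
    static byte[] captureScreen() { return new byte[8 * 1024 * 1024]; } // large bitmap
    static byte[] encrypt(byte[] bitmap) { return bitmap.clone(); }     // another large blob
    static void upload(byte[] blob) { /* network call in the real app */ }

    public static void sendOnce() {
        byte[] bitmap = captureScreen();
        byte[] encrypted = encrypt(bitmap);
        upload(encrypted);
        // Both blobs are garbage from here on, and nothing happens until the
        // next screenshot is due, so this is a cheap moment to collect.
        bitmap = null;
        encrypted = null;
        System.gc();
    }

    public static void main(String[] args) {
        sendOnce();
        System.out.println("cycle complete");
    }
}
```

Worth noting: `System.gc()` is only a hint; the JVM may ignore it entirely (for example with `-XX:+DisableExplicitGC`). It just happens to be a hint given at a moment when a lot of memory is known to be dead and nothing is latency-sensitive.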

When retrieving a screenshot there's a similar process. It's not as memory intensive, but the screenshot also needs decrypting. Once the screenshot is shown, the displayer does nothing for some time, maybe a minute or more, so that's also a perfect time to run the garbage collector, using that same line.
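A quick way to check whether such an explicit collection actually reclaims anything, without reaching for JProfiler, is to compare heap usage around the call. A standalone sketch, not Dashman code, assuming the JVM honours the `System.gc()` hint:

```java
public class GcEffectSketch {
    static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    // Returns {used bytes while blobs are live, used bytes after gc}.
    static long[] measure() {
        byte[][] blobs = new byte[64][];
        for (int i = 0; i < blobs.length; i++) {
            blobs[i] = new byte[1 << 20]; // allocate ~64 MB of short-lived data
        }
        long during = usedBytes();
        blobs = null;   // drop the only references
        System.gc();    // hint; most JVMs run a full collection here
        long after = usedBytes();
        return new long[] { during, after };
    }

    public static void main(String[] args) {
        long[] m = measure();
        System.out.printf("during=%d MB, after gc=%d MB%n", m[0] >> 20, m[1] >> 20);
    }
}
```

On a default HotSpot configuration the "after" number drops by roughly the size of the discarded blobs, which is the same flattening effect visible in the profiler graph.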

The result looks like this:

That, in my humble opinion, looks much better: memory stays roughly constant over time. I'm still interested in further improvements to memory usage, but rendering web sites and encrypting large binary blobs are both memory-intensive operations, so I'm not sure how much more can be gained.

Both tests were performed under exactly the same conditions. I started Dashman with this configuration of sites:

Setting it to 10 seconds per screenshot put Dashman through quite a few cycles, about 21, in the 3.5 minutes of testing. I ran the displayer from Dashman itself, so both were in the same process, and I maximized the displayer to fill my 2560 x 1440 monitor to generate the most work possible. I was using my local test server, so I didn't properly test the effects of network delays during this exercise.