The default GLASS setup starts a ‘maintenance gem’ that performs two tasks: (1) expiring Seaside sessions and (2) performing repository-wide garbage collection (‘mark for collection’, or MFC). The maintenance gem is configured to use up to 200 MB for temporary object space, and with its other memory allocations, such as for persistent objects, its total memory usage can be twice that amount. It remains logged in continually, expiring sessions every minute and performing an MFC every hour.
The original no-cost license allowed up to a 1 GB shared page cache (SPC) and up to 4 GB for the total repository size. In a busy system, where at least 25% of the object space could be in memory (and each Gem had adequate memory), doing an hourly MFC was important (to avoid accumulating excess garbage) and not too expensive.
In some situations, however, the maintenance gem may be causing excess overhead. If you are running GLASS in the “cloud” (e.g., on SliceHost or some other virtual server), then the cost of RAM may place a significant constraint on your SPC. Also, if your system is not used heavily, then there might not be enough garbage (primarily from expired sessions) to justify frequent MFCs. Finally, since the no-cost license now allows for unlimited repository size, the consequence of running out of space is not so dire.
This is important because MFC is a comparatively heavyweight operation and can take a lot of time. On some larger customer databases it can take many days, and the user experience often suffers. The size of the database itself is not the most important factor; more important is the percentage of the database that fits in the SPC. If you are running in the cloud with 512 MB of RAM and are allowed only 1/8th of a CPU, then a 2 GB database with a 256 MB SPC could see a significant decrease in performance during an MFC.
To determine the necessary MFC frequency, take a look at the maintenance gem log. It shows the number of sessions expired each minute and the number of possible dead objects found by each MFC. Go through the log and add up the possible dead over a 24-hour period. Next, go to the admingcgem log and look for an entry labeled “Starting doSweepWsUnion” with a timestamp shortly after the MFC completed; it should show a “PD size” approximately equal to the number reported by the MFC. A few lines down will be a couple of entries showing counts of objects removed from the possibleDead. Just before the next “Starting doSweepWsUnion” will be a final (and generally lower) “possible dead size”. Total these final sizes to get an estimate of the useful work done by the MFCs.
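The tallying step can be scripted. This is only a sketch: the log file name and the exact wording of the log lines are assumptions on my part, so adjust the pattern to match what your admingcgem log actually prints.

```shell
#!/bin/sh
# Sum the final "possible dead size" figures from an admingcgem log to
# estimate the useful work done by MFCs over the period the log covers.
# Assumptions: the log path (first argument) and the phrase
# "possible dead size" followed by a trailing number -- verify both
# against your own logs before trusting the total.
LOG=${1:-admingcgem.log}

awk '/possible dead size/ {
         total += $NF + 0       # last field on the line is the count
     }
     END {
         printf "total possible dead: %d objects\n", total
     }' "$LOG"
```

If the total is small relative to your repository, that is a hint that hourly MFCs are doing little useful work.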
In one case, hourly MFCs took about 10 minutes each, and over the course of a day the repository garbage collection process found 1.5 million dead objects. At about 120 bytes per object, this is less than 200 MB. In this case it was worthwhile to switch to a daily MFC, run during off-peak hours when it would not affect the users.
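The back-of-the-envelope arithmetic, using the figures above (the 120-byte average object size is the assumption to revisit for your own data):

```shell
# 1.5 million dead objects per day at roughly 120 bytes each
objects_per_day=1500000
bytes_per_object=120
echo "$(( objects_per_day * bytes_per_object / 1024 / 1024 )) MB of garbage per day"   # ~171 MB
```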
Implementing this change required editing $GEMSTONE/seaside/bin/runSeasideGems[30] to comment out the lines that start the maintenance gem. In its place, you can create a new script, modeled on $GEMSTONE/seaside/bin/startMaintenance[30], that does not loop endlessly expiring sessions and doing MFCs, but instead does the work once and exits. You can then set up a cron job to call your new script.
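A sketch of what such a one-shot script might look like. Everything here should be checked against your installation: the topaz login parameters, the Smalltalk expressions (copy the ones your version's startMaintenance actually runs), and the paths in the crontab line are all placeholders.

```shell
#!/bin/sh
# One-shot maintenance sketch: log in, expire sessions (and optionally
# run an MFC), commit, then log out -- no endless loop. Modeled on
# $GEMSTONE/seaside/bin/startMaintenance; the topaz input below is a
# placeholder, so copy the real expressions from that script.
#
# Invoke from cron during off-peak hours, e.g. (paths hypothetical):
#   0 2 * * * /opt/gemstone/bin/maintenance-once.sh >> /var/log/maintenance-once.log 2>&1
topaz -l <<'EOF'
set user DataCurator password swordfish gemstone seaside
login
run
"Expire sessions once; add an MFC expression here if desired."
WABasicDevelopment reapSeasideCache.
System commitTransaction.
%
logout
exit
EOF
```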
I realize that I’m not providing all the details of the new setup here. The goal is more to provide an exploration of alternatives.
5 comments
October 5, 2011 at 10:53 am
Dale Henrichs
James,
I would be concerned about not expiring sessions on a regular basis, so I’m not sure that it is a good idea to stop running the maintenance vm altogether, but perhaps you aren’t recommending to stop doing session expiration … it’s not clear.
You don’t mention which version you are using, but when using Seaside3.0, the class WAGemStoneMaintenanceTask is used to control what tasks are performed by the maintenance vm and it is relatively easy to change the schedule for tasks or even remove them (see http://code.google.com/p/glassdb/wiki/MaintenanceVMTasks).
You are absolutely correct that the schedule for MFC should be changed for a production installation. Because of Issue 136 (http://code.google.com/p/glassdb/issues/detail?id=136) it is a good idea to recycle the MFC on a regular basis, so breaking the MFC out into a separate task is a good idea …
Again
October 5, 2011 at 11:04 am
James Foster
Yes, sessions do need to be expired sometime. My suggestion was a variation on the startMaintenance script that does a login, expires sessions (and possibly an MFC), and then a logout; and call that script periodically from a cron job. The frequency could be every 15 minutes, every hour, or something else based on analysis of the number of sessions expired.
October 5, 2011 at 11:35 am
Dale Henrichs
Is there a reason that you don’t mention going with epoch gc? It would seem that if you are going to change things one should consider running epoch gcs on a regular basis (instead of an MFC) to reap most if not all of the session garbage and then running MFC relatively infrequently to clean up business garbage…
October 5, 2011 at 11:50 am
James Foster
Epoch GC is certainly something to consider as an alternative to MFC. The reason I didn’t discuss it here is because the post was getting long and in the case that prompted the discussion a nightly MFC seemed to be sufficient. Epoch GC is sufficiently complex that another blog post would probably be appropriate.
October 8, 2011 at 2:58 pm
Norbert Hartl
Nice to read this article. I changed the maintenance script a long time ago to do a one-shot operation. I divided the script into a session-expiry part and an MFC part, because I like to run them at different intervals.
I think Dale's approach of configuring from the Smalltalk side what the maintenance should do is a good one.
I'm doing the exact opposite: I have a directory in each stone directory where I can throw in snippets that are executed on the stone on a frequent basis. In the next release of the stone-creator utility I'd like to integrate some of this.
And thanks for mentioning epoch GC, Dale. I'll definitely have a look, because I think it can make for a smooth experience.