This adds two settings to bukkit.yml, allowing activation and control of
two chunk garbage collection triggering conditions:
chunk-gc/period-in-ticks controls a periodic GC, run once every N ticks
(default is 600); chunk-gc/load-threshold causes the GC to run once
after every N calls to loadChunk() on a given world (an API call used
by plugins, distinct from the path taken for routine player-movement
loading). In both cases, setting the value to zero disables that GC
scheduling strategy.
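For reference, a bukkit.yml fragment enabling both strategies might
look like this (the load-threshold value here is purely illustrative):

    chunk-gc:
      period-in-ticks: 600  # periodic GC every 600 ticks (the default); 0 disables
      load-threshold: 300   # GC after every 300 loadChunk() calls; 0 disables
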
In either case, the GC itself simply scans the loaded chunks, finds
those that are NOT in use by one or more players (per view-distance)
and not already queued for unload, and queues them for a normal unload.
The unload is then processed the same as if the chunk had left the
view-distance range of all players, so the impact on plugins should be
no different (and strategies such as handling the ChunkUnloadEvent in
order to prevent unload will still work).
The initial interval for the periodic GC is randomized on a per-world
basis, in order to avoid all worlds being GCed at the same time and to
minimize potential lag spikes.
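The following is a minimal, self-contained sketch of that logic; the
type and method names (ChunkLike, isUsedByPlayer, queueUnload) are
illustrative stand-ins, not the actual CraftBukkit code:

    import java.util.Random;

    class ChunkGCSketch {
        interface ChunkLike {
            boolean isUsedByPlayer();     // within some player's view distance?
            boolean isQueuedForUnload();  // already scheduled to unload?
            void queueUnload();           // normal unload path; ChunkUnloadEvent fires
        }

        private final Random random = new Random();
        private long nextGCTick;

        ChunkGCSketch(long currentTick, int periodInTicks) {
            // Randomize the first run per world so all worlds are not
            // GCed on the same tick.
            this.nextGCTick = periodInTicks > 0
                    ? currentTick + 1 + random.nextInt(periodInTicks)
                    : Long.MAX_VALUE; // 0 disables the periodic strategy
        }

        void tick(long currentTick, int periodInTicks, Iterable<ChunkLike> loadedChunks) {
            if (periodInTicks <= 0 || currentTick < nextGCTick) {
                return;
            }
            nextGCTick = currentTick + periodInTicks;
            for (ChunkLike chunk : loadedChunks) {
                // Queue a normal unload for chunks no player is using;
                // plugins can still cancel the resulting ChunkUnloadEvent.
                if (!chunk.isUsedByPlayer() && !chunk.isQueuedForUnload()) {
                    chunk.queueUnload();
                }
            }
        }
    }
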
Unlike every other block, skulls need their tile entity in order to
create the correct item when broken. Instead of sprinkling special
cases all over the code, just override dropNaturally for skulls to read
from their tile entity, and make sure everything that wants to drop
them calls this method before removing the block. There was only one
case where this wasn't already true, so we end up with much less
special casing.
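A hedged sketch of the override, using simplified stand-ins for the
NMS classes (names and signatures are assumptions for illustration
only):

    // Simplified stand-ins, not the actual NMS types.
    class TileEntitySkull { int skullType; }

    class World {
        TileEntitySkull getTileEntity(int x, int y, int z) { return null; /* stub */ }
        void dropItem(int typeId, int data) { /* stub */ }
    }

    class Block {
        void dropNaturally(World world, int x, int y, int z, int data) {
            world.dropItem(0 /* item id stub */, data);
        }
    }

    class BlockSkull extends Block {
        @Override
        void dropNaturally(World world, int x, int y, int z, int data) {
            TileEntitySkull tile = world.getTileEntity(x, y, z);
            if (tile != null) {
                // For skulls the block data is rotation; the real drop
                // value lives in the tile entity, so read it from there.
                data = tile.skullType;
            }
            super.dropNaturally(world, x, y, z, data);
        }
    }
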
Skull blocks store their type in a tile entity and use their block
data as rotation. When a block is broken, the block data is used to
determine what item to drop. Simply changing this to use the skull
method for getting the drop data is not enough, because by then the
tile entity is already gone. Therefore we have to special case skulls
to get the correct data _and_ get that data before breaking the block.
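A standalone sketch of that ordering fix; all names here are
illustrative stubs, not the actual CraftBukkit classes:

    class SkullBreakSketch {
        interface SkullTile { int getSkullType(); }

        interface WorldLike {
            SkullTile getSkullTile(int x, int y, int z); // null once removed
            void removeBlock(int x, int y, int z);       // discards the tile entity
            void dropSkullItem(int x, int y, int z, int skullType);
        }

        static void breakSkull(WorldLike world, int x, int y, int z) {
            SkullTile tile = world.getSkullTile(x, y, z);
            // Capture the drop data BEFORE removing the block; afterwards
            // the tile entity, and with it the skull type, is gone.
            int skullType = (tile != null) ? tile.getSkullType() : 0;
            world.removeBlock(x, y, z);
            world.dropSkullItem(x, y, z, skullType);
        }
    }
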
This change was done to remove the internal sound names from the API.
Along with moving the internal names into CraftBukkit, a unit test was
added so that any new sounds added to the API are checked for a
non-null mapping.
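Such a test might look like the following sketch (the
CraftSound.getSound lookup is assumed here as the API-to-internal
mapping; the real test may differ):

    import static org.junit.Assert.assertNotNull;

    import org.bukkit.Sound;
    import org.bukkit.craftbukkit.CraftSound;
    import org.junit.Test;

    public class SoundTest {
        @Test
        public void testSoundMappings() {
            // Every API sound constant must resolve to an internal name.
            for (Sound sound : Sound.values()) {
                assertNotNull("No mapping for " + sound, CraftSound.getSound(sound));
            }
        }
    }
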
After further testing it appears that, while the original LongHashtable
has issues with object creation churn and is severely slower than even
java.util.HashMap in general-case benchmarks, it is in fact very
efficient for our use case.
With this in mind I wrote a replacement LongObjectHashMap modeled after
LongHashtable. Unlike the original implementation, this one does not
use Entry objects for storage and so does not have the same object
creation churn. It also uses a 2D array instead of a 3D one and does
not use a cache, as benchmarking shows this is more efficient. The
"bucket size" was chosen based on benchmarking the performance of the
HashMap with contents that would be plausible for a 200+ player server.
This means it uses a little extra memory for smaller servers, but
almost always less than the normal java.util.HashMap.
To make up for the original LongHashtable being a poor choice for
generic datasets, I added a mixer to the new implementation based on
code from MurmurHash. While this has no noticeable effect, positive or
negative, with our normal use of chunk coordinates, it makes the
HashMap perform just as well with nearly any kind of dataset.
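A minimal sketch of the layout and mixer described above (constants
and details are illustrative; the real LongObjectHashMap also handles
removal, iteration, and so on):

    import java.util.Arrays;

    public class LongObjectHashMapSketch<V> {
        private static final int BUCKET_COUNT = 4096; // power of two, tuned by benchmarks

        // Two 2D arrays: one bucket per hash slot, each a plain long[]
        // and Object[] pair grown on demand. No Entry objects are ever
        // allocated, avoiding the object creation churn described above.
        private final long[][] keys = new long[BUCKET_COUNT][];
        private final Object[][] values = new Object[BUCKET_COUNT][];

        // Finalization mix modeled on MurmurHash: spreads entropy across
        // all bits so even clustered keys (like packed chunk coordinates)
        // distribute evenly over the buckets.
        private static int index(long key) {
            int h = (int) (key ^ (key >>> 32));
            h ^= h >>> 16; h *= 0x85ebca6b;
            h ^= h >>> 13; h *= 0xc2b2ae35;
            h ^= h >>> 16;
            return h & (BUCKET_COUNT - 1);
        }

        public V get(long key) {
            int idx = index(key);
            long[] bucket = keys[idx];
            if (bucket == null) return null;
            for (int i = 0; i < bucket.length; i++) {
                if (bucket[i] == key) {
                    @SuppressWarnings("unchecked")
                    V value = (V) values[idx][i];
                    return value;
                }
            }
            return null;
        }

        public void put(long key, V value) {
            int idx = index(key);
            long[] bucket = keys[idx];
            if (bucket == null) {
                keys[idx] = new long[] { key };
                values[idx] = new Object[] { value };
                return;
            }
            for (int i = 0; i < bucket.length; i++) {
                if (bucket[i] == key) {
                    values[idx][i] = value; // overwrite existing key
                    return;
                }
            }
            // Append by growing both bucket arrays by one slot.
            keys[idx] = Arrays.copyOf(bucket, bucket.length + 1);
            values[idx] = Arrays.copyOf(values[idx], bucket.length + 1);
            keys[idx][bucket.length] = key;
            values[idx][bucket.length] = value;
        }

        public boolean containsKey(long key) {
            return get(key) != null; // assumes null values are not stored
        }
    }
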
After these changes, ChunkProviderServer.isChunkLoaded() goes from
using 20% of CPU time while sampling to not showing up at all after 45
minutes of sampling, its CPU usage too low to be noticed.
This ArrayList duplicates part of the functionality of the much more
efficient chunk map, so it can be removed; the map can be used in the
few places where the list was needed.
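A hypothetical fragment of the replacement (names are illustrative
stand-ins):

    import java.util.Collection;

    class ChunkListRemovalSketch {
        // Where the parallel ArrayList used to be iterated, the chunk
        // map's value view serves instead, with no second collection to
        // keep in sync.
        static void forEachLoadedChunk(Collection<Object> chunkMapValues) {
            for (Object chunk : chunkMapValues) {
                // ...same per-chunk work as before...
            }
        }
    }
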
- All StructureGrowEvent handling for these is in BlockSapling now, using a BlockChangeDelegate to collect the data (sketched below).
- Moved StructureGrowDelegate into a separate class.
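A hedged sketch of a collecting delegate in that style; the class and
method names here are modeled on the era's Bukkit API but are
assumptions, not the actual StructureGrowDelegate:

    import java.util.ArrayList;
    import java.util.List;

    class CollectingDelegateSketch {
        // One recorded change: coordinates plus type and data.
        static class Change {
            final int x, y, z, typeId, data;
            Change(int x, int y, int z, int typeId, int data) {
                this.x = x; this.y = y; this.z = z;
                this.typeId = typeId; this.data = data;
            }
        }

        private final List<Change> changes = new ArrayList<Change>();

        // The tree generator calls this for every block it wants to
        // place; the change is buffered rather than applied, so it can
        // be handed to StructureGrowEvent and written out only if the
        // event is not cancelled.
        public boolean setTypeIdAndData(int x, int y, int z, int typeId, int data) {
            changes.add(new Change(x, y, z, typeId, data));
            return true;
        }

        public List<Change> getChanges() {
            return changes;
        }
    }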