Over the last phase of development, I moved deeper into the structural side of the simulation by replacing brute-force spatial searching with an adaptive octree system. This shift was necessary because performance degraded sharply as more creatures were added, and because so many core systems rely on neighborhood checks, including perception, behavior, signals, and the future schooling logic. The octree now acts as the spatial backbone of the simulation, dynamically subdividing dense regions of the tank while leaving sparse regions coarse. This keeps the simulation architecture flexible while setting up the performance improvements needed for larger populations and more complex behaviors.
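Since the project itself is written in C# for Unity, here is a language-agnostic sketch in Python of the core idea: a node holds objects until it crosses a capacity threshold, then subdivides into eight children, so dense regions get fine-grained and sparse regions stay coarse. The class shape, the capacity of 4, and the max depth of 8 are illustrative assumptions, not the project's actual values.

```python
# Minimal adaptive octree sketch (Python stand-in for the project's C# code).
# Capacity and max depth values are illustrative assumptions.

class Octree:
    def __init__(self, center, half, capacity=4, depth=0, max_depth=8):
        self.center, self.half = center, half      # cubic node bounds
        self.capacity, self.depth, self.max_depth = capacity, depth, max_depth
        self.points = []                           # (id, position) pairs
        self.children = None                       # 8 sub-nodes once split

    def contains(self, p):
        return all(abs(p[i] - self.center[i]) <= self.half for i in range(3))

    def insert(self, obj_id, p):
        if not self.contains(p):
            return False
        if self.children is None:
            self.points.append((obj_id, p))
            # Dense region: subdivide so neighborhood queries stay cheap.
            if len(self.points) > self.capacity and self.depth < self.max_depth:
                self._split()
            return True
        return any(c.insert(obj_id, p) for c in self.children)

    def _split(self):
        h = self.half / 2
        self.children = [
            Octree((self.center[0] + dx * h, self.center[1] + dy * h,
                    self.center[2] + dz * h), h, self.capacity,
                   self.depth + 1, self.max_depth)
            for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)
        ]
        for obj_id, p in self.points:
            any(c.insert(obj_id, p) for c in self.children)
        self.points = []

    def query_radius(self, p, r):
        """Collect object IDs within r of p, pruning nodes that can't overlap."""
        if any(abs(p[i] - self.center[i]) > self.half + r for i in range(3)):
            return []                              # node can't touch the sphere
        hits = [oid for oid, q in self.points
                if sum((q[i] - p[i]) ** 2 for i in range(3)) <= r * r]
        if self.children:
            for c in self.children:
                hits += c.query_radius(p, r)
        return hits
```

The payoff over brute force is in `query_radius`: instead of testing every creature against every other, a neighborhood check only descends into nodes whose bounds overlap the search sphere, which is what makes perception, signaling, and schooling checks scale with local density rather than total population.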


Debug Visualization of the Dynamically Sized Octree

    At the same time, I expanded the simulation’s debug tooling significantly. I added octree visualization in Unity, including node rendering, depth-based color layers, layer toggles, and selected-node inspection. I also added support for highlighting the world object IDs stored inside a selected node, which makes it much easier to understand how objects are being distributed spatially and how the simulation is indexing them. This has already made debugging much more transparent and has started turning the octree from a hidden system into something I can actively reason about and tune.
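The depth-colored layer rendering described above boils down to a tree walk that tags each visible node with a color derived from its depth. In Unity this would drive `Gizmos.DrawWireCube` calls; the Python sketch below only shows the traversal, the layer-toggle filter, and the depth-to-hue mapping, and the attribute names (`center`, `half`, `children`) are assumptions about the real octree's shape.

```python
import colorsys

def collect_debug_nodes(node, enabled_depths, depth=0, out=None):
    """Walk the octree and emit one draw entry per visible node.

    Returns (center, half, depth, rgb) tuples; in Unity each tuple would
    become a wire-cube Gizmo. 'enabled_depths' is the layer toggle: None
    shows everything, a set shows only those depth layers.
    """
    if out is None:
        out = []
    if enabled_depths is None or depth in enabled_depths:
        # One hue band per depth layer so nesting reads at a glance.
        hue = (depth * 0.13) % 1.0
        rgb = colorsys.hsv_to_rgb(hue, 0.8, 1.0)
        out.append((node.center, node.half, depth, rgb))
    if node.children:
        for child in node.children:
            collect_debug_nodes(child, enabled_depths, depth + 1, out)
    return out
```

Keeping the traversal separate from the drawing is also what makes selected-node inspection cheap: highlighting the object IDs in one node is just the same walk with a filter on a single node ID instead of a depth set.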

Debug visualization showing the ability to highlight (in blue) the objects in a specific octree node by node ID

    Another major improvement in this phase was the disturbance system. Disturbances are still treated as real objects in the simulation world, but they are beginning to behave and appear more dynamically. Instead of remaining static placeholders, they now update over time as expanding and fading pulses, which better represents how an environmental event propagates through the space. This is important both for simulation clarity and for exhibition design, since disturbance visuals will eventually become part of the secondary system-display views surrounding the main tank.
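An expanding, fading pulse can be captured in a handful of fields: an origin, a lifetime, and an age that advances each tick, with radius and intensity derived from the age. The sketch below is a Python stand-in for the Unity component; the field names and the linear expansion/fade curves are assumptions, and the real system may use different shaping.

```python
from dataclasses import dataclass

@dataclass
class DisturbancePulse:
    """Sketch of a disturbance as an expanding, fading pulse.

    Field names and the linear curves are illustrative assumptions.
    """
    origin: tuple
    max_radius: float = 5.0   # how far the event propagates
    lifetime: float = 2.0     # seconds until the pulse dies
    age: float = 0.0

    def tick(self, dt):
        self.age += dt

    @property
    def alive(self):
        return self.age < self.lifetime

    @property
    def radius(self):
        # Front expands linearly toward max_radius over the lifetime.
        t = min(self.age / self.lifetime, 1.0)
        return self.max_radius * t

    @property
    def intensity(self):
        # Fades from 1.0 at birth to 0.0 at death.
        return max(0.0, 1.0 - self.age / self.lifetime)
```

Deriving radius and intensity from age (rather than mutating them directly) keeps the pulse deterministic and easy to render in the secondary display views: any view can sample the same pulse at any moment and get a consistent picture.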

View of the first pass of the Species Profile Scriptable Object

    I also completed the first pass of species profile integration using ScriptableObjects. Creature tuning values such as movement, signal generation, and behavior thresholds are now editor-authored rather than hardcoded. This makes the system much easier to iterate on and opens the door for multi-species spawning, different ecological roles, and later biological growth systems. Species tags are also being established now so that future diet and predation logic can be built on a flexible classification system rather than hardcoded species relationships.
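In Unity the species profile lives in a ScriptableObject asset so designers can tune it in the editor; the Python dataclass below is only a stand-in showing the shape of the data and how tag-based classification replaces hardcoded species relationships. Every field name here is an assumption about the eventual asset, not its actual schema.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SpeciesProfile:
    """Python stand-in for the Unity ScriptableObject species profile.

    Field names are illustrative; the real asset may differ.
    """
    species_name: str
    max_speed: float
    signal_interval: float        # seconds between emitted signals
    investigate_threshold: float  # signal strength that triggers investigate
    flee_threshold: float         # signal strength that triggers flee
    tags: frozenset = field(default_factory=frozenset)

    def shares_tag(self, other):
        # Tag overlap lets future diet/predation rules stay data-driven
        # instead of hardcoding species-to-species relationships.
        return bool(self.tags & other.tags)

# Hypothetical authored profile, the kind of values an editor asset would hold.
grunt = SpeciesProfile("grunt", max_speed=2.0, signal_interval=0.5,
                       investigate_threshold=0.3, flee_threshold=0.8,
                       tags=frozenset({"schooling", "forager"}))
```

Because creatures read these values instead of constants, multi-species spawning is just instantiating against different profiles, and later growth systems can scale a profile's values rather than rewrite behavior code.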

View of the simulation and dynamic signal visualizations.

    At this stage, the simulation is functionally reactive, but the behavior still needs refinement. Investigate behavior is currently too broad and causes some fish, especially the grunts, to overreact to signals in ways that interfere with believable schooling. The next phase of development will focus on improving behavioral interpretation so that disturbances can be classified differently depending on emitter identity, species relationship, and context. That will allow the same raw disturbance object to be treated as ordinary schooling motion, a novel event worth investigating, or a threat worth fleeing.
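The planned interpretation step described above amounts to routing one raw disturbance through a small decision function. The sketch below shows one possible shape for that logic in Python; the class names, the `predator_tags` field, and the priority ordering (identity first, predation tags second, raw strength last) are all assumptions about a system that has not been built yet.

```python
from dataclasses import dataclass, field

@dataclass
class Perceiver:
    """Minimal stand-in for a creature's perception profile (names assumed)."""
    species_name: str
    predator_tags: set = field(default_factory=set)
    investigate_threshold: float = 0.3
    flee_threshold: float = 0.8

@dataclass
class Emitter:
    species_name: str
    tags: set = field(default_factory=set)

def classify_disturbance(observer: Perceiver, emitter: Emitter,
                         strength: float) -> str:
    """Interpret one raw disturbance by emitter identity, species
    relationship, and signal strength. Returns 'school', 'flee',
    'investigate', or 'ignore'."""
    # Same species moving nearby reads as ordinary schooling motion.
    if emitter.species_name == observer.species_name:
        return "school"
    # A known predator tag makes the same pulse a threat at any strength.
    if emitter.tags & observer.predator_tags:
        return "flee"
    # Otherwise raw strength decides between alarm, curiosity, and apathy.
    if strength >= observer.flee_threshold:
        return "flee"
    if strength >= observer.investigate_threshold:
        return "investigate"
    return "ignore"
```

Separating classification from response would also directly address the over-broad investigate problem with the grunts: raising their `investigate_threshold` or tightening the context rules changes interpretation without touching the movement code.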
    The upcoming priorities are now much clearer. The octree is in place, the visual debugging tools are growing, and species profiles are working. From here, the next major goals are refining investigate behavior, implementing flee, adding dynamic schooling rather than always-on schooling, and introducing age, size, and growth systems that affect movement, signaling, and ecology. This phase has been less about surface polish and more about building the structural systems that will make the final simulation feel alive rather than scripted.
Exploring Display Views, Multi-Screen Output, and 3D Presentation
    Alongside the simulation architecture itself, I have also been exploring how the project can be displayed spatially in an exhibition setting. Since the capstone is built around the idea of a real aquarium tank functioning as the boundary of a digital ecosystem, presentation is not just a technical detail but part of the concept. Because of that, I have been testing how Unity can output to multiple monitors or displays at once, how different cameras could be assigned to different views, and how multiple system perspectives might be shown simultaneously rather than collapsed into a single screen.
    This exploration has included thinking through how the main simulation view, alternate camera views, and secondary debug or sensory views might each occupy their own display space. Rather than treating those as simple interface overlays, I am considering them as part of the overall installation language of the piece. That means looking at how a primary “contained ecosystem” view could coexist with surrounding displays that show internal logic, system states, perception layers, or alternate visual perspectives. In other words, I am not only building the simulation itself, but also investigating how to stage and distribute that simulation in a way that feels intentional in physical space.
    I have also been exploring different ways the simulation could be projected or shown within a three-dimensional environment. This includes thinking through how projection, screens, and multi-view setups might relate to the physical tank structure and the surrounding installation. At this stage, I am still evaluating what is most feasible and effective, but this line of exploration is important because the final piece is meant to exist as both a simulation and an exhibition object. The display strategy will influence not only readability and technical setup, but also how viewers understand the relationship between containment, observation, and hidden system activity.
