Astral Plague - Class Project

Project Overview

Astral Plague: Zero is some of my best and most comprehensive work. It displays design and technical prowess alongside team management and deadline juggling. This project was made as part of my Digital Media and Interaction class in Spring 2024 at Kennesaw State University. Our team of five consisted of a team lead (myself), a character animator, an enemy animator, a background artist, and a UI programmer. I was responsible for directing all elements of our game and facilitating communication between teammates.

  • Authored a bespoke ability system in Unity 2D using C# and a custom gameplay state system, allowing for improved enemy and gameplay design.

  • Directed a team of 5 developers, animators, and artists by outlining core game features, narrative, and technical requirements.

  • Championed the design of integral gameplay mechanics and gameplay style by implementing them in-engine with C# and custom visual nodes.

  • Wrote extensive narrative and story material by planning narrative beats, major locations, and characters.

Design Work

3D → 2D Combat

Coming Soon!

Game Feel on a budget

Coming Soon!

Programming Work

Getting Technical: Custom Ability System Overview

During this project, I built a bespoke ability system inspired by Unreal Engine’s Gameplay Ability System (GAS). My two core goals were:

  • Improve my technical design skills by implementing a system from scratch

  • Gain design flexibility beyond hardcoded logic or rigid data bindings

Execution Flow

The ability system operates on a simple yet extensible linear flow, sketched in code after this list:

  1. Input Triggers Ability
    Each input (e.g. LMB) maps to an ability defined in a combat class. For AI, inputs are simulated via Behavior Tree tasks.

  2. Validation Checks
    Before execution, the system validates actor state using enums—checking for conditions like “IsAlive” or “IsStunned.” If invalid, the ability is aborted or deferred (e.g. combo chains, interrupted casts).

  3. Ability Pre-Execution (Scriptable Object)
    Once validated, a function is called on the ScriptableObject (our DefaultAbility), handling any preliminary logic—e.g. hitbox setup or targeting.

  4. Combat Component Integration
    The ScriptableObject returns relevant data to the Combat Component, which handles game-specific execution like applying damage, launching projectiles, or calling utility functions.

  5. Finalization & Game Feel
    After resolving logic, we finalize execution with cosmetic feedback—playing sound effects, camera shake, or screen effects.
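
Here's a minimal C# sketch of that flow, using simplified stand-ins for the systems described above. The type names and fields (EntityState, DefaultAbility, CombatComponent) mirror this writeup but are illustrative rather than our actual project code:

```csharp
using UnityEngine;

// Simplified stand-ins for the real systems; names and fields are illustrative.
[System.Flags]
public enum EntityState
{
    None = 0,
    IsAlive = 1 << 0,
    IsStunned = 1 << 1,
    IsAttacking = 1 << 2
}

public class DefaultAbility : ScriptableObject
{
    public string abilityName;   // the matching animation is looked up by this same name
    public Vector2 hitboxSize;   // collider shape/position
    public AudioClip impactSfx;  // gameplay cue

    // 3. Preliminary logic: hitbox setup, targeting
    public virtual void PreExecute(CombatComponent owner) { }
}

public class CombatComponent : MonoBehaviour
{
    [SerializeField] private DefaultAbility primaryAbility; // mapped to LMB
    [SerializeField] private Animator animator;

    public EntityState States { get; set; } = EntityState.IsAlive;

    // 1. Input (or an AI behavior tree task) triggers the ability
    public void OnPrimaryInput() => TryExecute(primaryAbility);

    private void TryExecute(DefaultAbility ability)
    {
        // 2. Validation: abort if the actor can't act right now
        if (!States.HasFlag(EntityState.IsAlive) || States.HasFlag(EntityState.IsStunned))
            return;

        ability.PreExecute(this);            // 3. Pre-execution on the ScriptableObject
        animator.Play(ability.abilityName);  // animation synced by ability name

        // 4. Game-specific execution (damage, projectiles) would run here,
        // 5. followed by cosmetic feedback (SFX, camera shake, screen effects).
    }
}
```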

Ability Preview inside Unity

Ability Definition

All abilities inherit from a DefaultAbility ScriptableObject. It defines:

  • Collider shape and position

  • Damage type

  • Gameplay cues (e.g. VFX, SFX)

This approach keeps the system lightweight and extensible. Most abilities use this base class, but designers can subclass it to add custom logic. For example:

Auriel_WaveAttack inherits from DefaultAbility but adds a beam-style projectile at the end of the attack—demonstrating the extensibility of the system.
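
Building on the sketch above, a subclass along these lines could append that projectile. The real Auriel_WaveAttack implementation isn't reproduced here, so the body (and the beamProjectilePrefab field) is an assumption:

```csharp
using UnityEngine;

[CreateAssetMenu(menuName = "Abilities/Auriel Wave Attack")]
public class Auriel_WaveAttack : DefaultAbility
{
    [SerializeField] private GameObject beamProjectilePrefab; // hypothetical field

    public override void PreExecute(CombatComponent owner)
    {
        base.PreExecute(owner); // default hitbox/targeting setup

        // Custom logic: add a beam-style projectile to the attack
        Object.Instantiate(beamProjectilePrefab,
                           owner.transform.position,
                           owner.transform.rotation);
    }
}
```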

Animation is automatically handled by syncing animation names with ability names. This loosely couples animation and ability logic, enabling rapid iteration by simply renaming assets or abilities.

State Management System

One core reason I implemented a custom ability system was to support shared entity states—similar to Unreal’s Gameplay Tags. These states:

  • Gate ability execution and transitions

  • Drive behavior trees (e.g. IsAttacking → AttackNode)

  • Coordinate animations and FX timing

The system is centralized through an interface implemented across major components. States are stored in a root data asset and cascade downward, keeping the logic consistent and centralized. This made cross-system communication reliable: if multiple components disagreed, the root state was the final authority.

Think of it like a waterfall—state updates travel upstream to the root, then cascade downstream to children. This ensured cohesion across systems.
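
A rough sketch of that waterfall, reusing the EntityState flags from the earlier snippet; the interface and root-asset names are assumptions, not our actual API:

```csharp
using System.Collections.Generic;
using UnityEngine;

public interface IStateReceiver
{
    void OnStateChanged(EntityState states);
}

[CreateAssetMenu(menuName = "State/Entity State Root")]
public class EntityStateRoot : ScriptableObject
{
    private readonly List<IStateReceiver> receivers = new List<IStateReceiver>();

    public EntityState Current { get; private set; }

    public void Register(IStateReceiver receiver) => receivers.Add(receiver);

    // Updates travel upstream to the root...
    public void Add(EntityState state)    { Current |= state;  Cascade(); }
    public void Remove(EntityState state) { Current &= ~state; Cascade(); }

    // ...then cascade downstream, so the root is always the final authority.
    private void Cascade()
    {
        foreach (var receiver in receivers)
            receiver.OnStateChanged(Current);
    }
}
```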

Why it mattered

This system allowed for cleaner design iteration, fewer bugs, and rapid scalability. I could implement a new ability in under 30 minutes with correct FX, animation, and logic—all through a modular, editable ScriptableObject.

Behavior Trees and Navmesh

I don’t like Unity. That probably isn’t a surprising opinion in 2024 and beyond, especially for anyone working with AI or advanced navigation systems. For all of Unity’s popularity in the 2D space, a surprising number of its features, like NavMesh, just don’t support 2D out of the box. Even more frustrating is the lack of a native behavior tree system, something Unreal offers in spades with both BTs and StateTree. But this project wasn’t about griping—it was about solving problems under constraints, so that’s what I did.

(Unity 6 has since added a behavior tree system; it did not exist at the time.)

Behavior Trees

Combat was the heart of Astral Plague, which meant our enemies couldn’t just wander around like NPC filler. We needed real AI—aggressive, reactive, layered. From day one, I knew that writing behavior logic with basic scripts wasn’t going to scale. So I challenged myself: build a visual behavior tree system inside Unity.

I could've written it in raw C#; it would've been faster, and maybe even easier. However, I saw value in planning for the long term—scalability, clarity, and easier iteration as the complexity of enemy behavior increased. Still, with time being what it was, I eventually transitioned from my custom system to an open-source solution that was far more robust. I made this choice after learning what I had set out to learn: a stronger understanding of how BTs should be architected and how to work with Unity's innate architecture and visual development toolsets.

Even with the open source solution, the work wasn’t plug-and-play. Many of the nodes weren’t compatible with the 2D constraints of our game, especially regarding movement and attack patterns. I ended up modifying the internals to better support 2D NavMesh logic and built out custom nodes to handle stagger windows, combo interruptions, and proximity awareness. Nothing fancy, but it worked. That said, my implementation of AI states—attacking, dying, chasing—was the weakest part. State transitions were sometimes fragile, resulting in AI stalling or becoming unresponsive. This exposed the brittleness of my architecture, and it’s something I’d refactor completely given more time.
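
As an example of the kind of node involved, here's a hedged sketch of a 2D proximity check, assuming a generic tick-based node API (the open-source framework's actual base classes differ):

```csharp
using UnityEngine;

public enum NodeStatus { Success, Failure, Running }

public abstract class BTNode
{
    public abstract NodeStatus Tick();
}

// Leaf node: succeeds when the target is within range on the 2D plane.
public class IsTargetInRange : BTNode
{
    private readonly Transform self;
    private readonly Transform target;
    private readonly float range;

    public IsTargetInRange(Transform self, Transform target, float range)
    {
        this.self = self;
        this.target = target;
        this.range = range;
    }

    public override NodeStatus Tick()
    {
        // 2D distance check; the z-axis is ignored for sidescroller logic
        float distance = Vector2.Distance(self.position, target.position);
        return distance <= range ? NodeStatus.Success : NodeStatus.Failure;
    }
}
```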

NavMesh in 2D

Then there was the problem of navigation. Unity doesn’t support 2D NavMesh. There’s no checkbox or hidden API—it’s just not there. At first, I considered writing an A* system from scratch, but with the project clock ticking down, that wasn’t viable. Once again, I turned to open source.

What I found was functional but not built for sidescrollers—it was top-down only. So I hacked it. I created a custom navigation layer system that allowed for vertical separation—one layer for grounded enemies, another for flying ones. This involved layering walkable planes, rerouting raycasts, and modifying path logic to account for gravity and z-layer awareness. It wasn’t pretty. But it worked.
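
A rough sketch of the layering idea, with assumed layer names ("GroundNav", "AirNav"); the real change lived inside the pathfinder's internals, so this only shows the concept:

```csharp
using UnityEngine;

public enum NavLayer { Ground, Air }

public class NavAgent2D : MonoBehaviour
{
    [SerializeField] private NavLayer layer;

    // Grounded enemies path on one walkable layer, flyers on another,
    // giving vertical separation without two separate pathfinders.
    public bool IsWalkable(Vector2 point)
    {
        int mask = LayerMask.GetMask(layer == NavLayer.Ground ? "GroundNav" : "AirNav");
        return Physics2D.OverlapPoint(point, mask) != null;
    }
}
```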

And more importantly, it let us design flying and ground-based enemies with fundamentally different behavior logic—all without manually scripting their every move. The biggest takeaway here? You don’t always need an elegant system—just one that works within the constraints and still leaves room to iterate.

Mistakes

Enemy Variety

When we first committed to developing a Sekiro demake, we knew we wouldn’t have the bandwidth for a wide enemy roster. Our initial plan was minimal by design, just a boss encounter and a training dummy to test mechanics. This was a deliberate scoping decision, made with full awareness of our limited production time.

In terms of project management, it worked. We finished on time and delivered what we set out to build. But from a design standpoint, the result was a bit too lean. The player could breeze through the level without needing to truly engage with our core mechanics—particularly our central one: the parry system.

Outside of the boss, the only other real enemy we shipped was a floating eyeball that performed radial AoE attacks. It checked the “enemy” box, sure, but it failed to challenge the player meaningfully. Crucially, it didn’t require or even encourage parrying. Instead, it was something the player could simply wait out or ignore.

Things got worse when we created a ranged variant of this eyeball. Since its projectiles couldn’t be parried at all (only blocked), it bypassed the posture system entirely. That meant our two primary enemies, outside of the boss, almost completely ignored the mechanic that was supposed to define our combat.

So what went wrong?

Some of it was on me. I could’ve pushed harder for smarter design solutions. But it was also a resource issue, especially around animation. Our team had two very capable animators. One focused entirely on the player—covering movement, attacks, traversal, the works. The other was responsible for the boss and, time permitting, any additional enemies. That time never really materialized. Without bespoke animations, our enemy options were severely limited.

Looking back, we needed more than just animations—we needed better design from the ground up. Even within those constraints, there were things I could have done:

  • Rethink the ranged enemy: A simple deflection mechanic—letting players parry a projectile and send it back—would have instantly increased mechanical depth and reinforced the parry loop.

  • Reuse existing resources: We could have repurposed the player model with recolors or minor tweaks to create humanoid enemies that leveraged existing animations. Even basic AI behavior with familiar attack telegraphs would have been a step up.

  • Marketplace animations: Risky, but worth exploring. With careful curation, we may have been able to find fitting animations to bootstrap another enemy type.

Ultimately, our minimal enemy count and limited mechanical synergy undercut the combat’s potential. The boss fight held up, but everything in between felt empty. If I were to revisit the project, I’d push harder on design-side adaptations to make the most of the assets we did have—parryable variants, reflection mechanics, posture interaction, and model reuse. The lesson? Even with limited scope, the right constraints can fuel creativity—if you don’t overlook the design side of the equation.

Element Design

Part of the theme of this class was that we would pick a game and add a twist to it, and then our professor would introduce a second twist mid-semester. It's a strong concept on paper, but given our pressing time crunch, it made an already difficult production timeline that much tighter.

Our mid-semester twist was “Mega Man.” That’s all we were given—just the name. At first, it didn’t make a lot of sense. But eventually, we decided to take it literally: make the game more like Mega Man. Fortunately, we had a Mega Man modder and expert on our team, which gave us a solid reference point. This led to the addition of elemental forms—Fire, Ice, and Lightning—that the player could swap between on the fly. Each element offered a unique capability: Fire applied burn damage over time, Ice dealt posture damage and broke guards, and Lightning added a ranged burst at the end of a melee combo.

The implementation itself was relatively straightforward. Most of the logic tied into the Ability System I designed, and each element was simply a reconfigured and recolored variant of the base weapon (e.g., a blue sword for Ice). So in terms of pipelines and backend, things held up.
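
To illustrate, an elemental variant could hang off the ability system as a data-only subclass, reusing the DefaultAbility sketch from earlier; the fields here are assumptions:

```csharp
using UnityEngine;

public enum Element { None, Fire, Ice, Lightning }

// Each element is just a reconfigured, recolored variant of the base weapon's ability.
[CreateAssetMenu(menuName = "Abilities/Elemental Ability")]
public class ElementalAbility : DefaultAbility
{
    public Element element;
    public float burnDamagePerSecond;    // Fire: damage over time
    public float postureDamage;          // Ice: guard-breaking posture damage
    public GameObject comboBurstPrefab;  // Lightning: ranged burst at combo end
}
```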

Where we started to run into problems was in gameplay impact and tuning. All three elements worked functionally, but Fire clearly overperformed. It offered the highest raw DPS and in most cases made the other elements irrelevant. I was able to bring it closer in line through balancing, but the issue remained—it was simply too efficient and required little decision-making to use optimally.

Another problem was activation. The original plan was for players to unlock these elements temporarily by successfully parrying a certain number of attacks (e.g., parry three times to activate Ice for 30 seconds). That system was never implemented due to time constraints, which meant elements could be toggled at will. On top of that, we had a fourth “element” by default: the vanilla weapon with no enhancements. Once the new forms were introduced, this default attack became obsolete and was never rebalanced to justify its continued use.
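
For reference, the planned rule was simple; here's a hedged sketch of what it might have looked like (it never shipped, so everything below is an assumption):

```csharp
using System.Collections;
using UnityEngine;

public class ElementUnlockTracker : MonoBehaviour
{
    [SerializeField] private int parriesRequired = 3;
    [SerializeField] private float unlockDuration = 30f;

    private int parryCount;
    public Element ActiveElement { get; private set; } = Element.None;

    // Called by the combat system on each successful parry.
    public void OnSuccessfulParry(Element pendingElement)
    {
        if (++parryCount < parriesRequired) return;

        parryCount = 0;
        StopAllCoroutines();
        StartCoroutine(ActivateFor(pendingElement, unlockDuration));
    }

    private IEnumerator ActivateFor(Element element, float duration)
    {
        ActiveElement = element;                   // element unlocked...
        yield return new WaitForSeconds(duration);
        ActiveElement = Element.None;              // ...then expires
    }
}
```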

The simple solution would’ve been to finish implementing the original parry-based unlock system, which would’ve made the mechanic more deliberate and tied it more closely to the core combat loop. But that alone wouldn’t have solved everything. Of the three, Ice was clearly the most interesting and well-integrated. It worked in tandem with the posture system, creating a natural synergy without adding mechanical bloat. Fire and Lightning, by comparison, felt like sidegrades—strong in isolated scenarios but lacking cohesion with the rest of the game.

Looking back, we should’ve committed to just one element and polished the hell out of it. Ice, in particular, had a lot of potential for environmental and systemic integration. Imagine needing to freeze a platform to reach a new area or slow down a fast enemy to make your escape. These interactions wouldn’t just be cool—they’d require the player to think about how and when to engage with the system. By narrowing scope, we would’ve created a more focused, cohesive experience, rather than one where players cycle through three options without much strategy or feedback.

Thanks for reading, that's all for now!