Categories
Advanced and Experimental 3D Computer Animation Techniques Project 2

Week 10 Submission and reflection of the final work

Game demo video:

in-game recording cut
gameplay demo recording

Game Map:

Player spawn point:

The player starts the game in the main hall. This area gives the player a moment to get oriented before the mission begins, and some initial dialogue is triggered to introduce the mission:

Dungeon:

In the dungeon, players will encounter enemies like zombie skeletons and need to collect key items for the mission. This area was once used for serpent blood experiments. The atmosphere is dark, requiring players to use a flashlight to explore:

Corridor:

The Serpent’s Sealing Place

Research:

Dark Souls: Death Loops and Ambiguous Narrative

Dark Souls has had the most profound impact on my overall game design, particularly in how it tells a story that has already faded away through the environment. The game does not have a traditional linear narrative. Players must piece together the past glory and current decline of the world through item descriptions, architectural styles, and enemy behavior.

This kind of decentralized storytelling made me realize the charm of ambiguous narrative. It doesn’t need to directly tell players what exactly happened, but instead uses the environment to imply.

During the creative process, I didn’t start with scene sketches. Instead, I began with fragments of emotions and gradually pieced together the overall atmosphere and experience. These emotional fragments acted like puzzle pieces: first came the feelings and moods, which then slowly took shape as concrete images and spaces in my mind. By working this way, I hope players can more directly feel the emotions conveyed by the environment and experience the shifts in atmosphere, rather than relying solely on specific visual details.

In my game, the contrasting design between the sacred relics and the underground prison is meant to convey narrative clues through the space itself. Players won’t directly know the cause and effect through text, but will instead feel the traces of a completely collapsed order through the scene.

Why would there be a dark dungeon hidden beside a solemn relic? And in the dungeon, not only are books scattered everywhere and a statue of a goddess standing silently in the corner, but also skeletons quietly kneeling, seemingly still praying. These visual symbols do not point to a single answer, but are like scattered keywords, waiting for players to piece them together — perhaps they will interpret it as a betrayal, or a failed salvation, or the collapse that comes when faith becomes extreme.

In such a structure, the player is no longer a passive recipient of narrative, but an active constructor of meaning. I care more about the questions, emotions, and imagination they generate during exploration, rather than whether they “correctly understand” a predetermined ending.

Dark Meadow: The Pact

The visual and spatial design of this game shows the remnants of the old world: dilapidated hospitals, flickering lights, a mysterious white witch, and monsters wandering the corridors.

As the truth gradually unfolds, we realize that these monsters are not malevolent intruders, but “waste products” left behind from soul bargains or failed experiments—they no longer belong to this world, yet they are unable to leave it.

This concept had a direct impact on my game. In my demo, the enemies in the dungeon are not traditional monsters, but by-products created after the descendants failed to control Serpent blood.

Even though the experiments were abandoned long ago and the researchers have vanished, these lifeforms remain trapped deep within the dungeon, continuing to exist as if forgotten by history—terrifying yet sorrowful.

At the same time, the protagonist’s sense of unfamiliarity upon awakening in Dark Meadow inspired the direction for my player character design. I want players to enter the game in the same state as that awakening: knowing nothing about the ruins, and having to piece together their own understanding through exploration.

This sense of the unknown and the ambiguous makes me believe that rather than a clear narrative, a space that can be read is the most moving way to tell a story.

Other research and the production plan can be viewed in the Week 1 entry:

Concept reflection

The Fall of the Descendants: Power’s Self-Corrosion

The descendants are the former guardians of the ruins, who have since left and abandoned this place. They once symbolized the inheritance of ancestral faith and responsibility.

However, over the course of long years, they were gradually eroded by power and desire. Their role as protectors turned into a monopolized privilege. They could easily obtain Serpent’s blood, which eventually became a tool for controlling others.

When traditional belief loses its genuine spiritual core, leaving behind only a structure of power that can be inherited and exploited—can it still represent justice? In such a system, bloodline is no longer a continuation of faith, but a token of identity, a legitimate means of domination.

I think this concept reflects certain realities. We may have all seen systems that appear to still function, maintaining sanctity and authority in form, but in truth have long strayed from their original meaning. The downfall of the guardians did not come from external enemies, but from internal hollowing and the erosion of values.

The Blood of the Serpent: A Forbidden Power

I chose the Serpent as a central symbol because it carries complex meanings across many cultures. It is often associated with both enlightenment and downfall. The serpent can grant wisdom, but it can also bring catastrophe. In the Bible, it tempts humanity to eat the fruit of knowledge. In Eastern traditions, it is frequently linked to dragons, immortality, and the cycle of rebirth.

In my game’s narrative, the blood of the Serpent represents a forbidden power, a form of knowledge that exceeds the natural boundaries of what humans are meant to possess. The descendants tried to harness it to transcend the limits of their physical bodies or prolong their lives. This reflects their belief that they could take the place of gods and rewrite the natural order.

This concept also reflects real-world anxieties around certain technologies such as gene editing and cloning. As we continue to push the limits of ethics and science, we often lose sight of the consequences. When the power to create is amplified by technology without a corresponding sense of humility or responsibility, the result may be not progress but chaos and madness.

Reflection:

This project gave me a deep insight into the many challenges faced by independent game developers. Although I am not a full-fledged indie developer, the whole process allowed me to learn and practice various stages of game development in a comprehensive way. It was also my first time using Blueprints in UE5 to program interactions for characters, enemies, and items, as well as my first experience creating UI and Widgets.

During development, I had to balance visual quality, gameplay design, in-game prompts, difficulty settings, and the overall atmosphere, which is quite challenging for one person. For example, UI design is a field worthy of deep study, and the same goes for level design. However, with limited time and energy, it’s difficult to cover everything thoroughly.

In addition, due to unexpected problems and technical constraints, I had to give up some design ideas and visual effects. To maintain acceptable game performance, I changed the shading model from SM6 to SM5 and reduced the lightmap resolution for some larger actors. Without these optimizations, the game would have suffered from serious lag. Although these changes improved performance, they also reduced lighting quality and overall visual appeal.

During the packaging stage, I encountered code errors and navigation issues. While enemy navigation worked fine in the UE5 editor, in the packaged build enemies sometimes got stuck on walls and failed to chase the player properly. This problem didn’t occur in the test maps.

This experience improved my practical skills. When solving problems, I learned to prioritize game performance first, and then find a balance with visuals, gameplay, and other aspects.

Used assets and resources in this project:

1. Dark fantasy greatsword pack


Dark fantasy greatsword pack | Unreal Engine 5 + UNITY – Naked Singularity Studio: https://www.youtube.com/watch?v=_xR6SHgfhPU

2. Morbid Pack

Morbid Pack Volume 1 – https://www.fab.com/listings/8b88ac2e-9b50-4381-91d1-46683a89178b

3. Serpent Model I Created in Last Semester — Collaboration Units
(Full process: modeling, UVs, texturing, rigging)

4. Gargoyle Model I Created in Last Semester — Collaboration Units
(Full process: modeling, UVs, texturing, rigging)

5. Texture repaint (arm, zombies, skeleton, gun)

In my discussion with Serra, she suggested that I try modifying the model textures myself. Assets like the gun and the arm are very common and used by many people, so I redrew the textures myself.

Skeleton texture – white and black:

Gun:

Arm:

6. Game Tutorial:

The course is taught by the creator of the game “Deathly Stillness”, which is available on Steam. Through the course, I learned how to create character and enemy blueprints. The character and zombie asset packs (including skeletal meshes, animation assets, and audio assets) were provided as part of the course and used for practice.

https://www.bilibili.com/cheese/play/ss32685?csource=Share_copylink&from_spmid=333.873.selfDef.purchased_lecture&plat_id=362&share_from=lesson&share_medium=iphone&share_plat=ios&share_session_id=B94F4E1E-BB07-40B1-83E1-A53774E26305&share_source=COPY&share_tag=s_i&spmid=united.player-video-detail.0.0&timestamp=1747402290&unique_k=528infI

Categories
Advanced and Experimental 3D Computer Animation Techniques Project 2

Week 9 Design UI and interaction events, and implement box-triggered dialogue events

  • UI Page Design
  • Interaction Design
  • Character Dialogue Script & Box Collision Trigger

This week, I focused on polishing the project by adding some UI interfaces, creating a door interaction blueprint, and implementing a few dialogue triggers for the main character.

UI Page Design

0:00:00 – 0:51:00

Game Main Interface:


In the game’s main interface, I used a camera that continuously films the serpent and added some camera shake effects to enhance the atmosphere:

This method of creating the main menu allows for a real-time dynamic background. Since the flowers and fog in the scene are subtly animated, it provides a more immersive effect compared to using a static pre-rendered image as the background:

Widget blueprint:

Level Blueprint:

When the level begins, I create the main menu widget and use the Set View Target with Blend node to switch the camera view to a dedicated one positioned for the menu background. I also set the input mode to UI-only and lock player input, so the player can’t move or shoot while on the start menu screen. Finally, I use a Play Sound 2D node to start the game’s background music:

Since I set the Z key as the input to start the game, once the player presses it, the player controller is obtained and keyboard input is disabled.

Then, the start menu widget is removed. I also added a camera fade node to avoid clipping issues, since the camera would otherwise pass through walls during the Set View Target with Blend transition. The fade lasts for 2 seconds to smoothly mask the transition.

After that, I use set view target with blend again to switch the camera to the gameplay view and set the input mode to game only:

In-Game Interface:

The in-game interface consists of the health bar, ammo count, and the menu panel:

In game effect:

The health and ammo displays are updated in real time by referencing the player’s current stats. When either value falls below a certain threshold, the corresponding UI element changes color to alert the player.

health bar blueprint
sum ammo blueprint
current ammo blueprint
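The threshold-driven color change can be reduced to a small piece of logic. This is a hypothetical C++ sketch rather than the actual widget binding; the 25% threshold, the `StatColor` struct, and the color values are all assumptions for illustration:

```cpp
// Sketch of the low-stat alert logic driving the health/ammo UI color.
// The struct, threshold, and colors are illustrative, not from the project.
struct StatColor { double r, g, b; };

// Returns an alert color when the stat falls below the given fraction of
// its maximum, mirroring how the widget binding switches the bar color.
inline StatColor PickStatColor(double current, double max, double lowFraction) {
    if (max > 0.0 && current / max < lowFraction)
        return {1.0, 0.1, 0.1}; // alert red
    return {1.0, 1.0, 1.0};     // normal white
}
```

In the actual game this check runs inside the widget's binding, re-evaluated each frame against the player's current stats.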

In-game menu

I created an in-game menu where the player can view the control instructions, restart the game, continue, or quit the game.

In front of the starting hall, I added a box collision to trigger a key prompt showing “Press M to open the menu.”

The logic is similar to creating any widget: when the player enters the collision area, the widget is created and added to the viewport; when the player leaves, it is removed from the parent.

Mission result widget:

When the player opens the final door, the mission is considered successful, meaning they have successfully escaped, and the mission complete screen is displayed. If the player’s health reaches zero, the mission failed screen appears instead. In both cases, I use the same UI layout and simply use a select text node to determine which message to display.

In the player’s character blueprint, within the death system, I continue to use a create widget node after the player dies. Then I call a custom event from the mission result blueprint that displays the game result. In this case, I set the “if win” boolean to false, since the player’s health reaching zero means the mission has failed. As a result, the select text node displays the mission failed message accordingly.
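The select text node's behaviour amounts to a single boolean branch over one shared layout. A minimal sketch (the exact message strings in the widget are assumptions):

```cpp
#include <string>

// Sketch of the Select (text) logic in the mission-result widget:
// one UI layout, two messages, chosen by a single "if win" boolean.
// Message strings are assumed, not copied from the project.
inline std::string MissionResultText(bool bWin) {
    return bWin ? "Mission Complete" : "Mission Failed";
}
```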

When the mission is successful, I added a box collision in the final door room. When the player enters it, the widget is displayed.

Character Dialogue Script & Box Collision Trigger

0:51:00 – 1:09:00

For the character’s voice lines, I used ChatGPT to help write the dialogue and Eleven Labs (https://elevenlabs.io/app/speech-synthesis/text-to-speech) to generate the voice. The protagonist is a seasoned mercenary who often complains about his boss during missions.


When designing the dialogue, I felt it was important to give the player some basic context right at the beginning, like why the protagonist is here. So at the start of the game, there’s a short conversation between the commander and the protagonist, explaining that this is a mission to investigate abnormal lifeform activity and retrieve experimental samples.

box trigger I used
its blueprint

Since I initially didn’t add trigger boxes to create widgets (just some triggers to play 2D sounds), most of the mission hints, like finding the key downstairs or collecting statue fragments, are delivered through the protagonist’s voice lines.

He complains about the job while also subtly guiding the player on what to do next. This dual-purpose dialogue helps keep the pacing natural while reinforcing the character’s personality.

Interaction blueprint:

1:09:00 – end

In this part, I mainly worked on the blueprint logic for two doors: the small door downstairs and the final main door.

The small door requires a key to open. I placed two keys in the level to ensure that players can still progress even if they miss one. The final door requires the player to collect three statue fragments to unlock it. In total, there are six statue fragments scattered throughout the level.

The blueprint is mainly divided into three parts. The first part uses a timeline, lerp, and set relative rotation nodes to animate the door opening.

The second part handles the logic for checking how many keys the player currently has versus how many are required.

The third part adds a box collision that triggers different widgets to prompt the player. For example, if the player lacks enough keys, it displays a message indicating that a key is needed. When the key requirement is met, it shows a prompt to press the F key to open the door.
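The three parts above can be summarized as plain logic. This is a hypothetical sketch, not the Blueprint itself: the yaw angles, function names, and prompt strings are assumptions, and in the project the interpolation is driven by a Timeline node:

```cpp
#include <string>

// Part 1: the timeline output (alpha in 0..1) lerped into a door yaw,
// the same math the Lerp + Set Relative Rotation nodes perform.
inline float DoorYaw(float alpha, float closedYaw = 0.0f, float openYaw = 90.0f) {
    return closedYaw + (openYaw - closedYaw) * alpha;
}

// Parts 2 + 3: compare keys held against keys required, then pick the
// prompt widget to show. Strings are illustrative placeholders.
inline std::string DoorPrompt(int keysHeld, int keysRequired) {
    return (keysHeld >= keysRequired)
        ? "Press F to open the door"
        : "A key is needed";
}
```

The same structure covers both doors; only the "required" count changes (one key for the small door, three statue fragments for the final door).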

door blueprint
door blueprint
door blueprint
key blueprint
Categories
Advanced and Experimental 3D Computer Animation Techniques Project 2

Week 8 Continue writing gargoyle blueprints and fixing bugs

This week, I focused on fixing issues with the gargoyle’s physics simulation after death, and continued working on the attack damage detection system for the gargoyle enemy.

Physical collision problem:

Another issue that took me quite a bit of time to deal with was the gargoyle’s physical collision. After enabling simulate physics, I noticed that the gargoyle’s body would twist and rotate unnaturally, and the collision response was inaccurate. This is because UE5’s default physics collision is automatically generated based on the skeleton, which often doesn’t work well, especially for complex models like this one. The calculated collision volumes frequently don’t match the visual appearance of the mesh.

To fix this, I had to manually adjust the collision shapes for each bone, making sure their forms matched the model more closely. Although this process was time-consuming, the results were much better. The corpse no longer clips into the ground or flails around unnaturally:

0:00 – 4:17 : edit physical collision

There was also another small but important detail: bullet impact force. Since bullets in my game travel fast and carry a lot of impulse, if the player kept shooting after the gargoyle died, the corpse would sometimes get launched or spin wildly into the air. It looked like the body just disappeared under fire:

To prevent that, I set the Mass value of the gargoyle’s physical asset very high. This way, once it enters the death state, it stays grounded and barely moves, even under heavy gunfire. It simply collapses and remains in place, without being knocked across the room by the player’s attacks.
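The reason this works comes down to impulse physics: a bullet delivers a roughly fixed impulse, and the resulting velocity change is that impulse divided by the body's mass, so a very large mass shrinks the kick toward zero. A tiny sketch of the relationship (the numbers in the test are illustrative, not the project's actual settings):

```cpp
// dv = J / m: the velocity change a rigid body receives from an impulse J.
// Raising the Mass value on the physics asset makes dv negligible, which is
// why the corpse stays grounded under sustained gunfire.
inline double VelocityChange(double impulse, double mass) {
    return impulse / mass;
}
```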

Attack detection:

4:17 – end : write attack detect blueprint

For attack detection, I followed a similar approach to how I handled zombies. I created two Blueprint classes: one for Attack Start and another for Attack End.

These blueprints are used to get the socket locations of the bones at the beginning and end of an attack. Since the gargoyle sometimes attacks with its left hand and sometimes with its right, I couldn’t hardcode the detection logic in the character blueprint:

I added a sphere trace node, using the socket location as the center. This allows the detection area to be a spherical range around the attacking bone, and I can adjust the radius based on the actual attack reach. Once a hit is detected within that sphere, the damage is applied using the Apply Damage node.
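The core of that sphere check is just a distance test against the socket location. In the project this is UE5's sphere trace node followed by Apply Damage; the sketch below shows only the underlying idea, with a made-up `Vec3` struct standing in for `FVector`:

```cpp
#include <cmath>

// Hypothetical stand-in for FVector.
struct Vec3 { double x, y, z; };

inline double Dist(const Vec3& a, const Vec3& b) {
    const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// True when the target lies inside the attack sphere centered on the
// attacking bone's socket, i.e. when the Apply Damage node would fire.
// The radius is tuned per attack to match the actual reach.
inline bool AttackHits(const Vec3& socket, const Vec3& target, double radius) {
    return Dist(socket, target) <= radius;
}
```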

In the gargoyle’s animation asset, I added a second notify track since the first track is used for attack sound effects.

On this second track, I inserted the previously created Blueprint classes for Attack Start and Attack End at the appropriate moments in the animation. Because attack damage detection happens every tick while it’s active, keeping it running constantly would be too performance-heavy. By placing these notifies only during the wind-up and impact portions of the attack animation, I can limit the calculation to just the moments when it’s actually needed.

Lastly, in the animation notify, I copied the bone names from the elbow to the tip of the middle finger to ensure accurate tracking during the attack.

Categories
Advanced and Experimental 3D Computer Animation Techniques Project 2

Week 7 Animation Retargeting – prepare to write gargoyle blueprints

These two weeks, my main task was to complete the blueprint scripting for the gargoyle enemy. Before writing the blueprint, I first needed to finish the animation assets for the gargoyle. I created new folders to better manage them: attack animations, hit animations, death animations, idle, and walk animations.

Below is a video showing my full workflow:

In this process, I first download suitable animation assets from the Mixamo platform and import them into UE5. Inside UE5, I use animation retargeting to transfer these motions onto my gargoyle asset.


However, I encountered a serious issue: the gargoyle’s neck bone gets distorted after being imported into UE5, even though this problem does not exist at all in Maya.

This distortion also affected my plan to implement the gargoyle’s flying chase behavior later on, and I eventually had to abandon it because the twisted neck bone made the animation unusable. I suspect this is due to some problematic in-between bones in the neck hierarchy.

As a result, if I apply the retargeted animation directly, the gargoyle’s head flies off unnaturally.

To fix this, I export the retargeted FBX from UE, adjust the neck joints by rotating them properly in Maya, bake the keyframes there, and then export the corrected FBX back into UE5. I also added some additional movement details to the wings and arms, and removed unnecessary keyframes to make the animation cleaner and more efficient.

This whole process was actually much more exhausting than writing the blueprint itself, because I had to constantly export and import files between two different programs. Also, since the gargoyle’s running animation looked rather awkward and lacked a sense of threat, I ultimately decided not to use the running animation asset in the final version of the game:

Gargoyle Animation Assets

Write gargoyle blueprints

The blueprint for the gargoyle is almost the same as the one for the zombie, except for the different animation assets and animation blueprint. However, due to the different attack hit detection, I will rewrite its blueprint from scratch:

While debugging the gargoyle enemy’s death blueprint, I ran into a tricky problem. My original plan was to have the enemy play a death animation first, and then switch to simulate physics so the body could fall naturally. However, in practice, this caused major issues. Even after the animation finished playing, the gargoyle would still retain its previous AI logic. It continued executing its “AI Move To” behavior and kept rotating to face the player:

I tried many solutions, such as forcefully disabling the AI Controller, disconnecting the movement logic in the Blueprint, and even delaying the state transition. But none of them worked well. The gargoyle always ended up in a weird “half-dead” state where its AI was still running, which clearly wasn’t right.

In the end, I decided to abandon the death animation altogether. When the gargoyle’s health reaches zero, I immediately enable simulate physics and turn it into a static mesh. It then collapses naturally, and I set it to be destroyed after 10 seconds. Although this skips the animation transition, the enemy now falls quickly and smoothly, avoiding all those awkward leftover AI behaviors. In fact, the overall effect feels more natural this way.

This reminded me that many games actually handle enemy deaths like this. In several games I’ve played, enemies don’t bother with long death animations—they just drop instantly, with ragdoll physics handling the fall. Boss fights are a bit different, often featuring dedicated death animations, but even then the corpses usually disappear quickly, often accompanied by some kind of special effect.

With that comparison in mind, I think my current approach is a reasonable one. For regular enemies, keeping the combat flow smooth is more important than playing a full death animation. And since the gargoyle is just a basic dungeon enemy in this case, there’s no need to spend extra resources on its death presentation.

Even though I had to let go of my original plan, I’m actually quite happy with how it turned out. In game development, we often have to balance ideal designs with practical realities.

Categories
Advanced and Experimental 3D Computer Animation Techniques Sessions with George

Week 10 Final submission and reflection

Body mechanics and facial acting animation show reel:

This week, I completed the final polish for my body mechanics and facial acting animation. I focused on refining the motion details and making subtle adjustments based on the feedback I received. In addition, I worked on setting up the lighting and did the final rendering to present the animation in its best possible quality.

Lighting part:

Three-point lighting

I used three-point lighting to light my character, rendered it in Maya, and then imported it into Nuke for some modifications and adjustments.

I used a warm key light to ensure that the main part of the character feels warm and has a sense of volume. I also added a cool fill light for the darker areas, so the shadows still have details, and the overall image has a nice contrast in color temperature, which prevents it from looking flat. To make the character stand out from the background, I placed a back light behind the character to outline the edges and make the character look more three-dimensional in the scene.

For the background, I didn’t just use a flat color or a simple texture. Instead, I considered that different animation moods require different background tones, so I adjusted the background colors based on the emotion and story. In addition, I added a background light, which makes the center area of the background — right behind the character — slightly brighter than the surroundings. This naturally draws the audience’s attention to the character and also adds more depth to the image, so it doesn’t look too flat.

Post processing part:

Since I set the color space in Maya to ACES, it gives me a wider color gamut and more variation compared to traditional sRGB. However, most monitors can only display sRGB colors, so I need to import the ACES EXR files into Nuke and output them as sRGB PNG files. In Nuke, I can also do some color grading to fine-tune the final look.
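At the end of that conversion sits the standard sRGB transfer curve. As a conceptual sketch (the full ACES pipeline in Nuke also involves a tone-mapping/output-transform step; this shows only the final linear-to-sRGB encoding as defined in IEC 61966-2-1):

```cpp
#include <cmath>

// Linear -> sRGB encoding: a linear segment near black, a 2.4-exponent
// power curve elsewhere. This is the last step when writing a display-ready
// sRGB PNG from linear/ACES render data.
inline double LinearToSrgb(double c) {
    return (c <= 0.0031308) ? 12.92 * c
                            : 1.055 * std::pow(c, 1.0 / 2.4) - 0.055;
}
```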

After getting the rendering result of the acting animation, I found that probably due to an issue with the model itself, the character’s eyes had some strange shapes, like small triangles. So I used roto in Nuke to mask them out and fix it.

Animation Reflection:

Looking back at this semester, I feel I’ve made real progress in understanding how to keep a character alive on screen even when they’re not moving much. One of the biggest things I learned is the importance of moving holds and copied pairs. At the start, I thought holding a pose was just about freezing the keyframes for a few frames. But through practice, I realized that in 3D animation a completely frozen hold doesn’t work, because everything looks stuck.

By learning to use copied pairs properly, I started adding tiny adjustments during a hold, like a small leg press, a tiny weight shift, or a head tilt. It’s a simple trick, but it makes the animation feel much more polished and believable.

However, I also found that adding movement to a very short hold is not as easy as just pushing keys around. If the timing isn’t planned well, those small movements can look shaky or unintentional. So this semester taught me that moving holds are not just a technical step; they force you to really understand timing and spacing at a deeper level.

Another area I improved on is planning the flow of actions. For example, my timing at the beginning often felt too evenly spread out or slightly off rhythm, sometimes too slow where it should be snappy. With feedback, I learned to think more in terms of slow-in, fast-out, slow-in, and to use anticipation and follow-through properly. It’s not just about making big poses but about how those poses connect smoothly.

For acting animation, I learnt not to stick too rigidly to the reference, especially when the character model has its own limitations. For example, I struggled with the crossed-arm pose at first because the character’s chest is large, so the arms would clip through it. George’s feedback helped me realize that it’s okay to adapt the pose to fit the model better. By letting the arms hang naturally, I avoided unnecessary problems and freed up more time to polish the facial animation, which often has more impact on the final performance.

Refining facial expressions also taught me a lot. Before, I didn’t pay much attention to how important the lines of the eyebrows and mouth shapes are for guiding the viewer’s gaze and strengthening the emotion. Now I pay more attention to making the eyebrows form a clean, continuous curve and giving the corners of the mouth a clear direction and shape. Additionally, details like whether the teeth are shown, how much they are shown, and whether the angle is natural all have a big impact on how natural the expression feels.

I previously struggled with body movement during dialogue. I tended to animate the whole body shifting, but real people usually keep their lower body relatively stable. By focusing more on the shoulders and upper torso, I can keep the lower body grounded and make the dialogue acting look more natural.

This semester, George also taught us the technique of using spline for blocking. It helped me spot unnatural transitions earlier and quickly block out the character’s body rotations in an easy way.

Categories
Advanced and Experimental 3D Computer Animation Techniques Sessions with George

Week 9 edit spline – facial performance animation

This week, I received valuable feedback from George. He pointed out that the rhythm of my animation could be improved by refining the timing and using more moving holds. Following the same principles of motion I’ve been practicing, I focused on letting the character’s movement ease in and out — starting slow, speeding up, and then slowing down again. To achieve this, I placed more keyframes around the main poses and kept fewer in-betweens, so that the action flows clearly from pose to pose.

Eyebrow Offset

I need to add an offset to the eyebrow movement so that the eyebrows do not move exactly in sync with the eyes. Instead, they lag slightly behind the eye motion, creating a more natural and appealing secondary motion. This subtle delay helps convey more believable facial expressions and adds an organic feel to the character’s performance.

Focusing on the Nose Position

In this week’s feedback session, George pointed out that one of the key areas I should focus on is the position of the nose. Since the nose is the central element of the face, it serves as a good reference point for the overall facial movement. By observing the nose’s position, I can better understand the main arc of the head and face, and translate that into more dynamic motion.

George noted that some of my nose trajectories were too linear. To improve this, I’m trying to think of the nose movement more like a bouncing ball, giving it subtle curves and arcs instead of straight lines. This helps add life and fluidity to the facial animation and makes the performance feel more expressive and believable.

Animation Rhythm and Timing:

Following the same principles of motion that I’ve been practicing, I focused on refining the rhythm of the animation by adding moving holds. This means the character’s motion eases in and out — starting slow, accelerating, and then slowing down again. To achieve this, I placed more key frames around the main poses and fewer in the in-betweens. This pose-to-pose approach helps emphasize the clarity and impact of each major action.

While this method gives the animation better weight and more natural timing, I realized I still need to pay attention to how evenly I space the moving holds. Sometimes, I tend to hold a pose for too long, making the movement feel stiff instead of alive. Also, I noticed that my transitions between poses can sometimes lack subtle overlapping motion, which slightly reduces the fluidity.

Mouth Pose:

I noticed that the corners of the mouth in my animation are quite sharp, and the lower lip has an exaggerated, obvious curve. In reality, the lower lip usually stays closer to a gentle horizontal line or has a softer, rounder shape.

Because of this, the mouth shape in some of my poses looks slightly stylized and unnatural. To improve this, I plan to adjust the mouth poses to make the corners smoother and the lower lip more subtle and organic. By doing so, the facial expressions should feel more believable and better match the overall naturalistic look I’m aiming for.

After editing:

Categories
Advanced and Experimental 3D computer Animation Techniques Project 2

Week 6 Continue building the map

This week, I continued working on level construction, with a focus on building the dungeon area.

Working process:

  • 00:00 – 10:52 : Completed the construction of the second half of the dungeon.
  • 10:52 – 20:50 : Made the landscape and flowers in the middle area.
  • 20:51 – 30:25 : Built the bridge and the final gate.
  • 30:29 – 39:56 : Used physics-based drops to scatter debris and built the rocky walls.
  • 39:56 – 47:57 : Wrote the blueprints for the small gate and key.
  • 47:57 – end : Refined the space behind the corridor and the small door.
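The small gate and key blueprint boils down to a simple check: the gate opens only if the player’s inventory contains the key. A rough sketch of that logic in plain Python (the `Player` and `Gate` names are my own illustration, not actual Blueprint nodes):

```python
# Sketch of the gate/key logic from the blueprint, rewritten as plain Python.
# Class and method names are illustrative, not real Unreal Blueprint nodes.

class Player:
    def __init__(self):
        self.inventory = set()

    def pick_up(self, item):
        self.inventory.add(item)

class Gate:
    def __init__(self, required_key):
        self.required_key = required_key
        self.is_open = False

    def try_open(self, player):
        # Equivalent to the blueprint's branch node: open only with the key.
        if self.required_key in player.inventory:
            self.is_open = True
        return self.is_open

player = Player()
gate = Gate(required_key="dungeon_key")
print(gate.try_open(player))   # no key yet -> False, the gate stays closed
player.pick_up("dungeon_key")
print(gate.try_open(player))   # key collected -> True, the gate opens
```

In the actual blueprint this is an overlap event feeding a branch, but the decision itself is the same key-in-inventory check.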

In the game world, the player starts in a grand hall filled with knight statues. This hall was once a sacred place dedicated to the worship of heroes and the goddess. Together, they sealed away a powerful being known as the Serpent.

Though they couldn’t destroy it completely, they managed to imprison it deep within this place. To guard the seal, the people built this ancient site and placed stone gargoyles as protectors.

However, over the years, the goddess and heroes vanished, and the memory of their purpose faded. One day, a descendant of the nobility discovered that the Serpent’s blood possessed miraculous powers—it could cure diseases, and even bring the dead back to life. Driven by greed, he secretly constructed a dungeon in a side chamber next to the main hall, using condemned prisoners and corpses for inhuman experiments in an attempt to harness this forbidden power.

When the player reaches the end of the hallway, they’ll discover an underground space. While designing this area, I wanted to create a sense of complete darkness, with no light sources at all. To progress, the player needs to turn on their flashlight. This creates a circular highlight in the scene, which not only guides the player’s focus but also helps enhance the overall sense of immersion.
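The circular flashlight highlight is essentially a spotlight cone test: a point is lit when the angle between the light’s forward direction and the direction to the point is inside the cone. A minimal sketch of that math, as my own illustration rather than Unreal’s actual spotlight implementation:

```python
import math

# Sketch: cone test behind a flashlight's circular highlight.
# Illustrative math only, not Unreal Engine's spotlight code.

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def in_flashlight_cone(light_pos, light_dir, point, half_angle_deg):
    """True if `point` falls inside the flashlight's cone of light."""
    to_point = normalize(tuple(p - l for p, l in zip(point, light_pos)))
    forward = normalize(light_dir)
    # Cosine of the angle between the beam direction and the point.
    cos_angle = sum(a * b for a, b in zip(forward, to_point))
    return cos_angle >= math.cos(math.radians(half_angle_deg))

# Light at the origin pointing down +X, cone half-angle of 20 degrees.
print(in_flashlight_cone((0, 0, 0), (1, 0, 0), (10, 1, 0), 20))  # inside
print(in_flashlight_cone((0, 0, 0), (1, 0, 0), (10, 8, 0), 20))  # outside
```

Everything outside that cone stays black, which is what makes the highlight read as a circle and pulls the player’s focus.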

When creating this scene, I used the previously mentioned Packed Level Actor method to combine the statue, vessels, gargoyle, and dragon-head sculpture into a single level instance. This area represents the place where those mad experimenters offered blood in their rituals.

Used plugin: Dash

https://www.polygonflow.io/

I use Dash to help me organize assets. It gives a more visual overview of them; although you can also use filters in folders, Dash feels a bit more convenient.

It also lets me use physics simulation brushes to scatter objects on the ground, and vine brushes to create vines on walls. But these tools can be a bit tricky to use.

I also used Dash to draw some vines on the walls. However, when the project entered the later packaging stage and the pressure to optimize lighting and materials increased, I had to face a reality: the vine actor had grown too large and carried a heavy lighting and performance cost. For a game that needs to run smoothly and allow real-time player interaction, I had to delete the vines to keep the frame rate stable.

The same applied to the point lights on the torches in the knight hall. Because the knight statue actors are very large, too many dynamic lights would add significant overhead, so I set them to static lighting.

Compared to film production, this made me realize a core principle of game development: the balance between visual quality and playable performance. I need to make choices and spend my limited resources where the player will actually experience them.

Categories
Advanced and Experimental 3D computer Animation Techniques Project 2

Week 5 Build the environment of the game map

Assets I used:

1. Dark fantasy greatsword pack:

Dark fantasy greatsword pack | Unreal Engine 5 + UNITY – Naked Singularity Studio: https://www.youtube.com/watch?v=_xR6SHgfhPU

2. Morbid Pack:

Morbid Pack Volume 1 – https://www.fab.com/listings/8b88ac2e-9b50-4381-91d1-46683a89178b

My process of building environment:

I tried for a long time to create volumetric lighting to achieve the Tyndall beam effect, but I wasn’t successful. So I had to make a fake light beam myself:

I added cube models, stretched them, then duplicated and rotated them six times to form the light beam model.
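Spacing the duplicated cubes evenly is just dividing a turn into equal steps. A throwaway sketch of the yaw angles, assuming the six copies are spread over a half-turn (180°), since a stretched, flat card looks the same after rotating 180° about the beam axis:

```python
# Sketch: evenly spaced yaw angles for six duplicated planes forming a
# fake light beam. The 180-degree spread is my assumption: a flat card
# is visually identical after a half-turn about the beam axis.

copies = 6
angles = [i * 180.0 / copies for i in range(copies)]
print(angles)  # [0.0, 30.0, 60.0, 90.0, 120.0, 150.0]
```

In the editor this just means duplicating the stretched cube and adding 30° of rotation each time, so the beam looks solid from every viewing angle.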

I used two texture assets from UE5’s built-in engine content pack to create a light beam.

The effect I get:

Create packed level actor

During the asset setup stage, I followed this tutorial and learned a really useful method called “Create Packed Level Actor.” It’s a great way to combine many different assets: it regroups them into one big, rich new actor packed into the level, which can then be moved around freely.

The “Group” function in UE is really hard to use, because when rotating or moving, the pivot points of the parts need to be manually re-aligned. Otherwise, after rotation, their positions will shift and break the original composition.

Method of creating a packed level actor – start from 15:08

I used this method to build almost all the complex assets in the game, such as staircases, the knight statue hall, some complex combined statues, tables, and the platform where the serpent is placed.

However, this method also has some downsides. When I try to place an actor inside a packed level actor, sometimes it stops working. Also, when complex actors are nested together, modifying them can cause some models to disappear. If I go into the original level where the model was made, all the parts are still there—but when I drag the actor into a new level, some of them are missing.

This cost me a lot of time when making the knight statue hall. I nested individual knight statues with a full row of statues and added blueprint components for the torches. The problem was, when I tried to modify the actor at the second level, going back to the first level wouldn’t save my changes—so I ended up losing the parts I just modified.

Also, in this setup, the blueprint components for the torches couldn’t display correctly. So what I did was place the torches separately instead.

7:44 – 12:38

When building the stairs, I also used the “Create Packed Level Actor” function. I created a stair module, so I can combine them to form multiple staircases.

I combined four of them to create the main staircase.

When I first started building, I honestly didn’t know where to begin. The assets were like thin sheets of paper, so I had to manually align and connect the stair sections. Since there were gaps between the bottom of the assets and the stairs, I had to use single-sided walls to close those openings.

However, because these assets are quite modular and open-ended, there are many possibilities when it comes to rebuilding and reshaping the staircases.

Categories
Advanced and Experimental 3D computer Animation Techniques Project 2

Week 4 Write the enemy blueprint and Repaint the textures

This week, I had a lot on my plate — I devoted almost all of my time to this project. Trying out Blueprints and building the game was incredibly exciting for me, and along the way I encountered many challenges.

Here’s what I worked on this week:

1. Created the blueprint for the enemy zombies

2. Repainted the textures

In my last discussion with Serra, she suggested that I try modifying the model textures myself. Assets like the gun and the arm are way too common and a lot of people use them, so I’m planning to redraw the textures myself. Also, I noticed that the zombie and skeleton materials aren’t very high quality, so I redrew those too and tested them in Unreal Engine.

When I was painting the zombie textures, I wanted them to look more gruesome and fleshy. Although it’s not exactly bright, positive, or uplifting, it fits the overall game background.

In this game, the goddess and heroes sealed the Serpent but couldn’t kill it. Many years passed, and both the goddess and heroes disappeared without a trace. Some nobles discovered that the Serpent’s blood could cause mutations, and even had powers like resurrection and life extension. So, they built a dungeon near ancient ruins to conduct experiments. These zombies and skeletons are the tragic results of those experiments.

Skeleton texture – white and black:

Gun:

When I was painting the gun and arm skins, I actually really enjoyed it. As a player myself, I love buying skins in games, so having the chance to create them this time was exciting. I wanted to make the skins a bit more stylized and fun. Since the overall tone of the game isn’t very bright or lighthearted, I wanted the main character to look a bit more positive.

Arm:

When I was painting the arm, I added some paint-like graffiti, a fabric texture for the glove, and visible veins on the arm.

Categories
Advanced and Experimental 3D computer Animation Techniques Sessions with George

Week 8 spline – facial performance animation

This week, I converted my blocking to spline:

When I switched the animation to spline mode, I ran into some issues. I noticed that the character’s movements were too large and too fast, so I had to remove some keyframes. Otherwise, the character looked hyperactive—shaking back and forth while talking.

I did my best to adjust those keyframes, but that led to a new problem: after reducing the rotation in the waist, the poses of the head and neck changed unexpectedly. Because of this, I had to go back and tweak each pose again to make sure everything stayed accurate and flowed smoothly.
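Deleting redundant keys by hand follows a simple rule: a key can go if the curve barely changes without it. A hedged sketch of that idea in plain Python (not a Maya script, just the logic I apply by eye in the graph editor):

```python
# Sketch: drop keyframes whose value is close to the straight line between
# their neighbours, i.e. keys the spline would reproduce anyway.
# Illustrative only; in practice I judge this by eye in the graph editor.

def reduce_keys(keys, tolerance=0.1):
    """keys: list of (time, value) pairs. Keeps the first and last keys,
    plus any key that deviates from the straight line between the last
    kept key and the next original key by more than `tolerance`."""
    if len(keys) <= 2:
        return list(keys)
    kept = [keys[0]]
    for i in range(1, len(keys) - 1):
        (t0, v0), (t1, v1), (t2, v2) = kept[-1], keys[i], keys[i + 1]
        # Value the curve would have at t1 if this key were removed.
        predicted = v0 + (v2 - v0) * (t1 - t0) / (t2 - t0)
        if abs(v1 - predicted) > tolerance:
            kept.append(keys[i])
    kept.append(keys[-1])
    return kept

keys = [(0, 0.0), (1, 1.0), (2, 2.0), (3, 5.0), (4, 5.0)]
print(reduce_keys(keys))  # the key at t=1 sits on a straight line, so it goes
```

The key at t=1 lies exactly on the line between its neighbours, so removing it changes nothing, while the keys that shape the curve survive. That is roughly what I was doing when the character looked hyperactive: thinning out keys the spline did not need.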

I realized that I included too many expression changes during the blocking phase, which made the character’s face look overly busy. When all the facial features move at once, it becomes hard to find a clear visual focal point, so I think I should reduce some of the keyframes for the eyebrows and mouth to create a stronger, more readable expression. I’m hoping to get feedback from George in this week’s review session to help me address these facial animation issues.