I need to revise the first pose, as the line of the character’s shoulders is really important—it helps create a strong silhouette and adds contrast to the overall shape. To improve the pose, I plan to tilt the shoulders more, which should enhance the visual appeal and make the character’s stance feel more dynamic.
Another thing to pay attention to is the character’s teeth. Sometimes, it’s better to either show the teeth clearly or not at all—anything in between can look unintentional or awkward. When the teeth are visible, it’s important to be mindful of their position and angle. Adjusting the rotation can help make the teeth look more natural and properly integrated into the expression.
The shape of the mouth is also really important. Typically, the corners of the mouth have sharper edges, and the overall mouth shape tends to form a trapezoid—either slanting upward or downward. It’s crucial to adjust the direction of the mouth corners based on the character’s emotion, as it greatly affects the clarity and believability of the expression.
Here, I wanted to make the character feel more alive, so I added a hold in the animation.
I rotated his nose and chin, creating the effect that they haven’t quite caught up yet and are lingering in the previous position. I think this little pause may help convey a more believable and nuanced movement.
This week, I completed the facial blocking, and during class, I got some valuable feedback from George. His comments helped me see a few areas where I could push the expressions further and make the performance more readable.
At first, I was trying to closely match the reference pose, so I went with a crossed-arm position. However, since the character model has a larger chest, this caused noticeable clipping between the arms and the chest when the arms were tightly crossed. George pointed out that I didn’t need to make things harder for myself from the start and suggested a more relaxed approach. Instead of forcing the reference pose, he recommended letting the arms hang naturally. This not only avoids the model’s limitations but also frees up more time and energy for me to focus on the facial animation, which is a more important part of the performance.
The second issue I ran into was with the eyebrows—they weren’t forming a continuous line, but were instead separated, with a visible gap in the middle. This breaks the visual flow across the character’s face, making the expression feel less connected. Ideally, the eyebrows should form a clear guiding line that draws attention and supports the emotion. The mouth can then serve as a contrasting line in the opposite direction, creating a sense of visual balance and contrast in the facial expression.
Another issue is that the inhale before the character sighs isn’t very noticeable. To make the breathing action clearer, I could exaggerate the movement of the jaw a bit more. A slightly larger jaw drop would help sell the sense of the character taking in a breath, making the sigh feel more natural and expressive.
I encountered some major issues with the character’s body movement in my animation. The motion was too exaggerated, especially in the lower body. I had animated the character’s lower half shifting during dialogue, but in both the reference and real-life observations, people tend to keep their lower body relatively still while speaking, with most of the rotation happening in the upper body. So, I decided to remove the lower body movement entirely.
When re-keying the character’s turning animation, George showed us how to use spline mode even during the blocking phase. I found this technique really helpful—switching between stepped and spline made it much easier to spot overly large movements or unnatural transitions early on.
This week, I refined the character’s anticipation movement before she jumps. She now holds a moving pose for nearly three frames, creating a subtle moving hold. During this hold, her legs press down slightly to avoid the pose looking frozen or static.
Since this moment is very brief, I aimed to introduce just a hint of motion in her legs. If the movement was too exaggerated, it would feel unnatural or distracting. So I kept the motion subtle, just enough to give it life without breaking the flow of animation.
In his feedback, George pointed out that my character’s legs were pressing down too much during the anticipation pose—so much so that they even made contact with the mat. He suggested reducing the range of this motion to keep it more grounded and believable.
The goal is to avoid movements that a real human body couldn’t realistically perform. Taking his advice, I adjusted the leg motion to be more subtle and natural, maintaining the energy of the anticipation without breaking the realism of the animation.
Prepare facial expression animation:
I found a line on the website George recommended (https://www.moviesoundclips.net/sound.php?id=296). The context is that Legolas says to an orc who’s about to be killed: “I would not antagonize her.” It means: “If it were me, I wouldn’t provoke her.” (She’s too powerful—provoking her wouldn’t be a wise choice.)
However, the original scene doesn’t carry much emotional variation—it feels like it stays within a single emotional tone throughout. So I wanted to imagine a new scenario to explore a broader range of emotional expression for my animation.
I set a scene where the protagonist hears others talking about “her,” and he falls into regret, guilt, and sadness. If he hadn’t been so impulsive back then, maybe she wouldn’t have ended up like this (maybe she died or is seriously injured and unconscious).
Then the protagonist says to those people: “I would not antagonize her.” There’s some anger toward himself, and also sadness, guilt, and uneasiness. He doesn’t know how to face the others’ eyes.
I recorded some videos and tried really hard to recreate that emotion, but they all looked kind of strange. Then George suggested that when I record, I shouldn’t look directly into the camera lens. Instead, I should look somewhere else, because the character isn’t talking to the camera—they’re talking to someone who’s standing in a different position.
So I re-recorded the video and picked one take. I chose two main poses: at the beginning, the character is standing and facing away. When he hears the conversation, he turns and looks in that direction, then says the line.
This week, I completed the following tasks in the character blueprint:
Set up footstep sounds
Added mouse input for aiming down sights
Created a transition to zoom the head during aiming
Adjusted walking speed while aiming
Added depth of field effect when aiming
Set footstep sounds specifically for aiming movement
Created the weapon blueprint
Wrote a custom event for weapon firing
Added a weapon state machine
Added weapon sound effects, bullet effects, and reload effects
Implemented bullet hit effects
Created a function to count bullets
Built the reload blueprint
Added natural bullet spread
Added recoil effect when firing
Added bullet glow effects
This week, my main focus was still on blueprint scripting. Once the character and enemy blueprints are done, I can just drag them into the map and use them directly. Although there are many follow-up steps afterward, the blueprints are definitely the top priority.
After finishing them, I’ll move on to reworking some model textures, building the scenes, creating the UI and main menu, and finally packaging the game.
Last week’s recording wasn’t very clear, and Windows Game Bar can’t capture multiple windows well.
Especially when I opened the blueprint editor, it kept recording the main window—pretty awkward!
This week, I found a recording software called OBS Studio, and I’ve discovered it works really well.
Set up footstep sounds:
I selected the audio files and created a single cue. Then, I added an audio modulator to unify the pitch and volume across these sounds, while also introducing random variations in pitch and volume. This way, when played back, the audio feels much more dynamic and natural:
Then, I went back to the animation asset and added a “Notify” — “Play Sound” — selecting the sound cue I just created.
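Outside the engine, the randomization that the cue and modulator perform can be sketched in a few lines of Python. The clip names and pitch/volume ranges below are illustrative placeholders, not values from my project—the real work happens inside Unreal’s Sound Cue nodes:

```python
import random

def make_footstep_player(clips, pitch_range=(0.9, 1.1), volume_range=(0.8, 1.0)):
    """Return a function that picks a random clip with randomized pitch/volume."""
    def play():
        clip = random.choice(clips)           # avoid repeating the same file every step
        pitch = random.uniform(*pitch_range)   # slight pitch variation
        volume = random.uniform(*volume_range) # slight volume variation
        return clip, pitch, volume
    return play

play_footstep = make_footstep_player(["step_01.wav", "step_02.wav", "step_03.wav"])
clip, pitch, volume = play_footstep()
```

Each call returns a slightly different combination, which is exactly why the footsteps stop sounding machine-like.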
Added mouse input for aiming down sights
Similar to before, I added mouse input, set it up as IA_focus (for aiming function), and linked it inside the character blueprint. This lets me control the focus with the mouse smoothly.
I used a timer to make the aiming zoom effect transition more smoothly. This approach will also come in handy later for toggling the flashlight and handling other action transitions.
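The timer-driven zoom boils down to interpolating the camera’s field of view over a fixed duration, one small step per timer tick. A minimal Python sketch of that idea—the FOV values and tick rate here are assumptions, not my actual settings:

```python
def zoom_steps(start_fov, target_fov, duration, tick=0.02):
    """Yield one interpolated FOV value per timer tick until the target is reached."""
    steps = max(1, round(duration / tick))
    for i in range(1, steps + 1):
        t = i / steps  # normalized progress, 0..1
        yield start_fov + (target_fov - start_fov) * t

# zooming in from a 90-degree FOV to 60 degrees over 0.2 seconds
fovs = list(zoom_steps(90.0, 60.0, duration=0.2))
```

The same tick-until-done pattern is what makes the approach reusable for the flashlight toggle and other timed transitions later.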
Adjusted walking speed while aiming:
In the speed calculation function, I added a condition: if the character is aiming, the selected float value becomes 150 (the walking speed while aiming).
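In plain code, the branch I added amounts to something like this—a rough sketch of the selection logic, with aiming taking priority over running. The 150/300/750 values come from my setup; the function name itself is just illustrative:

```python
def movement_speed(is_aiming: bool, is_running: bool) -> float:
    """Pick the character's max walk speed; aiming overrides running."""
    if is_aiming:
        return 150.0  # slow, deliberate walk while aiming down sights
    return 750.0 if is_running else 300.0
```

In the blueprint this is a select-float node driven by the aiming boolean, but the priority order is the same: check aiming first, then running.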
Then I added a blueprint for the weapon. Functions like reloading and ammo counting needed to be split out of the character blueprint; otherwise it would become a mess.
Then, in the weapon blueprint, attach (or spawn) the weapon to the character’s arm.
In the weapon blueprint, I set up the following event icons: fire_weapon, change bullets, and recoil.
In fire_weapon, I configured three states: start firing, firing, and stop firing. Then I used play animation montage to connect the animation assets to each of these states.
The rest of the operations are all shown in my screen recording:
This week, I mainly worked on setting up the character blueprint, including:
Character capsule setup and camera setup
Character state machine setup (idle, walk, run)
Implemented camera rotation control using mouse movement
Added shift key input to trigger the running state and linked it to the state machine, enabling communication between the character blueprint and the animation blueprint
Added weapon drag effect (weapon swings slightly caused by camera movement)
Added character detail lighting (I wanted to create a warm light to simulate a body-attached light source, combined with a cold light for contrast)
Effect testing
Before getting started, I will provide all the assets I used and the link to the tutorial I followed. Since the tutorial I found is from the Chinese platform Bilibili, I won’t be able to share a YouTube link for the video resource.
3. Serpent Model I Created Last Semester — Collaboration Units (full process: modeling, UVs, texturing, rigging)
4. Gargoyle Model I Created Last Semester — Collaboration Units (full process: modeling, UVs, texturing, rigging)
5. Tutorial:
The course is taught by the creator of the game “Deathly Stillness”, which is available on Steam. Through the course, I learned how to create character and enemy blueprints. The character and zombie asset packs (including skeletal meshes, animation assets, and audio assets) were provided as part of the course and used for practice.
At the beginning, I set up the Animation Blueprint, Character Blueprint, and Blend Space for the character.
When I started setting up my character, the first step was to create a Character Blueprint. Inside the blueprint, I added the character mesh and carefully adjusted its position so it sat correctly within the capsule collider. This ensures that the character’s mesh and collision boundaries are properly aligned, which is really important for smooth movement and interaction.
I applied the same process for setting up the camera. I attached the camera to the character blueprint and positioned it in a way that works well with the gameplay perspective I want to achieve.
These might seem like basic steps, but getting them right early on saves a lot of trouble later in the pipeline—especially when starting to work with the animation blueprint and player controls.
Since the screenshots were taken later in the process, they include some updates like the arm texture and a few event icon features I added along the way.
Character state machine setup (idle, walk, run):
Inside the Animation Blueprint, I set up the State Machine, including Idle, Walk, and Run states.
Idle
Move
In the Blend Space, I blended the Idle, Walk, and Run animations, and set up the corresponding movement speeds.
For example, I set the walking speed to 300 and running speed to 750. Since my scene is quite large, I needed the character to move a bit faster to fit the scene.
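Conceptually, a 1D blend space maps the current speed to blend weights across the three clips. Unreal computes this internally; the piecewise-linear sketch below is my own illustration of how the weighting behaves between the 300 and 750 thresholds:

```python
def blend_weights(speed, walk_speed=300.0, run_speed=750.0):
    """Return (idle, walk, run) weights for a 1D blend space over speed."""
    if speed <= 0.0:
        return 1.0, 0.0, 0.0
    if speed < walk_speed:
        t = speed / walk_speed                              # idle -> walk
        return 1.0 - t, t, 0.0
    if speed < run_speed:
        t = (speed - walk_speed) / (run_speed - walk_speed)  # walk -> run
        return 0.0, 1.0 - t, t
    return 0.0, 0.0, 1.0
```

Halfway between the walk and run speeds, the character is an even mix of both clips, which is what makes speeding up look continuous instead of snapping between animations.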
Implemented camera rotation control using mouse movement:
I also changed the bool input to Axis 2D, since the mouse controls the camera rotation based on X and Y movement on the screen plane. Using a bool would only allow simple on/off states, while Axis 2D lets me read continuous input values, which is more suitable for smooth camera control.
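The Axis 2D input delivers two continuous deltas every frame. In code form, the camera update amounts to something like this sketch—the sensitivity and pitch clamp values are assumptions for illustration:

```python
def apply_look(yaw, pitch, mouse_dx, mouse_dy, sensitivity=0.1,
               pitch_min=-80.0, pitch_max=80.0):
    """Add scaled mouse deltas to camera yaw/pitch, wrapping yaw and clamping pitch."""
    yaw = (yaw + mouse_dx * sensitivity) % 360.0
    pitch = max(pitch_min, min(pitch_max, pitch + mouse_dy * sensitivity))
    return yaw, pitch
```

A bool could only report pressed/not-pressed, but this takes the full 2D delta, which is why the rotation feels smooth rather than stepped.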
I used the Input Mapping Context to organize all the input settings and define the input devices, making the control setup more clean and modular.
I added a Look event in the event graph to call IA_look:
Added shift key input to trigger the running state and linked it to the state machine, enabling communication between the character blueprint and the animation blueprint:
In IA_run, I used a bool because it’s a simple on/off check—whether the key is pressed or not.
Similarly, I set up the Shift key input within the Input Mapping Context.
I added an if-run boolean check in the running graph. Then, I created a speed calculation function that uses this boolean—if the character is running, it sets the speed to 750, otherwise to 300.
Added character detail lighting (I wanted to create a warm light to simulate a body-attached light source, combined with a cold light for contrast):
This was my first attempt at adding a warm light attached to the character, so I added a point light inside the capsule.
Later during testing, I adjusted the light settings and created a toggleable flashlight-like feature for illumination.
IA_open_light — the light-toggle function. I added this feature at a later stage, so I recorded it using the current version of the project files.
In this shot, George suggested that I redistribute the timing of each action, so the overall animation follows a slow-in, fast-out, slow-in principle.
Currently, my timing is more like slow-slow-fast-slow. Before the character stretches her leg toward the body, I added a preparation pose to build up energy, but George suggested removing it because it feels a bit awkward.
He also recommended speeding up the actions after the character lands and as she stands up, as well as making the following step forward quicker.
So I made some adjustments, but I noticed there are still some issues. For example, when the character is preparing to jump, I had her hold the pose completely still for three frames, but this made the movement feel a bit stiff. If I try to add a slight movement during those three frames, it ends up looking twitchy because the hold is so short.
This week, I received feedback from George, made some changes to the key poses, and deepened my understanding of the animation.
It’s clear that the character’s foot poses need significant adjustments. As the end point of the limbs, the feet tend to have a slight delay at the beginning and during the movement — they don’t move exactly in sync with the legs or on the same level.
Moving Hold means that when a character is holding a pose, they shouldn’t be completely still — there needs to be some slight movement. Even something as subtle as breathing, a small head tilt, or a gentle body shift can make the character feel more alive. If a character stays perfectly frozen for a few frames, it immediately gives off a “mannequin” vibe, especially in 3D animation where complete stillness feels unnatural. Compared to 2D, where we can use line wiggles or stylized shaking to fake subtle movement, in 3D we really have to animate those micro-movements by hand.
My personal understanding is: don’t let your character die on screen. Even just a few frames of subtle “fake movement” can add tension and realism to the shot.
Copied Pairs
Copied Pairs are a specific technique used to create moving holds. The process is actually pretty straightforward — you take your key pose and duplicate it, then move the duplicated keys a few frames forward on the timeline. This creates a pair of identical keyframes that hold the pose for a bit.
At first it might seem like a lazy shortcut, but in practice, you can slightly adjust the in-between to add a soft transition — like the character shifting weight, breathing, or gently swaying. It creates a subtle sense of motion within a hold.
I see this as a way to refine your blocking — beyond just having key poses and breakdowns, you’re adding mini-transitions that give your animation more depth and rhythm.
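Expressed as data, the copied-pair trick is just duplicating a key a few frames later on the timeline. A small Python sketch of the idea—the pose names and frame numbers are stand-ins; in Maya each key would hold full control values:

```python
def copy_pair(keys, frame, hold_length=4):
    """Duplicate the key at `frame` `hold_length` frames later to create a hold.

    `keys` maps frame number -> pose; the in-between can later be
    softened in the graph editor to turn the hold into a moving hold.
    """
    held = dict(keys)  # leave the original key set untouched
    held[frame + hold_length] = keys[frame]
    return held

keys = {0: "crouch", 12: "arms_open", 30: "jump"}
held = copy_pair(keys, 12, hold_length=4)
# the pose keyed at frame 12 is now also keyed at frame 16
```

The pair of identical keys is the scaffold; the life comes from nudging the curves between them afterward.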
This is my attempt at moving holds and copied pairs:
I added a pause to the pose where the character opens her arms — I felt like she was building up energy for the next sequence of movements, kind of like a gymnast preparing for a routine. Then I copied and pasted the keyframes and extended their duration. In the spline phase, I adjusted the curves to give her a slight sense of motion, so it feels like she’s subtly shifting, not completely still.
BREAKDOWNS & Arcs
During blocking, it’s important to pay attention to the movement of the hands, limbs, and the COG — making sure their motion follows arcs instead of straight lines.
George suggested making the animation shorter, since there were too many poses and it lasted too long, so I made some changes to my plan. After the character stands up, she will take one step forward and stop there:
Reference video:
Workflow checklist:
When I’m animating, it’s easy to feel overwhelmed by the sheer number of steps between the initial idea and a polished final shot. This week, our teacher introduced his workflow checklist to us, and it’s really helpful.
Planning
Thumbnails: Quick sketches to explore poses, ideas, and storytelling to show key poses. Focus on silhouette, emotion, and staging. No need for detail — clarity is the goal.
References
Journal workflow
Layout
Staging
Composition
Story
Blocking
Stepped key poses
Fundamental approach
Easy to edit
Inner monologue timing tool
Arcs
Weight check
Compare thumbnails
Watch at speed
Splining
Work in small sections
Inbetween: adding secondary/key breakdowns for smoother transitions.
Inbetweens are the frames that bridge the gap between key poses. They smooth out transitions, add weight and timing nuance, and define the rhythm of a movement. They can be shaped in the graph editor by adjusting curves or by adding breakdown keys.
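The curve adjustment can be thought of as choosing the easing function applied between two keys. Here’s a minimal sketch of a smoothstep inbetween on a single channel—my own illustration of the idea, not Maya’s actual curve math:

```python
def ease_in_out(t):
    """Smoothstep easing: slow-in and slow-out between two key poses."""
    return t * t * (3.0 - 2.0 * t)

def inbetween(pose_a, pose_b, t):
    """Interpolate one channel between two key poses with easing applied."""
    e = ease_in_out(t)
    return pose_a + (pose_b - pose_a) * e
```

At t = 0.25 the eased value lags behind the linear one, which is exactly the slow-in feel you tune by flattening the tangents near a key.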
Arcs
Polish
New mindset: reviewing your animation with fresh eyes.
Non-performance texture
Fix tiny, often-forgotten details—breath cycles, toe splay, subtle eye movements around blinks. These are what truly bring a character to life and separate good animation from great.
This week, our teacher introduced some experimental opportunities we can explore this semester, so I’m thinking I might take this chance to push myself and create an FPS shooting game.
Recently, the second season of The Last of Us has been airing, and I really love its art style. The decaying cities are covered by plants, and people embark on a survival journey.
What sets this work apart from other zombie-themed stories is that it reflects a crueler, more heartless world. For example, as the cordyceps fungus spreads unchecked, the government not only fails to shelter its people but, under the pretext of transporting them to safety zones, rounds up the uninfected from surrounding towns, only to kill and cremate them in the wilderness. The reasoning is that if these people die now, they can’t become infected later, which reduces the higher-ups’ workload.
After the outbreak, other survivors desperately scavenge for resources in order to live. Some even use lies to lure others in, only to kill them and steal their supplies when they least expect it. In that world, humans seem to have become even more terrifying than the zombies.
Inspired by this, I want to make a zombie FPS game. However, I need to think about how to make it and what the protagonist’s purpose in the game would be.
So, I’ve been researching the art design of The Last of Us to get some inspiration:
We can see that most of the city scenes are open and spacious, while enclosed areas usually feature a focal point that draws the player’s attention, along with strong lighting contrasts to guide them in the right direction. Since I’ve never tried designing game levels before, I don’t really have a clear idea of where to start.
I’ve built a game level in Unreal Engine where the player can move back and forth. The layout is shaped like a “U”. The player spawns underneath a collapsed overpass and needs to find the correct path by navigating through the wreckage and abandoned cars. Along the way, zombies will appear to challenge the player.
In terms of gameplay expectations, the player can see their destination—an area filled with tall buildings—in the distance from the overpass. This visual cue encourages the player to explore and figure out how to reach that location. When the player discovers a way to climb the collapsed structure and get onto the overpass, they gain the opportunity to observe their surroundings from a higher vantage point.
The first area is designed like an isolated island, so the player needs to traverse vehicles and the overpass to reach the second section. I want the player to do more than just stand in one place shooting zombies—I want them to actively explore the city, using collapsed structures and debris to discover new paths and move from one location to the next.
The second area the player reaches is partially submerged in water, so they have to move forward by stepping on sunken cars. If the player falls into the water, they die.
I referenced this picture:
Then, while building the level, I imported Unreal Engine’s first-person shooter template to test it. Through testing, I discovered that there are some issues with the scene.
For example, the player needs a strong motivation to explore the map and receive positive feedback—but figuring out how to design that feedback loop is quite challenging.
The second issue is that the scene is too open and expansive, which results in endless skylines in the distance. If I were to build out all the surrounding buildings, it would require a massive amount of work.
A massive map would make the entire gameplay flow more complex. So I think I need to find an alternative approach, even though I’m quite reluctant to let go of this level.
Find Plan 2:
How can a zombie shooting game remain engaging when the scene is smaller in scale? How can I make the decaying environment more interesting?
I was reminded of Escape Room, where the protagonist is tricked by a company—thinking they were just joining a regular escape room game, only to realize that failure means death. So I started wondering if I could incorporate something similar into my game.
For example, the protagonist could also be participating in what they believe is a live-action immersive experience, but it turns out they’ve been deceived and thrown into a zombie-filled environment. They must outsmart the zombies and gather materials to repair the safehouse door lock in order to escape. Meanwhile, everything is being livestreamed by the company for profit. When the protagonist defeats a zombie, viewer reactions and comments would pop up in the bottom left corner of the screen, showing the audience’s responses in real time.
However, although this idea is quite rich, it would be difficult to implement. For example, how would I write the blueprint for item delivery (e.g., finding and picking up a tool to fix the safehouse door lock), and how would I design a real-time feedback system in the bottom left corner similar to a live stream? I searched online for tutorials but couldn’t find much to reference, so I need alternative solutions.
Find Plan 3:
I thought I could continue my game design based on some tasks I completed last semester. In our group project last semester, we designed a dungeon short film, but due to some factors, the entire scene turned out quite dark, and my model didn’t look great in the scene. So, I thought I could design my own dungeon game, using the gargoyle as one of the enemies.
I tried using Mixamo’s animation library and motion mapping to implement actions like attack, chase, idle, knockdown, and hit reactions. However, I still need to adjust the animation blending and manually keyframe the wing animations:
This way, I can apply the model assets I previously created to design a game, which I think will be quite meaningful.
So, I plan to create a first-person, single-player game set in a dungeon. The protagonist is a member of a mysterious organization, sent to investigate supernatural occurrences and deal with the roaming monsters. The story’s opening is somewhat similar to the beginning of Resident Evil, where the protagonist goes to a strange village or eerie mansion to investigate and ends up killing the leader inside.
Although the game might not have deep spiritual themes or messages, I want to learn how to write blueprints and the process of creating game enemies, and I believe I will gain a lot from it.
Art style
In terms of art style, I suddenly thought of a game I played back in primary school: Dark Meadow: The Pact. It was developed by Phosphor Games Studio, based in Chicago, built with Unreal Engine 3, and first released on iOS (iPhone and iPad) on October 5, 2011.
The player wakes up in an abandoned hospital, having lost all memories of the past.
The only “guidance” comes from a man speaking through a loudspeaker. He claims he was once trapped here as well and tells you that the only way to escape this nightmare is to kill a being known as The Witch.
As the player battles monsters in the hallways, gains experience, and ascends to higher floors, the story unfolds—eventually revealing that The Witch is, in fact, the player’s own daughter.
Game Plan:
So, I wrote down all my game ideas to make sure I won’t forget them later. I’m planning to create two maps: the first one is a forest map on the surface, and the second is a dungeon map. I hope I’ll have enough time to finish both of them: