- GDC, Zeemotes, and Kinect combat on Gamasutra
- Anthony’s new company, Kermdinger Studios, is wildly successful-esque
- Detection, feedback, rhythm, and queuing
- First playtest with new combat gestures
- Global Game Jam!
- Meet the Power Claw
- Action in Motion is Back in Action!
- Playtesting and Polish
- Work Since Halves – Prototypes 5 & 6, effects, animation variations, more attacks!
- Preparing for Halves!
The team just got back from GDC last week – talk about exhausting! This week is our spring break, so we’re taking it easy. By crunching for next week’s major half-semester presentation! That’s right, crunch is how we relax now. This has been a key adaptation.
A minor point of hilarity: GDC attendees this year found free Zeemotes in their goodie bags. Totally caught us by surprise (at the time we chose to adopt it, Zeemote appeared to be quite frankly dead, which pleasantly turns out to have been a hasty judgement), but it’s nice to know that there’ll be a reasonable number of people who’ll be able to try out Action in Motion with its intended control peripheral. Just remember, we liked them before they were cool!
We’d also like to point you in the direction of an excellent article by Nick Adams from Blitz Game Studios on Kinect combat. Long-time Action in Motion fans will recognize some familiar mantras (“Exaggerating the player’s input can be used to create a more heroic experience” – why yes, I suppose we do agree with that!), but even for us it was definitely worth a read. We’re glad that more games are coming to market that focus on motion-control combat, and can’t wait to check out their title!
Anthony Palma, our endearingly bro-ish former producer and programmer who was part of the founding team, is off to an amazing start with his new indie dev company/project, Kermdinger Studios! They just won a major CMU venture pitch competition!
Are we surprised? No.
Proud? Heck yes.
Way to go, Anthony and team!
This post will be a little more technical from a design perspective, but it’s the conceptual gristle we’re chewing on as we continue to iterate and tune the combat system, and I for one find it delicious (stew it for long enough and it turns into gravy).
Looking at how players play a game like Devil May Cry or Bayonetta, there’s a clear trial and error phase where they figure out which buttons do what. “Ok, they make the dude/nun attack – simple enough”, says the player. But this isn’t Double Dragon – as the player experiments more, the complexity ramps quickly as attacks lead into each other, and it becomes apparent that the timing between button-presses is going to result in vastly different effects. Combos are a key element of hack-and-slash games, but what’s cut-and-dry with buttons becomes a messy web of broken assumptions when you add motion control to the equation.
Input and button mashing
The player’s initial assumption is that the game should always be ready to accept their input. In hack-and-slash games this illusion is broken pretty early because attacks for the most part can’t be interrupted with new attacks. This is the origin of the bad habit called “button mashing” – a button-mashing player is a frustrated player who doesn’t know when the game is ready for his input, so just keeps hammering the buttons constantly. The motion control equivalent of this is even worse – the technical term is “wild flailing”. A flailing player is confused, frustrated, and frankly a danger to himself and those around him (early last semester we had the bruises to show for it). Realistically, flailing around not knowing what’s going on is fun in its own way, but it’s not the sort of fun we’re going for.
Three main mechanisms come to mind that games can use to guide the player to mastery over input: feedback, queuing (which unfortunately has big caveats with motion control), and rhythm-emphasizing mechanics.
Feedback
How does a player even know when they should press a button? One answer is that they learn naturally over time, which relates more to rhythm, but a big component is that the game does everything it can to indicate to the player when it’s ready for input. Character animation that carefully juggles cultivating a sense of power/finesse/badassery vs. functional clarity, in combination with VFX and audio cues, will hopefully leave the player with no doubt as to when the character is ready to be told what to do.
Queuing (caveat emp-motion control-tor)
Let’s say the player is late in pressing a button (or in our case swiping their arm). That’s totally fine – they may have missed out on the opportunity to create a flowing combo, but their input will still be rewarded with the avatar attacking. But what if they’re early? Even hitting the attack button a moment too early can result in failure to attack and a clear perception of unresponsiveness in poorly designed systems. Think about when your computer is lagging and you hit a key on your keyboard while the system is “thinking” – did the key go through? Maybe I should push it again. Maybe a few more times. This is what we called out above: bbuttttton mashhhhhinng.
But on a keyboard when you hit a letter it WILL eventually show up, right? That’s what’s called queuing, and it’s not just for word processors – action games do it too. In Ninja Gaiden Black if I hit “X X Y Y Y” all at once in a flurry, Ryu Hayabusa will finish executing all the separate attacks comprising the Blade of the Dragon’s Tail combo, even though I entered them all before the first attack even finished. The study of how queuing systems work in different games deserves (many) article(s) to itself, but this is the basic concept.
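To make the basic concept concrete, here’s a minimal sketch of that kind of “remember everything” queue. This is purely illustrative (hypothetical names, not Ninja Gaiden’s or our actual code) – it just shows inputs being banked during one attack and replayed in order as attacks finish:

```python
from collections import deque

class InputQueue:
    """Buffers every press and replays them as attacks finish.

    A toy sketch of full button-queuing, not anyone's real system.
    """

    def __init__(self):
        self.pending = deque()
        self.current = None  # attack currently animating, or None

    def press(self, button):
        # Every press is remembered, no matter how early it arrives.
        self.pending.append(button)

    def update(self, attack_finished):
        # Called once per frame; starts the next queued attack as soon
        # as the current one completes.
        if attack_finished:
            self.current = None
        if self.current is None and self.pending:
            self.current = self.pending.popleft()
        return self.current

q = InputQueue()
for b in ["X", "X", "Y", "Y", "Y"]:   # the whole combo entered in one flurry
    q.press(b)
executed = []
for _ in range(5):
    executed.append(q.update(attack_finished=True))
# all five inputs execute, in order, long after they were pressed
```

Note that nothing is ever dropped – which is exactly the property that becomes a problem for motion control, as discussed next.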
Unfortunately, queuing like that, while effective in a button-based context, is exactly the opposite of what we want for our expressive motion control experience. By queuing the player’s movements, the game is essentially having the avatar lag behind the player. This is fine conceptually to some extent with button control, but in motion control, depending on the severity, it can be perceived as sluggishness, input lag, or even completely random unintended and unwanted attacking. Even more fundamentally, the ultimate goal with our concept of motion control is that the avatar feels as closely connected to the player’s body as possible, and queuing encourages the player to move faster than the avatar can keep up.
We expect to implement a limited form of queuing to avoid some blatant false negatives, but overall our real goal is to create mechanics that encourage and aid the player in pacing themselves in sync with the avatar.
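One way to picture that “limited form of queuing” – a hypothetical sketch with made-up thresholds, not our shipping logic – is to buffer at most one input, and only when it arrives inside a short grace window near the end of the current attack; anything earlier is dropped rather than queued:

```python
class GraceWindowBuffer:
    """Accepts one early input, but only inside a short grace window
    near the end of the current attack; earlier inputs are dropped.

    Illustrative sketch only; the window length is a guess."""

    GRACE = 0.15  # seconds of grace before the attack ends (illustrative)

    def __init__(self):
        self.buffered = None

    def on_input(self, gesture, time_left_in_attack):
        if time_left_in_attack <= 0:
            return gesture              # attack already over: fire immediately
        if time_left_in_attack <= self.GRACE:
            self.buffered = gesture     # close enough: remember one input
        return None                     # too early: drop it, don't queue it

    def on_attack_end(self):
        gesture, self.buffered = self.buffered, None
        return gesture

buf = GraceWindowBuffer()
early = buf.on_input("swipe_left", 0.50)    # way too early: dropped (None)
late = buf.on_input("swipe_right", 0.10)    # buffered, returns None for now
queued = buf.on_attack_end()                # "swipe_right" fires as attack ends
```

Dropping early inputs instead of banking them keeps the avatar from ever lagging more than a fraction of a second behind the player’s body.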
Rhythm
Music games aren’t the only games with rhythm. I’m not talking pacing here. Even with excellent feedback, if the moment-to-moment combat doesn’t have a rhythm and order that resonates with the player on a deep level, it will be difficult to learn and create muscle memory for. This doesn’t just diminish the player’s feeling of mastery and create frustration – the rhythm of combat can be one of the big fundamental pleasures of a well-done combat system. Many types of games get some of their pleasure factor from this kind of moment-to-moment rhythm – how much more fun is a side-scrolling space shooter if the waves of enemies have a spatial/time pattern to them rather than just a smattering of scattered spaceships flying at you?
How boring would Super Mario World be if all the jumps and enemies were just chaotic noise (or for that matter, spaced in perfectly predictable patterns)? What if Megaman enemies didn’t fire at regular intervals?
Allow me to rave for a couple sentences about Batman: Arkham City. This is a game that could easily have been a button-masher – its attack controls are dead simple, and attacking is almost always the right thing to do. But you see, that doesn’t account for the fact that Rocksteady Studios is friggin’ smart and made rhythm the core of their combat. They’ve designed a bevy of mechanics that encourage the player to be patient and sparing with slamming the attack button – most notably, hitting the attack button only once per hit results in a hefty bonus to combo score. Cleverly minimal queuing and fluid avatar mobility both make it very simple to get into an edifying and effective rhythm of attacking. The tuning of enemy placement, enemy attack timing, and player attack responsiveness all encourages the player to attack neither too early (locking herself into an irreversible command that could result in taking a hit) nor too late (again possibly getting hit or losing an opportunity).
Ultimately rhythm tuning is our best hope of indirectly guiding the player to attacking with timing that we can build fun combat out of. We’ll need to invest some serious time into adjusting the timing properties of each phase of each of our attacks, as well as the depth and timing of how they blend or transition between one another. Excellent feedback is a prerequisite to all of this – if Patrick wasn’t our animator I think I would give up and become a plumber right about now =).
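As a concrete (and entirely hypothetical) illustration of what “timing properties of each phase” means, each attack might be described by independently tunable windup, active, and recovery durations – the numbers below are illustrative guesses, not our actual tuning values:

```python
from dataclasses import dataclass

@dataclass
class AttackTiming:
    """Per-phase durations for one attack, in seconds.

    Hypothetical sketch: names and numbers are illustrative only."""
    windup: float     # anticipation before the hit connects
    active: float     # the span where the attack can damage enemies
    recovery: float   # settling time before the avatar is ready again

    @property
    def total(self) -> float:
        return self.windup + self.active + self.recovery

    def ready_for_input(self, t: float) -> bool:
        # t is seconds since the attack started; the avatar signals
        # readiness once the active phase ends and recovery begins.
        return t >= self.windup + self.active

# Fast blade vs. heavy claw: same structure, very different feel.
blade_slash = AttackTiming(windup=0.12, active=0.08, recovery=0.25)
claw_smash  = AttackTiming(windup=0.35, active=0.15, recovery=0.50)
```

Tuning each phase separately is what lets the claw feel heavy (long windup and recovery) without making the blade feel sluggish.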
Never too early to playtest! This weekend we finished up our first pass on the new combat system, and we had about 8 kind folks in to spend some time swinging their arms, having some fun, and reminding us why motion control combat is hard to get right.
The core of our combat system revolves around 10 distinct but intuitive attack types that can be chained together to increase their power. These are the left/right/up/down/jab attacks which players can perform per arm. Left and right are “standard” attacks – fast for the blade in the right hand, slower for the left-handed power claw. Up, down, and jab each have more specific uses, which again diverge between the weapons, with the claw in general being slower and with greater damage/stun/knockback, and the blade being more quick-hitting and far-reaching.
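To make that divergence concrete, here’s a tuning table in the spirit of what we’re balancing – 5 gestures per arm, blade right, claw left. Every number here is made up for illustration; these are not our real balance values:

```python
# Hypothetical per-attack tuning: 5 gestures x 2 weapons = 10 attacks.
# speed is an animation-rate multiplier; all values are illustrative.
ATTACKS = {
    ("blade", "left"):  {"speed": 1.0, "damage": 1.0, "knockback": 0.2},
    ("blade", "right"): {"speed": 1.0, "damage": 1.0, "knockback": 0.2},
    ("blade", "up"):    {"speed": 0.9, "damage": 0.8, "knockback": 0.1},
    ("blade", "down"):  {"speed": 0.9, "damage": 1.2, "knockback": 0.3},
    ("blade", "jab"):   {"speed": 1.2, "damage": 0.6, "knockback": 0.0},
    ("claw",  "left"):  {"speed": 0.5, "damage": 2.0, "knockback": 1.0},
    ("claw",  "right"): {"speed": 0.5, "damage": 2.0, "knockback": 1.0},
    ("claw",  "up"):    {"speed": 0.4, "damage": 1.8, "knockback": 1.5},
    ("claw",  "down"):  {"speed": 0.4, "damage": 2.5, "knockback": 0.8},
    ("claw",  "jab"):   {"speed": 0.6, "damage": 1.5, "knockback": 2.0},
}

def chained_damage(weapon, gesture, chain_length):
    """Chained attacks gain power: +10% per link, capped at +50%.

    The scaling rule is a placeholder, purely for illustration."""
    base = ATTACKS[(weapon, gesture)]["damage"]
    return base * min(1.0 + 0.1 * chain_length, 1.5)
```

Keeping the whole attack set in one data table like this makes it easy to iterate on balance after every playtest without touching gameplay code.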
Actually being able to use these attacks functionally is a couple weeks off, with our new gesture detection system still in its infancy, but we had some extremely promising results with players quickly grokking the different actions they could take and experimenting with flowing them together.
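For a flavor of what a gesture detector like that has to do, here’s a naive sketch that classifies a hand’s velocity into one of the five attack gestures. Our real detector is considerably more involved (and every threshold here is a guess), but the core idea – dominant-axis classification with a minimum speed gate to reject idle motion – is the same:

```python
def classify_swipe(dx, dy, dz, min_speed=1.2):
    """Classify hand velocity (m/s, camera space: x right, y up,
    z toward the screen) into one of five attack gestures, or None.

    Naive illustrative sketch; thresholds are made up."""
    speed = (dx * dx + dy * dy + dz * dz) ** 0.5
    if speed < min_speed:
        return None                      # too slow to be an attack
    ax, ay, az = abs(dx), abs(dy), abs(dz)
    if az >= ax and az >= ay:
        return "jab"                     # dominant motion is forward
    if ax >= ay:
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"

# A fast rightward swipe, a downward chop, a forward punch, and idle drift:
swipes = [
    classify_swipe(2.0, 0.1, 0.3),    # "right"
    classify_swipe(0.1, -1.8, 0.2),   # "down"
    classify_swipe(0.2, 0.1, 2.5),    # "jab"
    classify_swipe(0.1, 0.1, 0.1),    # None: below the speed gate
]
```

The speed gate is what separates deliberate attacks from the player simply moving around – the main defense against “wild flailing” registering as input.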
That’s right – we’re busy, but never too busy to screw up our sleep schedules with a glorious 48 hours of game-developing mayhem! For this year’s Global Game Jam, the 4 of us teamed up with 2 other friends to make Ka-Chunk, a spinning 2-player head-to-head physics/block puzzler with borderline baffling mechanics and beautiful art.
Why were the mechanics so difficult for new players? Lack of playtesting, of course (the root of all evil)! This was an “integrate final systems and assets 30 minutes before ship time” type of game jam for us, largely because we massively underestimated the technical/design complexity of block manipulation game-feel. Live and learn (and iterate)!
One of the design challenges we’ve set out to tackle this semester is looking at a heavier, more powerful weapon – if you know hack-and-slash games, you know that also translates to a slower weapon, both for gameplay balance and to create a visceral sense of weight. The fast blade weapon from our first semester was designed to make our lives easier by being able to keep up with the players’ motions, regardless of how frenetic they get. The challenge of a slower weapon is to teach the player to wait, to move at the pace the weapon can support and “feel” its heft. With a solid understanding of player mindset and Kinect affordance under our belt, we feel ready to give it a try and introduce our off-hand weapon, the Power Claw (or whatever we end up calling it).
The blade remains in the player’s right hand, with the claw occupying the left. One of our goals with the distinct weapons in each of the two hands is to demonstrate that Kinect gestures can present a control scheme that gives instant access to a wider variety of attacks than any normal console controller could provide. Since the properties of claw attacks are wildly different from blade attacks, and each directional attack has its own purpose, players can immediately execute ~8-10 distinct attacks without even getting into combos. We’ll address our take on combos (which is also specifically catered to motion control, as you may imagine, since all of our designs are) in a later post.
Hack-and-slash fans may remember that the genre has a long legacy of super-powered slow-ish fist/claw weapons. Devil May Cry’s Gilgamesh, Bayonetta’s Durga (fire version), Space Marine’s Power Fist, Fallout (ok, not really a hack-and-slash), any boxing game… now we’re reaching. Which, incidentally, is another ability of the power claw! Given the behind-the-back camera mandated by our strict regimen of motion control, keeping the player from needing to turn much is a key goal, and the power claw’s attacks focus on crowd-control, stun, and battlefield mobility.
We’re back, ladies and gentlemen! It’s the second day of the new semester, and we’re beyond excited to be following up last semester’s successful tech demo with a deeper design exploration into motion control combat mechanics, finishing with a full playable mission!
We’ll keep you updated as we move forward, but first, some introductions are in order. Anthony Palma, our wonderful producer and one of our programmers, has set off on the grand journey of starting a company focused on combining his passions of comedy and games. We’re sad to see him go, but it’s a great opportunity for him, and we wish him all the best! Pei Hong Tan, our concept and texture artist, has stepped up to become our producer, leveraging his excellent organizational skills, selfless demeanor, and devastating martial arts prowess.
This also gave us the opportunity to welcome programmer Chenyang Xia to the team! His background in Unity, Kinect, 3d math and AI programming, as well as his natural design intuition, made him the natural pick, and we’re happy to have him aboard.
Thanks for all the interest that we’ve received – if we’re lucky and make smart enough choices, we hope to have a really special butt-kicking robot experience to give to the community by the end of the semester. Stay tuned!
After finishing up prototypes 5 and 6, the team moved directly into the final few weeks of the semester where they focused on playtesting the product, iterating based on feedback, and polishing the final demo as a whole. Informal playtests have been conducted all semester when smaller features were implemented, but since all prototypes were finished and integrated the team put together a full day of formal playtesting to see how the full experience felt to various individuals. The team received a lot of great feedback, but the two most overwhelming and impactful comments were along the lines of:
- “I can’t target any enemies and I’m having a hard time hitting them. I feel like I’m just slashing at the air in an arbitrary direction.”
- “When I hit an enemy, I pass through them and have to turn around. Could you maybe hit them back a little so they always stay in front of you?”
In response to these two pieces of feedback, we implemented a targeting system and improved enemy hit reactions so they actually stumble back and stay in front of you for a more manageable combo. The targeting system was something we had not planned to implement, but in response to the overwhelming amount of feedback regarding the lack of targeting we put in a left arm targeting system. When the player winds up for a slash with their right arm, they can use their left arm to point in the direction they would like to turn and aim, and when they get relatively centered on an enemy a “target” reticle pops up around that enemy and allows the player to slash directly at that enemy.
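The target-selection step can be sketched as a cone test: pick the enemy whose direction is closest to where the left arm points, and show the reticle only if something is within the cone. This is an illustrative simplification (the cone angle and names are hypothetical, and the real system works in the full 3D scene):

```python
import math

def pick_target(point_dir, enemies, max_angle_deg=15.0):
    """Return the enemy position best aligned with the left arm's
    pointing direction, or None if nothing is inside the reticle cone.

    point_dir: (x, z) direction on the ground plane.
    enemies: list of (x, z) positions relative to the player.
    Hypothetical sketch; the cone angle is a guess."""
    best, best_angle = None, max_angle_deg
    point_len = math.hypot(point_dir[0], point_dir[1])
    for pos in enemies:
        dist = math.hypot(pos[0], pos[1])
        if dist == 0 or point_len == 0:
            continue
        cos_a = (point_dir[0] * pos[0] + point_dir[1] * pos[1]) / (point_len * dist)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle <= best_angle:          # keep the best-aligned enemy so far
            best, best_angle = pos, angle
    return best

# Pointing straight ahead (+z): only the nearly-centered enemy gets the reticle.
target = pick_target((0.0, 1.0), [(3.0, 3.0), (0.2, 5.0), (-4.0, 1.0)])
```

Once `pick_target` returns an enemy, the slash can be aimed directly at it rather than at wherever the right arm happened to swing.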
Beyond the playtests, the team has also been busy polishing all existing elements of the final demo. Dismemberment has been added to the game to enhance the gratification of killing an enemy beyond simple ragdolling, and with this the sync kill now has scripted dismemberments that fit the slashes exactly and make the sync kill event sequence more rewarding.
Blocking has also been implemented so the player can choose to block enemy attacks when necessary. This is performed by a simple gesture (hands above head, in front of face), and this causes an energy shield to appear in front of the player and block any incoming damage from enemies.
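The pose check behind that gesture can be sketched from raw joint positions – both hands above the head joint and held out in front of the face. The 15 cm threshold and coordinate conventions below are illustrative assumptions, not our tuned values:

```python
def is_blocking(left_hand, right_hand, head):
    """Detect the block pose: both hands raised above the head and
    held in front of the face.

    Each argument is an (x, y, z) joint position in camera space
    (y up, z = distance from the sensor). Thresholds are illustrative."""
    for hand in (left_hand, right_hand):
        above = hand[1] > head[1]              # hand higher than the head joint
        in_front = hand[2] < head[2] - 0.15    # at least ~15 cm nearer the sensor
        if not (above and in_front):
            return False
    return True

head = (0.0, 1.60, 2.50)
guard_up = is_blocking((-0.20, 1.75, 2.25), (0.20, 1.75, 2.25), head)   # True
one_hand = is_blocking((-0.20, 1.40, 2.25), (0.20, 1.75, 2.25), head)   # False
```

Requiring both hands keeps the pose unambiguous – a single raised arm still reads as an up-swipe attack rather than a block.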
Finally, new textures have been added to the hero and enemy rigs that make them stand out against each other even more and help them telegraph their movements better. These textures are included in the above screenshots.
The team will be making a final push towards soft opening and finals to get in any last changes and polish elements before the semester is over.
The team has been very hard at work since halves presentations.
On the programming side, prototypes 5 and 6 are finished and the programmers are now working on polishing the existing material and adding in more content as planned. Prototype 5 added more blended slashes, and Adam wrote a Unity Editor script that mirrors animations to speed up the implementation process. More slashes are being added to the hero character to give the player variety in the way they attack, and the blending and math work continues. For prototype 6, Anthony implemented several environmental and special effects as well as all of the UI layer and the big payoff moment in the game, the Sync Kill. The Sync Kill is a quick time event (QTE) sequence in which the camera switches to a cinematic mode and the player performs prompted inputs, each one triggering the next part of the cutscene. When completed correctly, the end result is a very fluid set of three kills that the player feels like they performed themselves.
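The idea behind the mirroring tool can be sketched in a few lines: swap Left*/Right* joint names and negate the x component of each frame. This toy version operates on a plain dict for illustration only – the actual Unity Editor script works on AnimationClip curves, not raw poses:

```python
def mirror_pose(pose):
    """Mirror one animation frame across the character's center plane.

    pose: dict mapping joint name -> (x, y, z) local position.
    Toy sketch of the mirroring idea, not the real editor script."""
    def flip_name(name):
        # Left/Right joints trade places; center joints keep their name.
        if name.startswith("Left"):
            return "Right" + name[len("Left"):]
        if name.startswith("Right"):
            return "Left" + name[len("Right"):]
        return name

    # Negating x reflects the motion across the sagittal plane.
    return {flip_name(j): (-x, y, z) for j, (x, y, z) in pose.items()}

frame = {"LeftHand": (0.5, 1.2, 0.1), "Spine": (0.0, 1.0, 0.0)}
mirrored = mirror_pose(frame)   # LeftHand becomes RightHand, x flipped
```

Authoring one slash and mirroring it for the other arm roughly halves the animation workload for symmetric attacks.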
The artists have been diligently working on adding more animations, developing the UI elements, and working on effects for the world. Patrick has been busy creating particle effects for things such as wall smashes and hits on the enemy, and he has also been tuning the enemy animations and adding more variation to their behavior. Pei has been designing and painting all of the UI elements, including the Sync Kill gesture guides, the health and charge meters, and the combo counter.
Below are some examples of effects and things that have been implemented since halves!
1/2 semester presentations are today for Action in Motion and we’ve been incredibly busy preparing our demos for this presentation. We’ll be showing two demos today as well as a good bit of our finished artwork.
On the art side:
- Enemy Character is modeled, rigged, textured and animated
- Hero Character is modeled, rigged, textured, animated and has a cloak with a cloth sim and weapon trails on his wrist blades
- Environment is modeled, textured and has next-gen deferred rendering in place
On the programming side:
- Prototype 3 (Artificial Intelligence and Zeemote joystick integration) has been completed, and enemy AI now chases and swarms the player while looking for an open position from which to strike
- Prototype 4 (Per-bone blended attacks and combos) has also been completed and an in-to-out combo attack is possible with the right arm
The completion of Prototype 4 is very significant because the real meat of this project was getting this blending technology working between 1:1 captured Kinect input and pre-authored animations. We are very happy with the result thus far and will continue to refine our algorithms to make the blending as smooth as possible.
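The per-bone idea can be sketched as a weighted blend between the player’s captured pose and the authored attack pose, with an independent weight for each bone. This is a deliberate simplification with hypothetical names – the real system blends rotations properly (quaternions) and varies the weights over the attack’s phases:

```python
def blend_pose(captured, authored, weights):
    """Per-bone linear blend between the player's 1:1 captured pose
    and a pre-authored attack pose.

    captured/authored: dict of bone -> (x, y, z) values.
    weights: dict of bone -> blend factor in [0, 1]; 1.0 = fully
    authored. Simplified illustrative sketch, not the real blender."""
    out = {}
    for bone, cap in captured.items():
        w = weights.get(bone, 0.0)           # unlisted bones stay 1:1 captured
        auth = authored.get(bone, cap)
        out[bone] = tuple(c * (1.0 - w) + a * w for c, a in zip(cap, auth))
    return out

captured = {"RightArm": (0.0, 0.0, 0.0), "Spine": (0.0, 10.0, 0.0)}
authored = {"RightArm": (90.0, 0.0, 0.0), "Spine": (0.0, 0.0, 0.0)}
# The attacking arm mostly follows the authored slash; the spine
# stays locked to the player's actual body.
blended = blend_pose(captured, authored, {"RightArm": 0.75, "Spine": 0.0})
```

Giving each bone its own weight is what lets the avatar throw a crisp authored slash with one arm while the rest of the body still mirrors the player 1:1.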
And now for some artwork and screenshots!