Week 7-8: The final touches

This is polish week, and almost all of what we have logged as artifacts is now in the game itself. What this boils down to is that no new systems are being added at the moment; existing ones are instead tweaked, updated or made to shine just a bit more (sometimes literally).

What then could there be to discuss? Quite a bit, as those tweaks tend to be what takes a game from being acceptable to being either great or awful.

For instance, this week we chose to address the sluggish nature of some of our player movement in a more direct way than before: changing the actual game speed to run 20 percent faster. More specifically, we raised the tick rate, or update rate, of the game (http://www.nbnco.com.au/blog/entertainment/what-is-tick-rate-and-what-does-it-do.html).

As our game is a networked game, this is quite a different change from just altering the number of updates on the server side. The client needed to send updates at a somewhat higher rate as well, so as to account for lost packets when deciding how a player makes their next moves.
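
The game itself runs in Unity, but the underlying idea of a fixed tick can be sketched in plain C++. The 20 percent figure is from this post; the rates, names and structure below are purely illustrative, not our actual implementation.

#include <chrono>

// Fixed-tick loop sketch: raising the tick rate by 20 percent means a shorter
// fixed timestep, so both simulation steps and network sends happen more often.
void RunFixedTickLoop()
{
    using clock = std::chrono::steady_clock;

    const double oldTickRate = 20.0;               // ticks per second (illustrative)
    const double newTickRate = oldTickRate * 1.2;  // 20 percent faster, as in the post
    const double dt = 1.0 / newTickRate;           // seconds per tick

    double accumulator = 0.0;
    auto previous = clock::now();

    bool running = true;
    while (running)
    {
        const auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        while (accumulator >= dt)
        {
            // UpdateSimulation(dt);  // advance the game state by one tick
            // SendClientInput();     // the client also sends its input more often
            accumulator -= dt;
        }

        // Render();  // rendering can run at its own pace
    }
}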

After the actual programming was completed, testing began to see whether players felt the game was more or less responsive. In general the change was seen as a positive one, but a slight issue cropped up with animations, as they too were locked to the tick rate (a possibility only because of the small number of packets being sent between updates). This should never be done in a game with real-time changes during gameplay or with more than a handful of players.

The issue was that the animations would play a bit too fast and felt “jittery” to players, who barely had a chance to see the effects. Luckily our old friend the state machine came along to help, or in other words, the Unity Animator. Changing the animation speed itself meant the attack could be tweaked to play a few percent slower without affecting how other models behaved as they collided with one another, especially during attacks. While this essentially “hid” the fact that attack effects are not tied directly to animations, it still looked close enough at the angle seen by our cameras.
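
One possible way to derive such a slowdown is to scale playback speed by the ratio of the old and new tick rates. In the project this was just a tweak of the Animator's speed value for the attack state, tuned by eye; the sketch below is only an illustration of the arithmetic, not the exact numbers we used.

// Sketch: if ticks now arrive 20 percent faster, scaling the clip's playback
// speed by oldTickRate / newTickRate brings the visible result back towards
// what it looked like before the change.
double CompensatedAnimationSpeed(double baseSpeed, double oldTickRate, double newTickRate)
{
    return baseSpeed * (oldTickRate / newTickRate);
}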

 


Week 6: Beta and the re-design of UI systems

The pre-beta playtest has come and gone, yielding a number of reviews with information we were mostly already aware of. However, this was still a critical event, as it confirmed the need to re-design the game in some key areas, including the very system we use to move our players and let them take actions. This blog post will thus detail the design work done before the coming feature freeze, rather than the producer side of handling the beta process itself.

At the inception of our movement system, players were required to commit to certain actions at the cost of action points. Once they had completed a set of moves, they were asked to confirm them and then end their turn. In essence, this meant that players who had no trouble deciding on their actions were still left with unfinished turns, as they either missed the extra confirmation button or forgot they needed to commit. The system existed in the first place as a placeholder for future development, but at the core of the issue was the fact that players could take very different actions, moving and attacking out of order if they so desired.

While this gave players more freedom in terms of strategy, that freedom was still limited by the fact that we were not giving them enough points to do anything beyond a short set of moves or a quick burst of attacks. One solution could have been to give players more points, but that would not only affect game balance but also lengthen the planning stage, as more actions would require more time to think through. This would have been fine in a game that was not multiplayer-based, or not aiming for a shorter playtime, but not for what our game was going for.

So instead, the lead developer and I adopted a system akin to the one employed in the new XCOM. Here, players can move 4 squares in any direction, or choose to move 8 and forfeit the chance to attack. Players can no longer attack more than once, and must instead use positioning to gain an advantage in future rounds when taking down enemies with greater health. This also meant that players no longer got “stuck” near throngs of enemies without recourse, but could instead run away from their plight (possibly hiding behind the other players, lending another source of paranoia).
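
As a rough sketch of that rule in code: the 4- and 8-square limits come from the design above, while the names and structure are illustrative and not our actual Unity implementation.

// One planned turn: how far the player has moved and whether they have attacked.
struct TurnPlan
{
    int squaresMoved = 0;
    bool hasAttacked = false;
};

// Moving is capped at 4 squares if the player wants to keep (or has used) their
// attack, and at 8 squares if they give the attack up.
bool CanMove(const TurnPlan& plan, int squares, bool keepAttack)
{
    const int moveLimit = (plan.hasAttacked || keepAttack) ? 4 : 8;
    return plan.squaresMoved + squares <= moveLimit;
}

// Only one attack per turn, and only if the player stayed within the short move range.
bool CanAttack(const TurnPlan& plan)
{
    return !plan.hasAttacked && plan.squaresMoved <= 4;
}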

At the same time, we removed the need to “confirm” a turn, saving each choice as it was made, so that if a player got stuck pondering their next move and their timer ran out, they would not lose their intended “progress”. This cut playtime down by 5 minutes and produced less frustration among playtesters.

(Image: Client UI current)


Week 5: First Animation (ever)

This week saw me taking my first steps into animation programming, working out how to animate a character according to its input and the speed it was travelling.

Unity’s animation solution is a new type of challenge, but not entirely. It is mainly built up from state machines (https://www.techopedia.com/definition/16447/state-machine), something a programmer might be more familiar with than an artist. What these boil down to are devices or programs that produce a certain output depending on their input, in this case calls from the Unity engine such as triggers, set values and Boolean variables. These start pre-made animations that artists have created beforehand, which can be played at variable speed and merged into other animations via Blend Trees (https://docs.unity3d.com/Manual/class-BlendTree.html). These state machines are called Animator Controllers in Unity, an example of which can be seen below.

(Image: AnimationController)

Arrows are drawn between the animations, indicating which animations can transition into which. These transitions are given the aforementioned triggers or requirements, unless you want an avatar to flow straight into the next animation as soon as the previous one finishes playing, without delay or prompting.

Here we come to my first assignment and the work I needed to complete: making our character walk towards a target tile and animating that walk. Setting up the state machine itself was as easy as creating an Animator Controller via Unity’s automated process, then creating two animation states: Idle and Walking. At this point it did not need to check whether the player was moving at higher speeds, so all the controller had to check was whether the player was moving at all.
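
In Unity this controller is assembled visually in the Animator window, but the logic it encodes is tiny. A code-level equivalent of the two states and their single condition might look like the following (names are illustrative):

// The two animation states and the one flag that drives the transition between them.
enum class AnimState { Idle, Walking };

AnimState NextState(AnimState /*current*/, bool isWalking)
{
    // With only two states and one condition, the "transition table" collapses
    // to a single check: Walking while the flag is set, Idle otherwise.
    return isWalking ? AnimState::Walking : AnimState::Idle;
}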

This, however, became a slight problem, because the player was often told to move via lerping, in other words not directly to a point but in increments based on how many ticks a process had accrued. A walking animation would therefore look jerky and “jumpy” if driven only by that data, with the character pivoting on its axis to move to new areas whenever it was given a command to move in anything but a straight line.

(Image: Deltapos)

The solution was to not use the lerped movement directly, but its magnitude, that is, determining whether a character was truly moving rather than simply being adjusted towards a point. The rest was a simple if/else check to set the attached Animator to walk or not, since high-speed movement, as mentioned, was not requested at that point. The result was better, but not perfect: we would eventually need to put a delay on the tick rate to ensure the animations played to completion as they should. (GIF of this incoming at a later date.)
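
In code, the check boils down to looking at the length of the change in position between updates rather than at the lerp target itself. A small C++-flavoured sketch of the idea (in Unity this would be the magnitude of the difference between two positions; the threshold value here is an assumption):

#include <cmath>

struct Vec3 { float x, y, z; };

// True only if the character has actually moved further than a small threshold
// since the last update; tiny corrective adjustments stay below it and keep Idle.
bool IsActuallyMoving(const Vec3& previousPos, const Vec3& currentPos, float threshold = 0.001f)
{
    const float dx = currentPos.x - previousPos.x;
    const float dy = currentPos.y - previousPos.y;
    const float dz = currentPos.z - previousPos.z;

    const float magnitude = std::sqrt(dx * dx + dy * dy + dz * dz);
    return magnitude > threshold;
}

// The animator flag is then a simple if/else, e.g.
// animator.SetWalking(IsActuallyMoving(lastPosition, currentPosition));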

 


Week 4: Actual UI implementation, Alpha Presentation

The Alpha has come and gone, its presentation over. Feedback mainly centred around the difficulty of determining which player is where and doing what, indicating our placeholder art needs to be replaced ASAP. But that would not make a very good blog post, so instead this one covers the project UI and where it obtains its statistics. Remember that all art here is placeholder for the moment, meant only to represent what each player should see, not what they WILL see.

The UI itself would not make for a very interesting post either: the main health bars of a player on the game client are placed in a Unity Canvas, with a masked bar moving from left to right across the screen to indicate a change in health. The bar’s position is determined by the screen size and its length by the amount of HP, so characters with greater health get appropriately longer bars. What is interesting is how the entities in a multiplayer game are created, and how their data is obtained and sent to the controller of the bars themselves.
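
As a rough sketch of that bar maths (the anchoring factors and the pixels-per-HP value are assumptions for illustration, not the project's actual numbers):

struct BarLayout
{
    float x, y;       // screen position in pixels
    float width;      // full bar width in pixels
    float fillWidth;  // unmasked portion showing current health
};

BarLayout LayoutHealthBar(float screenWidth, float screenHeight,
                          int currentHP, int maxHP, float pixelsPerHP = 4.0f)
{
    BarLayout bar;
    bar.x = screenWidth * 0.05f;     // position derived from the screen size
    bar.y = screenHeight * 0.90f;
    bar.width = maxHP * pixelsPerHP; // longer bars for characters with greater health
    bar.fillWidth = bar.width * (static_cast<float>(currentHP) / static_cast<float>(maxHP));
    return bar;
}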

(Image: Entity Blog Code)

As seen in the code above, each entity is defined with certain characteristics that need to be instantly accessible: whether it is dead or alive, whether it is controlled by a player, what its maximum action points and health are, and whether it is affected by any status effects. Beyond the EntityData struct, which provides temporary values, these coincidentally are all the values we need to depict on screen at the start of the game, sans the Initiative for movement, which is chosen after the player is created. Both players and enemies have their data pre-defined in JSON files, loaded on creation of each scene, as they are not meant to change between sessions.
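
The actual definition lives in the code shown in the image above; purely as a reconstruction of the fields listed in this paragraph, it amounts to roughly the following (sketched in C++ here, while the project itself uses C#; every name is illustrative):

#include <vector>

// Temporary, per-session values.
struct EntityData
{
    int currentHealth;
    int currentActionPoints;
};

// The characteristics that need to be instantly accessible for each entity.
struct Entity
{
    bool isAlive;
    bool isPlayerControlled;
    int  maxHealth;
    int  maxActionPoints;
    std::vector<int> statusEffects;  // any active status effects
    EntityData data;                 // temporary values, as described above
};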

And here is an early revision of the game UI, sans current player values.

(Image: ClientUI.png)

Not much to look at, as mentioned. The entity data itself is located on the server, however, not the client. That data therefore needs to be updated via the network: each update contains a list of entities, both players and non-players, as well as data on which playerID the active characters have. All health and action values are altered on the server, not the client, so these numbers are forwarded through server updates when applicable.

Before that can happen, the UI itself is instantiated, then updated as needed. If the playerID given at connection and creation of the player entity is the same as the one being updated, the client recognizes this and either sets the player's bars to their correct numbers, or creates the UI if it does not yet exist.
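
A sketch of that flow is shown below; the function and type names are illustrative stand-ins, not the project's actual API, and the real client is a Unity/C# implementation.

#include <vector>

// Hypothetical stand-in for the client's bar controller.
struct PlayerUI
{
    bool created = false;
    void Create(int /*maxHealth*/, int /*maxActionPoints*/) { created = true; }
    void SetHealth(int /*current*/, int /*max*/) {}
    void SetActionPoints(int /*current*/) {}
};

// The per-entity values a server update carries for the UI.
struct EntityUpdate
{
    int playerID;
    int currentHealth;
    int maxHealth;
    int currentActionPoints;
};

// Entities arrive in a server update; the client matches its own playerID,
// then either creates the UI (first time) or refreshes the existing bars.
void OnServerUpdate(const std::vector<EntityUpdate>& entities, int localPlayerID, PlayerUI& ui)
{
    for (const EntityUpdate& e : entities)
    {
        if (e.playerID != localPlayerID)
            continue;  // only the local player's bars live on this client

        if (!ui.created)
            ui.Create(e.maxHealth, e.currentActionPoints);

        ui.SetHealth(e.currentHealth, e.maxHealth);
        ui.SetActionPoints(e.currentActionPoints);
    }
}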

 

(Image: PlayerUI Updates.png)

This data is not the only instance being accessed of course, given that this is a multiplayer game, but this represents our solution for obtaining player data for a single entity.

Next week’s post should see information on how the Unity Animator tree works and early iterations of how our group got it running.

 


Week 1 and 2 of Big Game Project: Production and Design

These two weeks saw the start of a project that is to span 8 weeks of development, where I will be taking the role of Producer along with part-time Designer and Programmer. The game being made is an isometric, tactical, turn-based 3D game. Secret agendas are a main mechanic, wherein players are given secret objectives to complete that include betraying other players and killing them before reaching the end game. In short, the mechanics involve moving and attacking enemies and allies, the dynamics of which introduce planning where and when to attack to support or divert the group, leading to paranoia over whether player information can be trusted as our aesthetic.

First of all came the development of our Project Plan ( https://drive.google.com/open?id=1F39-gTY6bFOfjBnSBnTLX6IpsU3q6ZPePITMo1iKRyI ), Trello Scrum Document (https://trello.com/b/KHPsWZYo ) and Design Document ( https://drive.google.com/open?id=0B_-RqKRH1s5yakJWNk4zWkNtT0E ), where I was responsible for the main writing of all three.

Our programmers meanwhile developed our first prototypes for game movement, server and client connections and our main input mode, so as to allow us to test the game as early as possible. To prepare for this I organized several testing sessions for mechanics that were to be included in the alpha version of the game, including initiative order for characters on the field, separating enemy and player turns as well as player endurance and difficulty.

Iterative testing was used to produce each specific set of content, in this case the mechanics and how they are introduced. A paper prototype (image for this is unfortunately missing) was used before the main game prototype was completed, consisting of a single room with base characters whose abilities were similar to the ones imagined for the final game. Iterations were tested by group members during the first week and by people outside the group in the second, to see if the implementations would be easy to understand.

 

 

(Image linked from Gamasutra, Making Better Games Through Iteration)

First to be tested was initiative order, as earlier testing had already produced the attack and movement designs. The initial test used set initiative values between 1 and 5 for each player and enemy, with an average of 2.5 out of 5 for each enemy to give them a predictable pattern of movement. The effect, however, was that most games ended in the players' favour, as they would nearly always go first. The second attempt was to raise the enemy initiative by a point, to put the players on the defensive in the first few turns. However, this resulted in an insurmountable increase in difficulty when playing against a concerted effort by the player representing the game's AI. With this disadvantage it was too difficult for players to communicate their intentions properly and gather together to defeat enemies.

Due to this, a mechanic was introduced that let players “bid” initiative cards, from 1 to 5 as before, splitting the player and enemy turns so that the initiative value influences the dynamic between players rather than being aimed only at the enemy. This meant that with proper planning and co-operation, players had the ability to beat most foes as long as their communication was honest, which in turn meant objectives that pushed for betrayal had to be watched out for.

While this seemed to solve our problems, it created new ones: players can choose the same initiative, and we need to resolve who goes first in that case. Since it has not been tested on a screen yet, it may also prove to be a concept that is difficult to grasp, or one that takes up too much time or space in play compared to the automatic solution of set values from before.

This testing proceeded with single changes between iterations for player health, the larger multi-room campaign the game is intended for, and the character class abilities yet to be implemented. Each alteration was written down after evaluation, so every group member could read the explanation for each change.

Next week should partially move away from testing and move towards Programming and Producer solutions.

Tag : 5SD037, BGP


Week 3 of Big Game Project: Early UI and what to show the player.

Beyond more production and design work, migration of systems from the prototype to an early pre-alpha build began. Since systems were no longer being tested only by internal testers, the game required a UI on both the server and client screens displaying statistics like player health, initiative number and which attacks are selected. The transfer also led to using the actual equipment that will appear on the GGC floor, where the UI displayed differently and would require different solutions.

(Image: Blog Post UI-LESS)

Above is one of the earliest server images with actual placeholder art produced by the group I was part of. It lacks almost all UI elements except for what we were testing at that very moment, namely connectivity. Players only needed to know whether they were connected, to show whether the server would keep a stable connection and to test the new hardware; our clients had similar designs.

As the game became populated by player and enemy entities that were allowed to commit to actions, decisions had to be made about what information to show and when. This was not only because users require consistency and ease of use (https://www.interaction-design.org/literature/article/user-interface-design-guidelines-10-rules-of-thumb), but because our game is focused on betrayal and paranoia, and as designers we did not wish to give away too much about one player's choices to the others. Our first decision regarding this concerned initiative.

In the first draft, each user could see the other users' choices as they happened, with the initiative order listed as it changed. However, players responded with irritation at not being able to hide their actions behind such freely available information: if that visibility were removed, they would be free to tell other players they were doing one thing at one initiative step, but do another once they actually committed to an action. This change was implemented, partially inspired by Diplomacy, a more text- and speech-focused game played as much outside the game itself as within it.

However, initiative order was still a useful tool for explaining to players who did what, so the order was kept, only now updated post-turn. That is, after a turn was completed, the initiative order was displayed on the server screen as a summary players could scan, giving them an indication of what initiative they had left to choose and letting them judge whether a user's proposed actions were true to their words. At this stage, most UI elements on the client side related to movement, attacks and statistics the player could affect directly, whereas “passive” elements such as timers, previous-turn information and enemy placement were kept on the server.

TAGS: BGP, 5SD037


5SD046 Knytt Assignment

As part of our course in Advanced Game Design, this blog will detail my work on creating a Knytt level, both the parts I worked on specifically and the ones I collaborated on.

Our first day had our group of three brainstorming ideas for the theme of the level, as well as the story being told. Our assignment prohibited us from using text to communicate with the player, so our theme would need to be simple given the timeframe (2 weeks).

Our first three suggestions were as follows: a literal journey up to the top of a mountain within a jungle, reaching the end of a space station symbolizing a being's birth and development into adulthood, and exploring a person's mind on the hunt for lost memories.

The last idea was discarded early on due to limited familiarity with the game's editor: how would this idea be communicated to the player without words, and how would the memories be represented in the given timeframe?

Our remaining two ideas were mashed together to create a narrative with a twist: the player starts out in a jungle area that slowly transforms as they move deeper into it, making them realize they have been in the hydroponics area of a space station all along. Outside the station a planet and a rocket can be spotted, giving them something to strive for: an escape from the predicament they find themselves in.

Most movement would go towards the right, as there was not enough time to produce a truly sprawling level, and a gamer's instinct in a platformer is to move from left to right. The level itself would mainly take the form of a snake, taking the player forwards and upwards until it dropped down swiftly, giving the player one final climb before the climax of the level. The path would then taper out, moving forwards along a docking bay towards a rocket that ends the level as the player climbs aboard.

What was lacking this early on was the player's motivation or goal: what were they supposed to strive towards, and how would we communicate the mystery of the jungle to the player? That was to come in a later update, but the general idea was agreed upon here.

The next post should detail how the level's framework was designed, and how work assignments were handed out amongst a group of three when only one level can be worked on at a time.

 

 


5SD033 Tutorials

(Image: Badly Oriented Tutorial Buttons)

This week has mostly seen smaller fixes in our codebase, as we are essentially feature-frozen. However, as feedback has continuously asked for better explanations in-game of how to actually play the game, this post will detail how I implemented our TutorialButton class, which displays a sprite on screen and removes it once the player presses the appropriate button. It is not a hard lesson, but there really is not much else to speak of this week, as pretty much all important systems are already in the game or being tweaked by other members of our team.

Classes all on their own

We needed a tutorial, but we did not exactly have the time to create a proper manager for it, as that time was needed to apply final fixes to unused code, the high score and so on. We did not, however, want more hardcoded entries in the game state, so I decided to make the buttons that appear their own class, not functions in the state itself.

The object would need a sprite (not a pointer to one, as each button is different), an assigned key, a position on screen and a rotation, all set as constructor arguments (since the objects always appear at the same position, are only ever created once, and should not be re-used after the tutorial portion is over).

After that, a bool that keeps track of whether the button should be shown is set to true in the constructor. The object's Draw function is run in our game state's draw pass and only draws the sprite if the aforementioned bool is true.
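
As a minimal sketch of what such a class can look like, assuming SFML: getShown and setShown match the snippet further down, while the remaining names are illustrative rather than our exact code.

#include <SFML/Graphics.hpp>

class TutorialButton
{
public:
    TutorialButton(const sf::Sprite& sprite, sf::Keyboard::Key key,
                   const sf::Vector2f& position, float rotation)
        : m_sprite(sprite), m_key(key), m_shown(true)
    {
        m_sprite.setPosition(position);
        m_sprite.setRotation(rotation);
    }

    bool getShown() const { return m_shown; }
    void setShown(bool shown) { m_shown = shown; }

    // Called from the game state's draw pass; only draws while the hint is active.
    void Draw(sf::RenderWindow& window) const
    {
        if (m_shown)
        {
            window.draw(m_sprite);
        }
    }

private:
    sf::Sprite m_sprite;      // each button owns its own sprite copy
    sf::Keyboard::Key m_key;  // the key this hint is teaching
    bool m_shown;             // whether the sprite is still visible
};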

Quick teaching

To get the button to stop showing up on the player's screen, the SFML event loop is checked for KeyPressed events, comparing the pressed key against the assigned key (or simply setting the button's bool directly), as seen in the case below:

case sf::Event::KeyPressed:
    switch (e.key.code)
    {
    case sf::Keyboard::Escape:
        marketValue = 0;
        break;
    case sf::Keyboard::Space:
        // Hide the Space hint the first time the player uses the key.
        if (spaceButton->getShown() == true)
        {
            spaceButton->setShown(false);
        }
        turretManager.AddTurret(player.getPosition(), player.getRotation(), marketValue);
        break;
    case sf::Keyboard::A:
        if (aButton->getShown() == true)
        {
            aButton->setShown(false);
        }
        break;
    case sf::Keyboard::D:
        if (dButton->getShown() == true)
        {
            dButton->setShown(false);
        }
        break;
    default:
        break;
    }
    break;

As seen above, since there is no need to keep setting the value to false after it has already been set once, I check whether it has already been set via a getter function that returns the appropriate bool value.

Because of the limited scope of these objects, they may as well be created as pointers and deleted when their use is over.

This will be my last blog post for this course, and it has been quite a learning experience. If this is the final blog post of mine you are reviewing, please do read some of my older posts on subjects like enemy tracking and movement, high score lists and screen-clearing effects.


5SD033 Benefits of pre-loaded sprites

Most work this week has gone into optimizing our game, with few new additions beyond another projectile. However, as the new projectile is mostly a re-tread of older projectile types and the EMP class mashed together, I instead chose to make this blog post about the benefits of loading sprites for projectiles before they are used rather than when they are created.

 

Crafting Bullets

When we first started making our projectile class, we had it hold its own sprite, loaded in the projectile's constructor, and let other parts of the program, like the enemy manager class, query it for the sprite's size and position. This made sense, as we could directly influence each sprite when required, and the prototype we created did not require fast loading to test our ideas.

 

 

(Image: projectiles. When loaded for each instance of the projectile class, this could result in between 20 and 40 different instances of the same projectiles.)

Making a Mold

However, in the beta phase of development, when we had added more features and larger waves of enemies, slowdown was noticed on older and less efficient computers, requiring us to lower the amount of memory being used at any one time. Enemies were altered, and I was assigned the job of changing how projectiles were loaded to reduce memory use.

To resolve this issue, texture loading was moved into our manager class instead of each projectile, with a reference passed as an argument to the projectiles as they were created. Since projectile sprites were never altered beyond animations, this meant we could now load only 3-4 different textures (depending on the final projectile count), share their loaded images and only assign positions to each projectile separately. Depending on what projectile was required, a switch was used to craft each one with the settings it needed and push it into a vector of projectiles. Working as intended, this should lower memory requirements by almost nine tenths, and would allow us to modify the game to handle many more “waves” of enemies and turrets to fire at them.


switch (type)
{
case PROJ_BULLET:
    flash.setOrigin(38, 146);
    muzzleFlash.push_back(flash);
    projectiles.push_back(new Projectile(pos, angle, projectileTexture));
    shotSFX.setPitch(rand() % 3 * 0.1f + 1);   // slight random pitch variation
    shotSFX.play();
    break;
case PROJ_MISSILE:
    flash.setOrigin(38, 140);
    muzzleFlash.push_back(flash);
    missileSFX.setPitch(rand() % 3 * 0.1f + 1);
    missileSFX.play();
    projectiles.push_back(new Missile(pos, angle, enemyManager, projectileTexture));
    break;
case PROJ_EMP:
    EMPSFX.play();
    projectiles.push_back(new EMP(pos, angle, enemyManager, projectileTexture));
    break;
case PROJ_FREEZE:
    projectiles.push_back(new FreezeShot(pos, angle, enemyManager, projectileTexture));
    break;
case PROJ_FREEZPLOSION:
    projectiles.push_back(new FreezeExplosion(pos, angle, enemyManager, projectileTexture));
    break;   // missing break added so this case no longer falls through into default
default:
    break;
}

Projectiles themselves now only store their collision information, type, the time they have existed and their “strength”, the value the enemy manager class subtracts from an enemy when it notices a collision between a projectile and that enemy. All other interaction is handled by their manager class, which destroys each projectile when it hits an enemy or has existed for too long (i.e. is out of bounds).
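
As a sketch of what that leaves in each projectile, and of the manager's cleanup pass (member and method names here are illustrative stand-ins, not our actual class layout):

#include <vector>
#include <SFML/Graphics.hpp>

// What each projectile now carries.
struct ProjectileData
{
    sf::FloatRect collisionBox;  // collision information
    int   type;                  // PROJ_BULLET, PROJ_MISSILE, ...
    float lifetime;              // seconds the projectile has existed
    int   strength;              // subtracted by the enemy manager on a hit
    bool  hasHit = false;        // set by the enemy manager when a collision is found
};

// Manager-side cleanup: destroy projectiles that hit something or lived too long.
void CleanupProjectiles(std::vector<ProjectileData*>& projectiles, float maxLifetime)
{
    for (auto it = projectiles.begin(); it != projectiles.end(); )
    {
        ProjectileData* p = *it;
        if (p->hasHit || p->lifetime > maxLifetime)  // hit an enemy or out of bounds
        {
            delete p;                                // raw pointers, matching the switch above
            it = projectiles.erase(it);
        }
        else
        {
            ++it;
        }
    }
}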

 
