In this blog I will go over several of the most prominent design patterns used in game engine programming. I will not cover why they are considered good or bad; just their functionality. Design patterns are common code structures that can help streamline engine code and make it easier to use, not to mention more efficient. I will be going over the most important: singleton, factory, facade, and finite state machine.
A singleton is a very simple design pattern, but it can be very beneficial for managing what would otherwise be global variables. A singleton class may only ever have one instance at a time. It holds a single static pointer to that instance, and the only function it needs is a getInstance() (or Get()) function to retrieve it. Singletons are useful anywhere you need exactly one object of a type, such as a renderer, mouse input, etc.
class Singleton
{
public:
    static Singleton* getInstance();
private:
    Singleton(); // private constructor: outside code cannot create instances
    static Singleton* s_instance;
};
This simple class declaration shows the getInstance() function, which retrieves s_instance; the constructor is kept private so that nothing outside the class can ever create a second instance.
Singleton* Singleton::s_instance = nullptr; // the static member must be defined once

Singleton* Singleton::getInstance()
{
    if (!s_instance)
        s_instance = new Singleton();
    return s_instance;
}
This is the getInstance() function: it lazily creates the instance on first use and simply returns it on every call after that.
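Typical usage then looks like this; doSomething() is just a hypothetical member standing in for whatever your singleton actually does:

Singleton::getInstance()->doSomething(); // first call allocates the instance, later calls reuse it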
A factory relies on polymorphism to accommodate as many object types as you need. It essentially functions as its name implies: a factory can create multiple objects of various types based on the parameters you give it. Through polymorphism it is able to create different object types which inherit certain common attributes. Factories are great because you can easily add new types of objects for it to make, and just have them inherit most of the values, then include their specific ones.
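Before the factory itself, here is a minimal sketch of the supporting types it assumes; the names are mine, chosen to match the snippet below:

enum EntityType { CUBE, SPHERE }; // type tags the factory switches on

class Entity // common base class: the factory returns everything through this interface
{
public:
    virtual ~Entity() {}
    virtual void draw() = 0; // each shape draws itself
};

class Cube : public Entity
{
public:
    void draw() { /* render a cube */ }
};

class Sphere : public Entity
{
public:
    void draw() { /* render a sphere */ }
};

class EntityFactory
{
public:
    Entity *createObject(int type);
};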
Entity *EntityFactory::createObject(int type)
{
    Entity *object = nullptr;
    switch (type)
    {
    case CUBE:
        object = new Cube();
        break;
    case SPHERE:
        object = new Sphere();
        break;
    }
    return object;
}
This code starts with a null Entity pointer, then assigns it a new object of whichever type you specify. The object is returned as a cube/sphere etc. through the common Entity interface. This can be done as many times as needed for any types of objects that need to be created (world objects, NPCs, items, etc).
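Hypothetical usage of the factory, to make the dispatch concrete:

EntityFactory factory;
Entity *e = factory.createObject(CUBE); // returns a Cube through the Entity interface
e->draw();                              // virtual dispatch calls Cube::draw()
delete e;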
A finite state machine is often used for AI but can be applied to many different areas of a game engine. It keeps track of N states that can be switched between or turned on and off. For example you could use it to manage which types of enemies are spawning in a particular level (each type of enemy would have spawning on or off). It is not limited to two states either; a variable can move through any number of states that you can represent with an object. FSMs are useful because each state can inherit the necessary functionality and data, making FSMs easy to extend and modify.
// State, setName(), and the NORMAL/STRONG/BOSS constants are assumed
// to be declared elsewhere.
State *StateManager::createState(int enemy)
{
    State *enemyType = new State();
    switch (enemy)
    {
    case NORMAL:
        enemyType->setName("normal");
        break;
    case STRONG:
        enemyType->setName("strong");
        break;
    case BOSS:
        enemyType->setName("boss");
        break;
    }
    return enemyType;
}
The above code is a simple three-way state switch which determines which type the current enemy will be. Through polymorphism, this code could be used to set the type of any class of enemy in the game. For example you could set a soldier to normal type and set an assassin to boss type.
A facade is similar to a factory but it does not actually create new objects; it manages them instead. It works like a manager at a company: overseeing and organizing all the 'workers' or classes underneath it. These can be singletons such as various subsystems in the engine (audio, input etc). The facade pattern is used to allow each of these components to communicate with each other more easily. Doing so allows the structure of the engine to become more modular and it becomes much easier to add/remove/modify specific components of it.
class ClassManager
{
public:
    ClassManager *getClass();
protected:
    ClassManager();
    ClassManager *getClassType();
    ClassManager(const ClassManager &); // copy constructor kept protected
private:
    static Class *audioClass;
    static Class *inputClass;
    static Class *managerClass;
};
This is simple setup code for a facade-type manager class. It can get the name or type of a class and contains the set of singleton classes that it will be managing.
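As a sketch of how such a manager mediates its subsystems, one facade call can drive all of them; update() here is a hypothetical method, not declared above:

void ClassManager::updateAll(float dt)
{
    // update() is an assumed subsystem method, purely for illustration.
    audioClass->update(dt);   // keep audio in sync
    inputClass->update(dt);   // poll devices
    managerClass->update(dt); // tick the remaining subsystems
}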
There are many other design patterns that I didn't cover, such as iterators and observers, but they are also fairly simple and should definitely be used in your game engine if they will improve its efficiency and ease of use. Design patterns are patterns because they are easily repeatable and can greatly enhance the modularity of your engine.
Tuesday, December 3
Simple Shader Tutorial
This blog will take you through the use of shaders in TwoLoc from basic setup to your first working shader (I'm learning as I teach!). It is based on the shader tutorial created by Saad Khattak. The first step is to ensure your base GLSL file is set up properly and ready to use. From there you can actually begin writing the vertex and fragment shaders.
Before you get started with any code, let's make sure to set up our resources first. You'll need to make a few different files with different extensions. To start, set up a new .material file in the resources folder. Edit it with Notepad as follows:
vertex_program infr3110vs glsl
{
    source infr3110vs.glsl
    default_params
    {
        param_named lightPos float4 1.0 1.0 1.0 1.0
    }
}
fragment_program infr3110fs glsl
{
    source infr3110fs.glsl
}
material matCrate
{
    technique
    {
        pass
        {
            vertex_program_ref infr3110vs
            {
            }
            fragment_program_ref infr3110fs
            {
                param_named diffuseMap int 0
            }
            texture_unit
            {
                texture CrateTexture.jpg 2d
                tex_coord_set 0
            }
        }
    }
}
This may seem like a lot of stuff, but it's really quite simple. The first block just sets the name of the vertex shader (the fragment shader follows a bit lower down) for the program to look for. Next it declares a default light position at the specified coordinates; this is important for later when you calculate the lighting. Finally, a texture unit is created, which tells the program that CrateTexture will be placed onto texture coordinate set 0.
In your .cpp file, make sure to tell the program the paths for your resource files, or they will fail to load at all! Also now that you're starting to work on the shaders, keep in mind that if anything doesn't work properly, Ogre will take over and its default shaders will be displayed instead of your own. This is actually a handy way to determine if everything is working as it should.
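TwoLoc wraps the details, but underneath it comes down to registering Ogre resource locations, roughly like this (the path and group name are placeholders for your own setup):

// Tell Ogre where to find the .material, .glsl, and texture files.
Ogre::ResourceGroupManager::getSingleton().addResourceLocation(
    "../resources", "FileSystem", "General");
Ogre::ResourceGroupManager::getSingleton().initialiseAllResourceGroups();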
Here goes the vertex shader (in GLSL).
varying vec3 normal;
varying vec3 vertToLightVec;
varying vec2 UV;
uniform vec4 lightPos;
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    normal = gl_NormalMatrix * gl_Normal;
The 'varying' modifier defines a variable that is written per vertex and then interpolated on its way to the fragment shader. This means every vertex the shader processes will have its own normal vector, vector to the light, and UVs. 'uniform', on the other hand, means the variable is set once by the application and stays the same across the whole draw call; in this case the light position we added from the .material file.
Getting into the main function, you must always remember that shaders work with several different matrices, and it is important to use the right one. To place each vertex on screen, we multiply it by the ModelViewProjection matrix. Next we do the equivalent transform for the vertex normals, using the normal matrix.
    vec4 vertInMVSpace = gl_ModelViewMatrix * gl_Vertex;
    vec4 lightPosInMVSpace = gl_ModelViewMatrix * lightPos;
    vertToLightVec = vec3(lightPosInMVSpace.xyz - vertInMVSpace.xyz);
    UV = vec2(gl_MultiTexCoord0);
}
After that we perform the necessary lighting calculations. We need the vertices in modelview space so that the light's 'rays' and their reflection off the normals can be calculated properly. Similarly, the light coordinates need to be brought into modelview space. Once everything is in the same coordinate frame, we use simple vector subtraction to get the vector between the light and that particular vertex.
The last line simply sets the UV coordinates to a variable which will be passed through to the fragment shader (so we can apply the texture per fragment, or else it will look weird). That's it for the vertex shader, now we are almost done!
The first part of the fragment shader should look similar to this:
varying vec3 normal;
varying vec3 vertToLightVec;
varying vec2 UV;
uniform sampler2D diffuseMap;
Here we declare variables inside the fragment shader. When you want data to move from vertex to fragment, make sure you declare the same variable in each. You'll see that the first three are the same as in the vertex shader; these are all passed through. The final variable is a texture sampler: it tells the program that a 2D texture will be sampled and applied to the object, in this case the diffuse texture map bound to unit 0.
void main()
{
    gl_FragColor = texture2D(diffuseMap, UV);
    vec3 normalizedNorm = normalize(normal);
    vec3 normalizedVertToLightVec = normalize(vertToLightVec);
To finish up, it's a simple matter of applying the texture and calculating the remainder of the lighting. The first line samples the diffuse map at the interpolated UV coordinates, so each fragment knows which colour it starts with.
The next two lines ensure that everything is nicely normalized before proceeding. This is very important as any non-normalized vectors at this point can drastically alter how the light behaves, and will most likely cause artifacts in your scene.
Going back to algebra, we know that getting the dot product between two vectors returns a scalar value. For lighting, this value determines the amount of light each fragment will receive based on the angle it faces relative to the light source. It is clamped between 0 and 1 to ensure the final multiplication doesn't become a weird number.
    float diff = clamp(dot(normalizedNorm, normalizedVertToLightVec), 0.0, 1.0);
    gl_FragColor = gl_FragColor * diff;
}
This is it! Here we are! One final calculation to end it! This final step takes the colour from every fragment of the texture, and multiplies it by the scalar value we just got to determine how bright (the final colour) every fragment will be. Congratulations on completing your first simple shader.
Keep in mind that from here the possibilities are endless. You can do anything from adding more lights, to changing the hue/saturation of fragments, or even add in a shadow map (that is where it starts getting tricky). I hope you learned as much as I did, and shade on!
Tuesday, November 19
Ogre Scene Graph
I am currently working on upgrades for the Game Engine questions, so I'm going to dedicate this blog to scene graphs and their contribution to programming. In particular I will focus on their use for the solar system question.
A scene graph is essentially a tree hierarchy which contains all of the nodes and entities in your scene. At the very top is the root node, which is a parent of everything else in the scene. Underneath the root node will be all of its children, attached like branches. Each node can contain an entity, which can be a mesh or light or locator etc. The great thing about these nodes is that they each contain information about their respective entity, and keep everything organized.
A node contains information such as its parent, child, and all data pertaining to its attached entity. The entity will have values such as position, orientation, etc.
Ogre::Entity* Sun = mSceneMgr->createEntity("Sun", "Sun.mesh");
Ogre::SceneNode* SunNode = mSceneMgr->createSceneNode("SunNode");
mSceneMgr->getRootSceneNode()->addChild(SunNode);
SunNode->attachObject(Sun);
The code above creates an entity (the Sun) and a scene node for it, then adds that node as a child of the root node and attaches the entity to it. Since there were no translations involved, the entity will be placed at the origin, with no scaling or rotations from its original orientation. For the solar system question, this is a good place to add the Sun, since all planets (and therefore moons) will rotate around it. This works because each node's transform is relative to its parent node.
// Earth and EarthNode are created the same way as the Sun above.
SunNode->addChild(EarthNode);
EarthNode->setPosition(100, 0, 0);
EarthNode->attachObject(Earth);
Next we'll add a planet such as Earth to the scene graph tree, and offset it from the Sun's position. Now we have a child to the root node which is offset by 100 units in the x-axis. This new node also contains an entity, this time being the Earth mesh. It can be rotated on its own, but we want it to rotate based on the Sun's orientation. To achieve this revolving motion we will rotate the Sun itself.
SunNode->pitch(Ogre::Radian(Ogre::Math::HALF_PI));
Now the Sun has rotated 90 degrees, and the Earth revolves around it, since it is the child node. Also make sure to keep in mind which coordinate space you are doing transformations in: a child node that has been rotated will translate differently in local space vs world space. Since not all planets are the same size (obviously!), the following function will also be needed if you have not pre-built every planet to scale.
EarthNode->scale(0.1f,0.1f,0.1f);
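Putting the pieces together, a moon is just one more level in the tree; the names here are mine:

// Child of EarthNode, so it inherits the Earth's orbit around the Sun.
Ogre::Entity* Moon = mSceneMgr->createEntity("Moon", "Moon.mesh");
Ogre::SceneNode* MoonNode = mSceneMgr->createSceneNode("MoonNode");
EarthNode->addChild(MoonNode);
MoonNode->setPosition(10, 0, 0); // offset from the Earth, not the Sun
MoonNode->attachObject(Moon);
MoonNode->scale(0.02f, 0.02f, 0.02f);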
Using these simple functions, you can easily build the entire solar system, moons included, starting from the sun at the centre. Aside from creating a tree of nodes, the Ogre scene graph has many additional functions that can be taken advantage of, to create more complex scenes.
Source: http://www.packtpub.com/article/ogre-scene-graph
Wednesday, November 6
Working with MyGUI
As part of our revamped game idea for Capstone, our tool needs for Game Engine purposes have significantly changed. Our first tool for Vye King remains the comprehensive model viewer, which is now complete except for advanced shader functionality. The remaining tools are various component editors for Vye King, ranging from an objective editor to a creature and node editor. The core of these three editors is MyGUI, as they are menu-based editors which can be used to change game parameters. We are planning to link them all into one multifaceted editing tool.
The main editor will allow the user to create and edit custom objectives for the player in-game. The core gameplay in Vye King revolves around exploring the island and seeking these objectives, so this tool is being made to add replayability to the experience with custom content. This tool contains several GUI elements: text boxes for inputting the objective name and description, buttons to filter the type of objective, and another set of buttons to set the rarity of the objective, ranging from common to rare. Additionally, the player can specify a percentage chance of this objective becoming active (there will be a list the game chooses from).
The main part of this editor is the node map on the bottom left. On this map, users can click up to 4-5 nodes where the objective can possibly spawn. In our first iteration, we plan to only have pre-determined nodes to choose from, to prevent objectives spawning in impossible to reach places. Each of the 3 filters at the top right will change which nodes are active on the map. This is because certain objective types make more sense in certain locations in the level.
When the particular objective is chosen to become active in game, it will pick one of these node locations at random to create the objective at. Once the player completes the objective one (or several) more objectives will become active. The objectives are only limited by the types of creatures and objects we create for the game. Players are free to create interesting descriptions and parameters for their custom objectives.
Next I will cover the three sub-editors which match the three objective types. The main editor will contain a button called "Customize" to further edit the objective.
In the Kill editor, the user first loads a creature from the drop-down list at the top. We will include a model viewer for this editor so the user can preview the creature they choose. The 'Boss' check box amplifies the size and power of the creature when active. At the bottom the user can also specify a reward (from an item list) for defeating the creature. This can range from food to tools to any other resources in the game.
Main objective editor layout.
Kill Creature editor.
The Collect editor is fairly simple; the user specifies the item to collect, and the quantity the player must find. Again they can also customize the reward the player will receive.
Collect editor.
Finally, the Find editor hides an object or NPC in the vicinity of the node, which the player must then meticulously search for. With this objective type, the item they find is itself the reward, or the NPC gives them a reward for finding them.
Find editor.
Now that all the objective types have been defined, there is one other feature to cover. After creating and finalizing an objective, the user may then choose to add another in an 'objective chain'. In these chains, only the final objective will give the player a reward. Once the player is satisfied with all their objectives, this information is saved to an XML which the game can parse to load in the objective data. While playing the game, the character can be given any custom objective made from these editors.
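The schema is still ours to design, but a saved objective might look roughly like this; every tag and value below is illustrative, not the final format:

<objective name="Wolf Hunt" type="kill" rarity="rare" chance="15">
    <description>Slay the great wolf stalking the northern cliffs.</description>
    <nodes>3 7 12</nodes>
    <creature boss="true">Wolf</creature>
    <reward>Iron Axe</reward>
</objective>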
Finalize screen.
Sunday, November 3
Vye King Cameras
Our game Vye King is a third person survival-action game set on an island several hundred years ago. In it your character must explore the island and search for resources to help you survive. Along the way, you will receive additional objectives and find new perils which you must overcome. Our camera will be in the third person perspective, similar to the style God of War or Prince of Persia uses.
The camera will loosely follow your character as you traverse the island. When you change direction, the camera will follow a spline and rotate around your character to make sure you always see ahead of your character, in whatever direction he is going. Our camera movement will utilize the traditional Catmull-Rom splines. We plan to make the interpolation as smooth as possible to provide a very fluid and agile feel to the camera movement.
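For reference, evaluating a point on a Catmull-Rom segment comes down to one cubic; this is a minimal sketch using Ogre's vector type, with names of my own choosing:

// p1 and p2 are the endpoints of the current segment; p0 and p3 are
// their neighbours on the spline. t runs from 0 (at p1) to 1 (at p2).
Ogre::Vector3 catmullRom(const Ogre::Vector3 &p0, const Ogre::Vector3 &p1,
                         const Ogre::Vector3 &p2, const Ogre::Vector3 &p3,
                         float t)
{
    float t2 = t * t;
    float t3 = t2 * t;
    return 0.5f * ((2.0f * p1) +
                   (-p0 + p2) * t +
                   (2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3) * t2 +
                   (-p0 + 3.0f * p1 - 3.0f * p2 + p3) * t3);
}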
Any indoor sections of our game will feature a similar camera system, but more zoomed in to almost an 'over the shoulder' distance. This will be more effective in areas such as narrow hallways or tunnels, so you can see further ahead of your character. Optionally we are considering an option to switch to first-person perspective in cramped areas. This will be a standard first person camera which points toward where your character is looking.
Additionally we plan to have an in-game cinematic camera. This will function based on various triggers in the game world. The basic idea is that while your character is exploring the island, objectives or enemies will cause the camera to suddenly turn toward it. The amount of rotation is based on the proximity or importance of the event. For example, if a large dangerous enemy is about to ambush you from the side, the camera will rapidly turn right to view both you and where it is about to jump out from. This will help the player avert death and find new objectives.
In cases where it is a minor enemy or objective, the camera will make a much smaller shift to the side. This alerts a player to a nearby event but they will know it is not something extremely dangerous or important. The goal of this added camera feature is to create a more visceral experience, emphasizing the dangerous nature of the character's surroundings. This type of cinematic camera can be seen in games such as Fable or Bioshock Infinite, the difference being our camera will not be activated by a button, but rather intuitively activates based on the respective trigger.
In situations where large creatures do attack the player, we are planning to have small quick-time events which the player must overcome in order to survive. In these situations a zoomed-in cinematic camera will activate, getting very close to the action. This will create a very intense atmosphere, where the player feels like they are right in the game world and connected with what is happening to their character.
The combination of these three (and possibly the first-person) cameras strikes a balance between giving the player a comprehensive view of their surroundings and specializing when necessary. The cameras will let them openly explore our large game world while simultaneously providing insight into imminent dangers or interesting areas. Essentially this will all be done with one camera which performs the various behaviours depending on what is happening to the character. We believe this will provide an organic feeling to the game. The best cameras are the ones you don't notice at all.
Monday, October 21
Tool Progress
Now that our GDW team has been making progress on tool development for our capstone project, I will explain what we've done and how it works.
The first and probably most important tool we have been working on is the comprehensive model viewer. The purpose of this tool is to give team members a preview of any game assets with textures, lighting/shading, and other post-processing effects. This ensures the assets are up to the required quality level as well as form a cohesive theme with all other assets within the game.
After starting with a base project using the Project Generator, the first step was to set up a camera system. Since this tool is a model viewer, it was sufficient to set up a TwoLoc MayaCam. The code is very simple:
mCam = mMgr->createCamera("MainCamera");
mCam->setAspectRatio(Ogre::Real(OGRE_CORE->mViewport->getActualWidth()) /
Ogre::Real(OGRE_CORE->mViewport->getActualHeight()));
mCam->setNearClipDistance(1.0f);
mCam->setFarClipDistance(10000.0f);
OGRE_CORE->mViewport->setCamera(mCam);
OGRE_CORE->mViewport->setBackgroundColour(Ogre::ColourValue(0.1f, 0.1f, 0.1f));
mCam->setPosition(0.0f, 20.0f, 0.5f);
mCam->lookAt(0, 0, 0);
OGRE_CORE->AttachMayaCam(mCam);
Going through this, the code creates a camera, then sets its various viewing parameters. It then tells Ogre to use this camera and sets a default background colour to display the object on. Now that the camera exists and is activated, you then set up the "physical" properties of the camera, such as where it is and which direction it is looking. Finally, the camera is attached to the scene as a MayaCam and is ready to use. The MayaCam uses the same control scheme as Maya, so users are already familiar with it.
Additionally we implemented code to switch between solid and wireframe model views.
void FBXViewer::cameraMode()
{
    if (mCam->getPolygonMode() == PM_SOLID)
        mCam->setPolygonMode(PM_WIREFRAME);
    else
        mCam->setPolygonMode(PM_SOLID);
}
Using the provided FBX Loader and myGUI, we can browse through files and load any model that we want. To properly view the model, we added in a default light which enables the viewer to clearly see whichever object is loaded.
Ogre::Light * FBXViewer::createLight()
{
    //////////////////////////////////////////////////////////////////////////
    // Adding in the light pointLight
    Ogre::Light *pointLight = mMgr->createLight("pointLight");
    pointLight->setType(Ogre::Light::LT_POINT);
    pointLight->setPosition(Ogre::Vector3(0.0f, 5.0f, 0.0f));
    pointLight->setDiffuseColour(1.0, 0.0, 0.0);
    pointLight->setSpecularColour(1.0, 0.0, 0.0);
    return pointLight;
}
The code is simple; it creates a light and initializes the standard light settings. Now with an FBX loader, lighting, camera, and viewing modes, we have all the basic features we need for a model viewer. In the coming week our team will be adding more complex features such as shaders, multi-object loading, multiple viewports, and other small tweaks.
Wednesday, October 9
Wreaking Havok
There are several steps to setting up Havok from a Maya scene, but it is a great physics engine and is worth the effort. To begin with, make sure your Havok plugin is enabled in Maya, under the plugin manager. Now you should have a Havok tab which contains all its specific features. To create a simple bouncing ball scene, start with a plane and a sphere raised above it.
Next you must select each object in the scene and click the RB button in the Havok tab. This converts the object into a rigid body that Havok will later apply physics to. Once your objects are rigid bodies, you can edit their settings in Maya to change mass, centre of gravity, restitution, etc. It is important to add mass to any object you wish gravity to affect, otherwise it will just float there motionless.
Next you must perform a Havok export on the scene. This window contains all the options for configuring your Havok physics file. There are many options you can play with, but there are some mandatory features you must add to the configuration (and they must be in the correct order). At the very least, you must transform the scene (which converts it to the correct coordinate frames) and write to file (which saves it in the format you specify). For our scene, we will also need to create rigid bodies and create a world for them to be in. These two steps allow objects to simulate physical properties, and are necessary for gravity and collisions to work properly.
Also note that you can choose the XML format so the exported data is human-readable. Once you have run the particular configuration you have set up, your scene is ready for export. It is recommended to export each object individually, to keep your files organized. For this example, we will use Maya's export-all feature and export the scene as an .FBX. This is the file type that Havok prefers, and it supports the most features.
Before we get into TwoLoc, keep in mind there are some constraints for Havok scenes. You can create materials and lighting for your objects, but you cannot do everything. First, some types of lighting (ambient) and transformations are restricted, and more importantly you cannot use any material other than lambert on your objects. Also, only single colours are allowed if you don't put a texture on your models. To preview your scene, you may add the preview tool to the Havok configuration.
With both the .HBK and .FBX files saved, you now go into TwoLoc and find the FBX Loader code. Within the code there is a section to specify the file paths for your respective files (also check your textures). Make sure these are correct, and then simply run the program. If all steps were done correctly, the scene you created in Maya should now be running from TwoLoc. To double check that Havok is working, you can open up the Havok Visual Debugger and it should display the exact same thing. In the VD you can even manipulate objects and they will be updated in real-time in your game engine.
Finally, whenever you update your model or texture/lighting in Maya, you must go into the Havok files and delete the respective cache files. TwoLoc will not update anything until these have been deleted. If your model doesn't look right in your engine, make sure the cache has been deleted!
Using these simple concepts, you can create increasingly complex Havok scenes to use physics with. It's all about experimentation, and playing around with the many different settings that Havok provides.
An example scene in Maya. Note the mass of 1 for the sphere.
You can also add 'Bake Scale' to ensure the engine does not have to perform scale calculations.
The ball has just bounced off the plane. Everything seems to be working so far.
It works!
Thursday, September 26
Tools of the Trade
Since we've been discussing the many uses and functionalities of game engines, this blog I will go over the tools my GDW group plans to create in TwoLoc, in conjunction with the GDW and Capstone. Our group (Gallium Gaming) has been working on a multiplayer game over the summer using HeroEngine; for GDW we have been approved to create tools for our game engine instead of a whole new game. This also ties into our Capstone Project, as creating and marketing the game will make up our project parameters.
The tools we make will be created using OOP and Game Engine concepts. We have planned to make tools which will provide extra functionality our project needs and help streamline the production process. With that in mind, our first tool will be a batch image converter. HeroEngine prefers textures be in .DDS (DirectDraw Surface) format because it uses DirectX for rendering, and .DDS files can be used in several different ways. .DDS files are also readily usable in OpenGL via ARB texture compression in GLSL.
We chose this as our first tool because we will need to convert all our texture files to .DDS prior to uploading them to the HeroEngine repository. A batch converter will remove the tedious process of saving files as .DDS in Photoshop (only possible after installing the proper plugin from NVIDIA). The idea is that because a game engine excels at processing and converting data, we can use this functionality to convert any number of texture file types into .DDS. Polymorphism will be very useful because we can call a virtual function like Convert() on each input file and they will all be converted, even if there are several different image formats.
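As a sketch of that polymorphism, with all class and function names being mine rather than the tool's actual code:

#include <string>
#include <vector>

// Common interface: every input format knows how to convert itself to .DDS.
class ImageFile
{
public:
    virtual ~ImageFile() {}
    virtual void Convert(const std::string &outPath) = 0;
};

class PNGFile : public ImageFile
{
public:
    void Convert(const std::string &outPath) { /* decode PNG, write DDS */ }
};

class TGAFile : public ImageFile
{
public:
    void Convert(const std::string &outPath) { /* decode TGA, write DDS */ }
};

// The batch loop never needs to know which format each file actually is.
void convertAll(std::vector<ImageFile *> &files, const std::string &outDir)
{
    for (size_t i = 0; i < files.size(); ++i)
        files[i]->Convert(outDir); // virtual dispatch picks the right decoder
}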
The next tool we are planning to create is a HeroEngine model viewer. There is already an existing one, but it only features the geometry and base texture of the model. HeroEngine uses .HGM mesh files to represent objects instead of the standard .OBJ format. .HGM's are special in that they tell HeroEngine the exact shape of the mesh (whether concave or hollow) for collision purposes. We plan to create our model viewer with additional functionality such as applying shaders and post-processing effects to the models.
Once again polymorphism will help keep this tool efficient, as we can Load() any model and then manipulate it however we want. This can include characters, objects, and any other 3D art assets. We are building this advanced model viewer so we can preview the look of our assets without going through the whole process of getting a model into the game world. This consists of uploading the model to the repository, importing it into the HeroEngine library, and finally loading it into the game world.
Finally, we are planning to create a couple smaller export tools for Autodesk Mudbox and Google Sketchup. The exporters will convert their respective file formats into .HGM files that the HeroEngine can then use. These tools will have a single purpose each, and thus are smaller and lower priority for the game. Like the first two tools, they are primarily designed to streamline the asset production process. We plan to have a large variety of weapons and objects in our game, thus need to be able to develop them rapidly. Once built, these tools will decrease production times in the long run.
Wednesday, September 18
Learning from Errors (Argh!!)
The past two weeks we have focused on setting up all the necessary components of the TwoLoc game engine. In hindsight it was fairly simple, but at the time it felt like there were a million steps and errors along the way. In the end I actually had to delete everything related to TwoLoc and restart, my laptop nearly breaking in the process. This will be a short story of the setup process and what I learned along the way.
To begin, we got two links for Bitbucket, an online code-hosting service. In conjunction with TortoiseHg, we used the two links provided in the tutorial to clone repositories onto our local machines. There were two main repositories: the dependencies and the engine itself. The dependencies are all the files which the engine requires to run, while the engine contains the programs themselves. After a lengthy process (everyone was cloning at the same time), we now had a client-side version of the engine and dependencies.
The next step was to set up the PATH directories for the dependencies. We opened a .bat file which performed this task. To make sure everything worked properly, we installed the Rapid Environment Editor, a program to easily manage file paths. The key is to ensure the environment path matches your local files. Everything went well up to this point, but it soon went downhill.
With the paths setup properly, we went into the dependency solution and performed a build (making sure it is set to Debug, not Debug_dll). At this point the errors began. The errors indicated that the program could not find a file called dhinput.cpp. Saad and I looked through and found the file, so it was definitely there. After a bit of investigating, we discovered one of the file paths in the include directory was missing a backslash. We made sure to fix this in the REE as well.
With that done, a new build of the dependency solution proved more successful. The rest is simply repeating the process for the TwoLoc engine. After clicking the install.bat file, the path setup only took two seconds, which seemed suspicious. Upon trying to build the engine, there were missing file and linker errors all over the place. Checking back in the REE, I found the path had not been set up properly for the engine. Fixing that, I tried again and got a corrupt library error. At this point I asked Saad for help and we fixed it, but the error kept recurring.
Getting very frustrated, I deleted everything to do with TwoLoc and started from scratch. A couple blue screens and a Windows Recovery later, I re-cloned the repositories and rebuilt all the solutions. This time I checked every path beforehand, ensuring nothing went wrong along the way. Sure enough the second attempt was much smoother and I could actually try out some of the engine samples.
Once in the engine, you must first select a project and set it as the Startup Project. This means it will open all pertinent files for only that project when you try to debug. One final step is to copy the linker directories into the debug working directory. This makes sure all the files end up in the proper location once you run the project.
Yay! The projects ran and I played around with the samples. Unfortunately the physics cannon was broken, and Saad walked us through using the call stack and break points to debug the issue. The culprit was an overflowed buffer, and a quick increase in buffer size was sufficient to fix the error. It was a rocky start, but I can finally get to blowing things up with a physics cannon.
The paths look good!
Monday, September 9
Hero Engine: An Overview
To start this round of blogging, I will begin with a quick look into an engine several classmates and I have been working with over the summer break. A group of us have been working on a multiplayer game called They Stole My Sheep. It has been designed to not take itself too seriously while tuning the game play to promote a competitive atmosphere. We chose to use Hero Engine because of its relatively easy to use interface and multiplayer capabilities.
Hero Engine features a full graphical UI as well as a script editor, similar to how Unity is laid out. The engine comes with several pre-loaded base scripts which the user must adapt to work with the game they are creating. The game world itself is constructed through the use of a height map editor and various terrain tools such as SpeedTree. Combining these features with imported custom characters and models, you are able to build a fully functioning game with prominent multiplayer features. One such example is the MMO Star Wars: The Old Republic, which was released at the end of 2011 to positive reviews.
The engine contains two main programs: Hero Blade, which runs the scripts and houses the game world, and the Repository Browser, which enables easy transferring of files from your computer or shared network storage over to the server-side storage of Hero Engine. Together these two systems form the basis of Hero Engine and create an efficient pipeline to get assets into the developer's game world.
Within Hero Blade, several key systems provide the functionality of the engine. These include: Terrain Editor, HeroScript Editor, physics tools, and editors for lighting, GUI, post-processing, water, and particles. In addition it provides helpful error messages in the chat panel to accelerate the bug fixing process. The Repository Browser primarily syncs files to and from the client and server.
After learning more about the nuances of Hero Engine, we have come across many of its pros and cons. While it does boast many different networking capabilities, we have found that some of its functions are tricky to use effectively, let alone find them. This steep learning curve is also hindered by the low level of documentation for Hero Engine. There is a wiki for the engine but it is not very comprehensive and the forums often don't have the concrete answers we are looking for.
With all that said, Hero Engine has provided us with handy world editing tools with which we rapidly set up a full game level. The process to set up the art pipeline takes a few minutes, but after that the artists can sync files to the server with ease. The asset pipeline has also been very beneficial for the artists, as any model can be transferred and updated in real-time while the engine is running, allowing for immediate feedback in the game world.
Though our team has been presented with many challenges, we are determined to unlock the potential of Hero Engine and fully utilize all its built-in features. I will continue to add updates of our progress as we continually build on the game world we created. Hopefully we will soon have a game play video to showcase and let everyone see what you can accomplish with Hero Engine.
HeroBlade program showing a small portion of our game world.
Repository Browser has synced three files to the Hero Engine server.