It’s been a while since I posted anything to this blog, so here I am trying to revive it again.

This time I do have some good news: last month I participated in the July VIP Teach Challenge hosted on Skillshare. Skillshare is a great platform where anyone can teach any little skill they may have, and some people take it further and produce really professional courses there.

I’ve been making video tutorials about game development with Game Maker for over seven years now, mostly on YouTube and in Spanish. Making a course on a topic that is not my main strength, and in another language, really proved to be quite the challenge.

My first course on Skillshare

I chose to teach pixel art this time. I haven’t been doing it as much as game programming, but I’ve improved quite a bit in the last year, so I felt like I should share some of my experience.

All the styles used for the course

The whole course is made using Aseprite and a graphics display tablet (although a mouse is also a valid tool). The main idea is to draw an original character in five different styles: four taken from other video games (I chose Sonic, Metal Slug, Pokémon and Blasphemous), and a final drawing in our own style, applying what we learned over the rest of the course.

After this, we draw them again in different poses. I used a site called Quickposes to select some interesting pictures of people to use as references.

Drawing my original character in the Metal Slug style

I will continue to release new courses, mainly about pixel art, but I’m pretty sure I will switch to teaching game programming at some point as well.

If you are interested in learning how to do pixel art, you can check out my course on Skillshare and also get a two-month free trial for this or any other course on the platform using this referral link.

https://skl.sh/2vmeyji

Thanks for stopping by, and I’ll see you next time.

Welcome back. Last time we did a quick introduction to Game Maker 3D using Game Maker Studio 2. Before we can continue to create 3D objects and the rest of the stuff, we need to discuss shaders.

(If you missed part 1 of the tutorials, you can read it here)

First of all, shaders are a whole topic on their own, so this will be just an overview of how they work and what sort of things we may encounter when working with them in GMS2.

Default shaders used by GMS2
This is the default shader used by GMS2

By default, GMS2 creates a shader using a language very similar to GLSL ES. You can follow the official specification to get a full overview of the language.

You can also change the language to GLSL or HLSL 11 if you are targeting a specific platform. Because GLSL ES is supported by all platforms and is also the default selection in GMS2, it is the language we are going to use.

Types of variables

GLSL ES is a strongly typed language, which means that every variable must have a type specifying what kind of data it can hold.

Definition of a variable in GLSL
The order is: type variable = value;
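As a quick illustration of that order (the names and values here are made up for this example):

float brightness = 0.5;  // type, then name, then value
int   count      = 3;
bool  enabled    = true;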

There are several types, plus modifiers you can apply to them. This is a list of the basic types, but please refer to the official documentation for the full explanation.

To put it simply, there are just a few types that you need to think about:

  • Your basic variables from all languages:
    • bool – for conditionals.
    • int – for integer numbers.
    • float – for floating-point numbers.
  • Vectors – they can have different dimensions:
    • 2-dimensional (x,y)
    • 3-dimensional (x,y,z)
    • 4-dimensional (x,y,z,w)
    • They also have different accessors for when you are dealing with different kinds of data (see the sketch after this list):
      • The general-use accessors are (x,y,z,w).
      • If you are dealing with colors you can use (r,g,b,a).
      • If you are dealing with texture data, then (s,t,p,q).
      • So basically x==r==s, y==g==t, and so on.
    • They can also be of different types depending on the first letters:
      • vec is used for float vectors.
      • ivec is used for integer vectors.
      • bvec is used for boolean vectors.
  • Matrices – mainly used to represent things like transformations and projections. They also come in different dimensions.
  • Samplers – used for texture data. They also come in different dimensions.
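Here is a small sketch putting those types together, written as a tiny fragment shader (all the names are made up; the comments point out the equivalent accessors):

uniform sampler2D u_texture;  // a sampler holding 2D texture data

void main()
{
    vec2  uv   = vec2(0.5, 0.5);            // 2-dimensional float vector
    ivec3 cell = ivec3(1, 2, 3);            // 3-dimensional integer vector
    bvec2 mask = bvec2(true, false);        // 2-dimensional boolean vector
    vec4  tint = vec4(1.0, 0.0, 0.0, 1.0);  // 4-dimensional float vector

    float red   = tint.r;  // same component as tint.x or tint.s
    float alpha = tint.a;  // same component as tint.w or tint.q

    mat4 identity = mat4(1.0);  // a 4x4 matrix (here, the identity)

    gl_FragColor = texture2D(u_texture, uv) * tint;  // samplers are read with texture2D
}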

To get a full overview of how you can use vectors and matrices, you can check a tutorial on linear algebra. This tutorial from Wolfire explains how to use linear algebra for game development: Link to his blog

Vertex and Fragment Shaders

Every shader is made up of two scripts that we have to work on.

Vertex Shader

Default Vertex Shader
The default vertex shader created by GMS

The vertex shader has two main tasks:

  • Set the position of the geometry to be displayed on the screen. This is achieved by multiplying the vertex position by the object transformation and the camera projection (we’ll look at this in the next tutorial). The result must be stored in gl_Position.
  • Pass data to the fragment shader. This data is interpolated between vertices. For example, if one vertex is red and the one next to it is green, the color will be interpolated between red and green, giving yellow in the middle (which is why our triangle from the last tutorial displays many colors while only having red, green and blue assigned to its vertices). There is a sketch of the default vertex shader right after this list.
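To make the two tasks concrete, this is roughly what GMS2’s default vertex shader looks like (reconstructed from memory, so treat it as a sketch rather than the exact file): it computes gl_Position with the built-in gm_Matrices array and passes the colour and texture coordinate on to the fragment shader.

attribute vec3 in_Position;      // (x,y,z)
attribute vec4 in_Colour;        // (r,g,b,a)
attribute vec2 in_TextureCoord;  // (u,v)

varying vec2 v_vTexcoord;
varying vec4 v_vColour;

void main()
{
    // Task 1: place the vertex on the screen
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * vec4(in_Position, 1.0);

    // Task 2: pass data (colour and texture coordinate) to the fragment shader
    v_vColour = in_Colour;
    v_vTexcoord = in_TextureCoord;
}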

Fragment Shader

Default Fragment Shader
The default fragment shader created by GMS

The fragment shader has only one task: deciding which color a pixel on the screen should be. This is done by combining different techniques, such as reading texture data and calculating lighting and shadow information, and writing the result into gl_FragColor.
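And this is roughly its fragment counterpart (again a sketch from memory): it samples the bound texture, tints it with the interpolated vertex colour, and writes the result to gl_FragColor.

varying vec2 v_vTexcoord;
varying vec4 v_vColour;

void main()
{
    // Combine the texture sample with the interpolated vertex colour
    gl_FragColor = v_vColour * texture2D(gm_BaseTexture, v_vTexcoord);
}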

Attributes

Definition of attributes in GLSL
Definition of an attribute.

We can only work with attributes in the vertex shader.

They represent the data coming from the vertices of the geometry: for example, the vertex positions, colors, texture coordinates, joint weights and indices, among others. They can be used in creative ways to send different data to the GPU, but those are the usual uses.

The trick is that you can only handle one vertex per operation. For example, in the previous tutorial we drew a triangle. It had 3 vertices, but in the vertex shader we can only handle one vertex at a time, and we don’t have any access to or information about the other vertices (or even which vertex we are working on).

The reason for this is that the GPU needs to work on as many vertices in parallel as possible to complete the operation.

Attributes In Game Maker

Usually, when you work with shaders you can create as many attributes as you want and name them however you want. In Game Maker, instead, we only have a predefined set of attributes, and we cannot change their names.

These are the basic ones you can use (you can get more with a bit of trickery, but for now these will be enough).

GMS2 attributes
Basic attributes in GM Shaders

Line by line, they are the vertex positions, the normals (for lighting calculations), the colors (per vertex) and the texture coordinates. We’ll work with them in the next tutorials.
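Written out, the attribute block in the screenshot looks roughly like this (in_Position and in_Colour also appear below; the other two names are GameMaker’s usual defaults, so treat this as a sketch):

attribute vec3 in_Position;      // vertex position
attribute vec3 in_Normal;        // normal, used for lighting calculations
attribute vec4 in_Colour;        // per-vertex colour
attribute vec2 in_TextureCoord;  // texture coordinates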

Passing attributes in GMS2

The way we pass the attributes from Game Maker to the GPU is through a vertex format and a primitive. We already saw this in the previous tutorial:

Defining a Vertex Format

Create a vertex format
Defining a vertex format

First we need to define which attributes we are going to use. In the example above we said that we want to use vertex positions (in_Position) by calling vertex_format_add_position_3d() and vertex colors (in_Colour) by calling vertex_format_add_color(). We store the result in a variable (global.VERTEX_FORMAT) to work with it later.

Defining a Primitive

Create a primitive in GMS2
Creating our triangle primitive

We’ll talk more about building primitives in its own tutorial. For now, the only thing you need to know is that when you build a primitive you have to follow the same order in which you specified the vertex format. You can see that I first pass a position (vertex_position_3d) and then a color (vertex_color), always in that order.

You can also see the constant use of the variable buffer; this is the variable that contains the primitive.

Passing the Primitive to the Shader

Submitting a primitive to the shader
Drawing our triangle

To draw our primitive we only need to call the method vertex_submit and pass the variable that contains our primitive.
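As a compact recap of these three steps, the GML looks roughly like this (the coordinates and colours are placeholders; part 1 of the tutorials builds this up in more detail):

// 1. Define the vertex format (position first, then colour)
vertex_format_begin();
vertex_format_add_position_3d();
vertex_format_add_color();
global.VERTEX_FORMAT = vertex_format_end();

// 2. Build the primitive in the same order as the format
buffer = vertex_create_buffer();
vertex_begin(buffer, global.VERTEX_FORMAT);
vertex_position_3d(buffer, 100, 100, 0); vertex_color(buffer, c_red, 1);
vertex_position_3d(buffer, 540, 100, 0); vertex_color(buffer, c_green, 1);
vertex_position_3d(buffer, 320, 380, 0); vertex_color(buffer, c_blue, 1);
vertex_end(buffer);

// 3. Submit the primitive to the shader (usually inside a Draw event)
shader_set(shd3D);
vertex_submit(buffer, pr_trianglelist, -1);
shader_reset();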

Uniforms

Defining a uniform in GLSL
Declaration of a uniform in GLSL

The next thing to talk about is uniforms. We didn’t see them in our last example, but they are used to pass individual values to the GPU. For example, let’s say you want to tint a character depending on how much health they have: at full health the character is displayed with its normal colors, but when it’s almost dead you may want to tint it red. You can pass that tint through a uniform that specifies the level of damage.

uniform vec4 u_vHurt;

Uniforms are visible in both shaders, and there is no limit on how you can use them, since a uniform is basically a variable you put a value into. The main uses we are going to see, however, are passing the projection and transformation matrices (we’ll do this in the next tutorial) and passing the image used to texture our objects (we’ll do this in a later tutorial).

Passing Uniforms in GMS2

First you need to get the address of the uniform. This is done by a simple call in GML:

Getting a uniform address in GMS2
Obtaining a uniform address

Here we are obtaining the uniform u_vHurt that we defined in the shader shd3D and storing its address in the variable u_vHurt (it’s a good idea to give this variable the same name it has in your shader).

We only have to do this once, so we usually put this in a script at the beginning of the game (in our case it would be the script scrInitSystem).
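A one-line sketch of that call, as it would appear in scrInitSystem:

u_vHurt = shader_get_uniform(shd3D, "u_vHurt");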

Obtaining samplers (the uniforms for textures) is done differently, but we don’t have to worry about that unless we are working with more than one texture.

Sending the Data to the Shader

Now that we have our uniform address in a variable, we need to send the data. Depending on what type of data we want to send, we’ll have to use a different function (there is a small usage sketch after this list):

  • shader_set_uniform_f: This one is used to send floats into a uniform: shader_set_uniform_f(u_vHurt, hurtRed, hurtGreen, hurtBlue, hurtAlpha);
    • There is a variation called shader_set_uniform_f_array which is used to send an array of floats instead of the values themselves: shader_set_uniform_f_array(u_vHurt, hurtColor);
  • shader_set_uniform_i: The same behavior as the previous one, but it only sends integer values. (You can also use the shader_set_uniform_i_array variation to send an array of values.)
  • shader_set_uniform_matrix_array: This one is used to send a matrix through an array: shader_set_uniform_matrix_array(u_mProjection, projectionArray); It also has a version without the _array part, but that one behaves differently (it works with GMS2’s built-in matrices), and we are not going to use it.
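Putting it together, the sketch below sends the hurt colour right before drawing (the hurt* variables are placeholders for this example; uniforms are set while the shader is active):

shader_set(shd3D);
shader_set_uniform_f(u_vHurt, hurtRed, hurtGreen, hurtBlue, hurtAlpha);  // fill u_vHurt with four floats
vertex_submit(buffer, pr_trianglelist, -1);
shader_reset();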

Varyings

The last thing we need to talk about is varyings.

Defining a varying in GLSL
Definition of a varying

They are declared in the shaders themselves, so you don’t pass data into them directly from GML.

They need to be declared in both the vertex and fragment shaders, with exactly the same name and the same type.

They are used to interpolate values between vertices (values usually received through an attribute). Going back to our triangle example: we have three vertices, and each vertex has a color and a position assigned to it. We used a varying to interpolate the colors, which is why the color goes from red at one vertex, slowly through yellow, to green at the next vertex.

Displaying a triangle with color interpolation
In the middle it gets gray because all the colors (RGB) are the same value there

Passing a Varying from the Vertex to the Fragment shader

Using varyings in GLSL
We are interpolating the colors in this example

First, you can see we declared the varying vec4 v_Colour;. Then, inside the main method (line 11), we define which value this vertex is going to have (one is red, another is green, the final one is blue) by assigning the varying (v_Colour) the value obtained from the attribute (in_Colour).
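Since the screenshot isn’t reproduced here, this sketch approximates the vertex shader being described (the assignment discussed above lands on line 11):

attribute vec3 in_Position;
attribute vec4 in_Colour;

varying vec4 v_Colour;

void main()
{
    vec4 object_space_pos = vec4(in_Position, 1.0);
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * object_space_pos;

    v_Colour = in_Colour;  // hand the vertex colour to the varying
}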

Receiving the Varying in the Fragment Shader

Receiving varyings in a fragment shader
Receiving the varying

In the first line you can see the same declaration of the varying, vec4 v_Colour; same type, same name.

In this step the variable is already interpolated. We may have a red value, a yellow, a green or a gray.

The final step is in line 5 where we output that color to the screen (gl_FragColor).
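Again as a sketch, the fragment shader side looks like this (the output happens on line 5):

varying vec4 v_Colour;

void main()
{
    gl_FragColor = v_Colour;  // output the interpolated colour for this pixel
}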

Wrapping up

There is still a lot to be written about shaders; as I said at the beginning, they are a full topic on their own. But with this little bit of info we can start sending data to the GPU and get all sorts of interesting effects going.

This is going to be all for this tutorial. In the next one we are going to talk about the projection and transformation matrices.

Thanks for stopping by.

It’s been a while since I participated in any jam. I’ve been seeing a lot of good game jams hosted lately, such as the A Game By Its Cover jam and the annual js13k, and I’d love to participate in those as well, but I don’t really have the time or an idea for them.

So instead, this time I will participate in the Demake Jam hosted on itch.io. It made me realize how few games I know that could be demade; most of the games I like are franchises that already come from the 80s and 90s, so a demake would be more a matter of going back to those origins.

I was able to find a good idea for a small game. I’m not sure how much of it I will be able to demake (if any), considering that I only have nine days to do anything.

Demaking Hitman

My entry for the Demake Jam is the original Hitman. It’s one of my favorites, and I have lots of memories of playing it as a child.

The idea is to make this game using an NES palette so it looks more retro.

I didn’t have much time yesterday to do anything but at least I was able to sketch the first screen. The name will probably change though to something more appealing.

Gunman Title Screen for the demake jam entry
I will probably change the name to GUNMAN instead

There are a lot of people participating in the Demake Jam. I’m really interested in seeing how many of them complete their games and what kinds of games they demake.

I’m making it using TypeScript and a custom engine. I was planning on using Game Maker Studio 1.4, since I have the license for exporting to HTML5, but my PC is having lots of issues running it.

That’s all for today; tomorrow I’ll post the next Game Maker 3D tutorial.

Thanks for stopping by.

I always wanted to talk about 3D development using Game Maker, but I never really found a format that I liked. I made some small courses in Spanish on YouTube, but I was never quite happy with how they were scaling, so this is a new attempt to bring this topic to people.

Now, I know: why would you even attempt to create 3D games with Game Maker when tools like Unity, Unreal and Godot are much better for this task and have very good license deals?

Well, I don’t really have an answer to that; those engines are obviously much better than Game Maker will ever be for 3D. With that being said, there is still knowledge to be gained from this experiment.

While Unity and the others are really powerful, they hide a big chunk of the pipeline involved in 3D rendering. This is great for businesses, obviously, since you want the shortest possible path to release your product, but if you want to improve your skills as a programmer then you should really tackle this and many other difficult tasks.

This mini course is just going to cover how to set up a 3D dev environment using Game Maker Studio 2; any tutorial beyond that will be made with a general OpenGL/WebGL engine.

You don’t need any previous 3D experience, but you should be comfortable using the Game Maker engine before continuing.

CREATING OUR FIRST TRIANGLE IN GAME MAKER

It may sound like a simple task, but this is the most important step. More often than not, when you first set up your 3D environment you don’t see anything you drew on the screen, and debugging an OpenGL application can be a real pain. That’s why it’s important to make sure you can draw things.

You can find a GitHub repo for this course at the end of this post.

Let’s start by creating our assets in the project. Open Game Maker Studio 2, create a new project and add these files:

  • A script called scrInitSystem; this one will be used to initialize all the global variables needed for this project.
  • A shader called shd3D. Since Game Maker uses a 2D shader by default, we will create our own.
  • An object called objSystem; this is the main object of the game and will handle all the 3D functions.
  • An object called objTriangle; this is the object to be displayed.
  • A room, which should be added by default by the engine.

By now, your hierarchy should look like this:

The hierarchy of the project showing all the assets created so far.
This is how your project should look so far

shd3D

Let’s start by modifying our shader. Now, if you don’t know how to use shaders, they are a whole topic on their own. We will discuss them in time, but for now I just want you to copy what I’m going to show you.

This is how a shader looks by default when you create a new one:

Default shader created by Game Maker
Vertex Shader of a new Shader

What we are going to do is eliminate all of the unnecessary data and leave only the attributes needed to place our triangle, and then add color to it.

Make sure you are on the first tab, shd3D.vsh (this is the vertex shader), and paste this code:

Custom Vertex shader that receives a color and a position
Vertex shader
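Since the code was originally shown as a screenshot, here is a vertex shader that matches the description (a position attribute, a colour attribute, and a varying to pass the colour along); treat it as a sketch rather than the exact original:

attribute vec3 in_Position;
attribute vec4 in_Colour;

varying vec4 v_Colour;

void main()
{
    // Place the vertex using Game Maker's built-in world-view-projection matrix
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * vec4(in_Position, 1.0);

    // Pass the vertex colour on so it gets interpolated across the triangle
    v_Colour = in_Colour;
}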

Then go to the second tab, shd3D.fsh (which is the fragment shader), and add this:

Custom Fragment shader that draws a color to the screen
Fragment shader
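And a matching fragment shader sketch:

varying vec4 v_Colour;

void main()
{
    gl_FragColor = v_Colour;  // draw the interpolated colour
}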

As I said, don’t worry too much about what we just added, because we will talk about shaders in detail in another post. Basically, what this shader does is: it receives two types of parameters, a position and a color; it interpolates the colors from vertex to vertex; and finally it draws the result to the screen.

scrInitSystem

Let’s continue with this script. The objective is to initialize all the global variables that we are going to need, and it should be executed at the beginning of the game by objSystem. This is how it should look:

Initialize the vertex format to be used in the 3D application.
scrInitSystem

Since we added a custom shader that receives a position and a color, we need to make sure that we send exactly that data. So what this script does is create a new vertex format, telling it that we are going to send first a position (vertex_format_add_position_3d) and then a color (vertex_format_add_color). This order is extremely important, because later, when building our primitives (the triangle in this case), we need to add the data in the same order.
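In code, the script described above boils down to something like this:

vertex_format_begin();
vertex_format_add_position_3d();            // first the position...
vertex_format_add_color();                  // ...then the colour, in that exact order
global.VERTEX_FORMAT = vertex_format_end(); // keep the format in a global for later use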

objSystem

This is the object that will control how things are going to be drawn in the future. For now it is a really simple object.

Add the create event for this object and add these lines:

Event Create for the objSystem
Event Create for the objSystem

The first line calls the scrInitSystem script and the second line creates the objTriangle. Nothing more.
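As a sketch, the Create event is just these two calls (I use instance_create_depth here; the original may create the instance on a layer instead):

scrInitSystem();                             // set up the vertex format
instance_create_depth(x, y, 0, objTriangle); // spawn the triangle object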

objTriangle

Finally, the star of this tutorial. The objTriangle is going to create the primitive and then send it to the shader, where it will be drawn.

Add the create event with the following content:

Event Create for the objTriangle
Event Create for the objTriangle

Lines 2 and 3 tell Game Maker that we are going to build a primitive. Note the use of global.VERTEX_FORMAT; this is the format we created in the script scrInitSystem. Remember that it defines an order we need to respect.

Lines 6-7, 10-11 and 14-15 create the geometry of the triangle. We will talk more about this in another post, but you can see that we always kept the same order: first the position, then the color.

The last line is to indicate that we are done building our object.
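Here is a sketch of that Create event, laid out so the line numbers referenced above still match; the coordinates and colours are placeholders (roughly sized for a default room):

// Start building a primitive that follows our vertex format
buffer = vertex_create_buffer();
vertex_begin(buffer, global.VERTEX_FORMAT);

// First vertex
vertex_position_3d(buffer, 100, 100, 0);
vertex_color(buffer, c_red, 1);

// Second vertex
vertex_position_3d(buffer, 540, 100, 0);
vertex_color(buffer, c_green, 1);

// Third vertex
vertex_position_3d(buffer, 320, 380, 0);
vertex_color(buffer, c_blue, 1);

vertex_end(buffer);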

Now, let’s move on to the draw event:

Draw Event for the objTriangle
Draw event for the objTriangle

First, on line 2, we set our custom shader to be used.

After that, on line 4, we submit our geometry (created in the create event of objTriangle) to the shader (don’t worry about pr_trianglelist and the -1; we’ll talk about those in the primitives tutorial).

Finally, we reset the shader so Game Maker goes back to its default one; this avoids conflicts with other things being drawn.
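A sketch of the Draw event, again keeping the line numbers mentioned above:

// Use our custom shader
shader_set(shd3D);
// Submit the geometry built in the Create event (no texture, hence the -1)
vertex_submit(buffer, pr_trianglelist, -1);
// Go back to Game Maker's default shader
shader_reset();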

Adding things to the Room

The last thing left to do is to add the instances to the room. You only need to add objSystem, since that object is the one that creates the triangle after the global variables are initialized.

Objects added to the room
Note that the only object added is the objSystem

And that’s it! If everything was added correctly, this is how your game should look when you run it:

The result of this tutorial
So awesome!

It may not look like much, but as I said, this is the most important step. The next tutorials will focus on specific topics: matrices, shaders, cameras, primitives, etc.

Congrats if you made it this far! If something didn’t work, you can ask me in the comments section, or you can download the source code for this and the other tutorials:

https://github.com/jucarave/tutorial3DGMS2

Continue now to part 2

Thanks for stopping by and have a nice day!

I’ve been following this new dawn of VR since John Carmack started working on Doom 3 VR with Oculus, but I never really saw VR as a gaming platform, mainly because of the control scheme. Several companies started creating peripherals to add an extra degree of immersion, but I never liked that approach much, because you have to buy a ton of stuff that isn’t guaranteed to work with all VR games.

I always saw VR as a platform for experiences, like watching a movie, visiting a place or even social sharing; but that said, I never truly tested VR until recently.

I got my hands on a really cheap VR headset for mobile (just two lenses in a plastic case) and tried some mobile games. While most of them weren’t very interesting, I did see some potential in the platform and this kind of game, and after watching some YouTubers play a few of them I found some issues with how these games are presented to people.

So I decided to give it a try:

Although I’m building it specifically for VR, I’m also adding multiple options, like playing the game with or without a gamepad, and in VR or not; if you are not playing in VR, you can also use touch controls, as in any other first-person mobile game. The most difficult thing right now is trying to balance the game across the multiple types of input, but I think it can be achieved.

There is not much to say about the game right now; I’m still dancing around some game concepts I have. There are several challenges in this project, since this is not only my first VR game but also my first game made in Unity, but so far it’s been a nice ride.

Greetings.

So, I’ve been busy the last month learning about other methods for rendering the voxels, especially on the GPU. I came across a method called raymarching, which uses signed distance functions (SDFs) to create scenes. It is really powerful and easy to write, and you can create shadows, reflections and other effects with a few lines of code:

In raymarching you use a traditional ray casting approach, only instead of advancing at a constant step you advance by the minimum distance to any solid in the whole scene:


Image by celarek.at

The main problem with this is that you need to check the distance to every object in the scene to determine how far you can move. This is fine for two or three objects, but not for a real (game) scene; you need a way to obtain just the objects that fall along the ray direction. You can achieve this in many ways, the simplest being a raycast algorithm (like this fast voxel traversal by Amanatides and Woo) over a grid to obtain the objects, and then performing raymarching to get the pixel and other properties of the object. This improves the rendering time and lets you have more complex scenes.
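The core loop is short enough to sketch in GLSL. sceneSDF() here is a made-up stand-in for whatever returns the distance to the nearest solid (a single sphere in this example):

float sceneSDF(vec3 p)
{
    return length(p - vec3(0.0, 0.0, 5.0)) - 1.0;  // signed distance to a sphere of radius 1
}

float raymarch(vec3 origin, vec3 dir)
{
    float t = 0.0;
    for (int i = 0; i < 64; i++)
    {
        float d = sceneSDF(origin + dir * t);  // distance to the closest solid from the current point
        if (d < 0.001) return t;               // close enough: we hit a surface
        t += d;                                // it is safe to advance by that whole distance
        if (t > 100.0) break;                  // the ray left the scene
    }
    return -1.0;  // no hit
}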

This works pretty well, but since I only need small voxels (cubes) and not the other SDFs, I wrote a small program in C++ to do raycasting on a voxel grid. This image contains a 64x64x64 grid, but it supports bigger scenes at 60+ fps:


So I’m not going to continue what I had in JS using greedy geometry merging (à la Minecraft); I will continue in C++ instead. I have some issues right now, the main one being that sending all the voxel data to the GPU is rather slow. I tried writing a software renderer, but with no luck so far due to speed issues, and I’m also reading about OpenCL and CUDA, though I’m not sure yet if I should use them. One thing I am sure of is that I need to migrate the data from a raw array of voxels to a sparse voxel octree.

Anyway, this is still the beginning of this project (and this is by far the hardest thing I’ve ever tried), so I’m feeling a little patient and happy for now 🙂

Greetings.

Progress is going to be a little slower for the next month because I’m taking a break. I’m still working on this whenever I have free time, but for the next month or two I’m going to be focused on other subjects.

This week I implemented frustum culling to avoid rendering parts of the world that are not visible. It tests against an octree, so it is really easy to determine that large areas are not going to be rendered.

I’ve also been working a bit more on optimizing the rendering algorithm; it is better now, but there is still work to do in that area.

I did a little research on how to generate terrain using the Perlin noise algorithm; this article on devmag.org was really useful on the subject. I already did a basic implementation, but I have to work on it more to achieve a good result:

I’m going to start working a little on the collision part, mostly allowing voxels to be created and removed with the mouse, and after that I will work on the lighting system.

Greetings.

This last week was a little slow for me; I couldn’t work on all the features I wanted, but I did manage to improve the greedy meshing algorithm I was using.

It works pretty fast now, even when removing multiple blocks across several chunks. One of the things I did was change the way I was doing the merge; in the previous version I had to render both sides of each triangle because of situations like this:

When two voxels shared the same face but the face needed to look in both directions at the same time, the solution was simply to split it into two faces:

With this done, I can now activate culling and render each triangle only once.

I also implemented octrees to distribute the chunks and be able to do culling (still pending):

I hope to finish the frustum culling this week and maybe do some implementation for occlusion culling.

Greetings.

I’ve been busy this last week trying to implement a greedy meshing algorithm based on the article by Mikola Lysenko on the 0FPS blog. It was a bit trickier than I expected (I still have to optimize it more), but the basic theory is pretty straightforward:

This allows me to merge multiple blocks and hide the triangles that are not visible.

So far I think it is working fine. It takes quite some time right now (about 50 ms) to regenerate a 16x16x16 chunk, which would be about 3 frames in a 60 fps application; I will try to reduce this number as much as I can.

Next thing on the list is trying to implement octrees to later perform culling of the chunks.

Greetings.

I’ve been busy the last couple of months learning a bit of pixel art and story writing, and most recently I started researching voxels and decided to give it a try myself:

I’m coding it in JavaScript using WebGL. It is more limited than if I did it in a native environment, but I want to see how far I can take this on the web; I’ve seen projects like voxel.js that have done a pretty good job using three.js.

For now there is not much to see; I’ve been working on this for a couple of weeks, laying down all the basic structures, chunk management and geometry reconstruction:

I’m not aiming to make a game with this, but I do want to publish it later as a library, along with several examples that I will build during development. I already have an example with a lot of voxels (around 2M) and it worked fine, but it still needs optimization in order to run properly:

That would be all for now, at least on the topic of voxels.