GLTF: Highway from 3D pipeline to engine

So, I’m developing a 3D engine. We want to load models, and I’ve heard great things about the GLTF format, but I’ve never used it myself. I know it can hold all sorts of things: meshes, bones, skinning, animations, morph targets, and even cameras! This would be my first attempt at using it, and I was hoping for a good time. The first couple of hours of exploring brought me to the gltf crate and a cool blogpost about loading GLTF models and even animating them in Rust. I’m using sdl2 and native opengl instead of wgpu, but the loading part would mostly be the same. After going through the blogpost, however, I didn’t quite want to handle it that way. It felt too ad hoc; I had thought GLTF was a standardized format, and I was almost ready to be disappointed that this was the best we had. Before getting to that point, though, I searched for the standard itself, and wow!

Without a doubt, this is some of the best, most dense, and most precise documentation I have ever seen. Click the image for a full-size PDF.

And that’s just the primer; here’s the full spec. Let’s go through what it’s actually like to use.

How I learned to stop struggling and let the format do it

The main exciting thing about GLTF is this: it’s a JSON (thus human readable) format that uses indexing to escape data nesting. There’s only one trick you need to learn here: to get anything, you need to go way up to the top of the document and index into a different array. Might sound vague, so here’s an example:

You’ve successfully found your scene. That part is top-level, and it says:

{ ..., scene: 0, ... }

This doesn’t mean that the scene is empty, but rather that to get the scene we are referencing here, we need to go to the scenes array and index it under 0.

There, we find something like:

{ 
    ...,

    scenes: [
        { nodes: [ 0, 1, 2 ] },
    ],

    ...
}

Scenes here are an array, and the 0th object contains an array of nodes. You can already guess: we need to find the top-level nodes array and index into it at those positions.

{
    ...,

    nodes: [
        { mesh: 0, ... },
        { mesh: 2, ... },
        { mesh: 7, ... },
    ],

    ...
}
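
In code, the walk we’ve made so far is just a few array lookups. Here’s a minimal sketch in plain Rust using serde_json, before bringing in any GLTF crate (the function name and the casual error handling are mine, not any library’s API):

use serde_json::Value;

// Walk from the default scene down to the mesh indices its nodes point at.
// Every hop is: read an index, go back to the top, index a sibling array.
fn default_scene_mesh_indices(doc: &Value) -> Vec<u64> {
    let scene_index = doc["scene"].as_u64().unwrap_or(0) as usize;
    let scene = &doc["scenes"][scene_index];

    scene["nodes"]
        .as_array()
        .into_iter()
        .flatten()
        .filter_map(|node_index| {
            // Each entry points into the top-level "nodes" array...
            let node = &doc["nodes"][node_index.as_u64()? as usize];
            // ...and a node may point into the top-level "meshes" array.
            node["mesh"].as_u64()
        })
        .collect()
}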

We now repeat this ad nauseam: we index the meshes array (whose primitives, in turn, point into the materials array), cache along the way if we want to, load things in, and so on. This is quite familiar, if uniquely flat. No biggie. A format like this guarantees we don’t need complicated code to read it: if we separate functionality at the array-indexing level, we’re good to go! The moment GLTF truly rocked my world is the following part: once you get to the bottom levels, where you can query information about vertices, normals, and textures, you don’t have to do what you would usually do. Let’s go over that really quickly, just to compare.

Usually, you’d create a structure and fill it with information from these separate buffers (like what you can see here). The information is taken from multiple buffers and collected into an array of structs, each holding a piece of every buffer. We can then use that array for interleaved rendering, defining the layout of our attributes ourselves (some words on that in my previous blogpost) and making sure it coordinates nicely with what we read from the source. So, we would have to follow these steps:

  1. Read GLTF and fetch all the way to the information about a single mesh and its vertex data
  2. Make an array of your own MyVertex structure that holds all the important info about the vertices (position, normals, textures, what have you)
  3. Go through the positions array and start filling your vertex structure’s position information
  4. Go through the normals array and start filling your vertex structure’s normal information
  5. Go through the texture coordinate array and start filling your vertex structure’s texture information
  6. Pack the output Vec<MyVertex>, calculating the offsets and strides, i.e. where every attribute will live (see the sketch after this list)
  7. Tell OpenGL about your attribute layout, preparing it to read your vector
  8. Send the vector to the GPU to be rendered
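
Concretely, steps 2 through 6 tend to look something like this sketch, with hypothetical input slices already pulled out of the file:

// Step 2: our own interleaved vertex layout.
#[repr(C)]
struct MyVertex {
    position: [f32; 3],
    normal: [f32; 3],
    tex_coords: [f32; 2],
}

// Steps 3 through 6: walk the three source arrays in lockstep and pack
// one MyVertex per vertex; the offsets and strides now follow from the
// struct layout itself.
fn interleave(
    positions: &[[f32; 3]],
    normals: &[[f32; 3]],
    tex_coords: &[[f32; 2]],
) -> Vec<MyVertex> {
    positions
        .iter()
        .zip(normals)
        .zip(tex_coords)
        .map(|((&position, &normal), &tex_coords)| MyVertex {
            position,
            normal,
            tex_coords,
        })
        .collect()
}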

Plenty of moments made me wonder whether we were all collectively missing something, namely steps 3 through 7 above. I already have an array of positions; can’t I just use that? I also have an array of normals and one of texture coordinates; can’t I just use those? Of course you could, but should you? Can you be sure there’s no extra info in there you don’t need? Are you sure you can trust a JSON format to be padded and tight? Above all else, is it repeatable: if I get 3 files exported in this format, will reading the arrays directly work for all of them, or will I have to do some manual padding and cutting? At that point, what Ryosuke did would make complete sense, but it would also mean we live in a slightly sadder world, with a format that is so half-assed.

Luckily, we don’t. When you get down to it, GLTF prepares data for you. It’s actually very careful to do so. Look at this:

This hierarchy defines the BLOB we want to read from (either by URI or directly as binary data to load as-is), the range we need to bind our buffers to, the offset and stride to apply, and, at the end, what kind of information this is and how our graphics library can read it. Steps 3, 4, 5 and 7 have already been done for us; they are in the file. Step 6 disappears on its own because it is no longer needed. We just need to rely on the standard and implement a reader. The GLTF documentation tells us so itself: look at the image above and read the text to the right. It’s meant to be like this. This is a peak we don’t often get to climb, and I’m going to cherish it.
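
In case you don’t have the poster in front of you, the hierarchy looks roughly like this in the JSON itself (numbers made up, same abbreviated style as the snippets above):

{
    ...,

    buffers: [
        { byteLength: 840, uri: "data:application/octet-stream;base64,..." },
    ],

    bufferViews: [
        { buffer: 0, byteOffset: 0, byteLength: 288, byteStride: 12 },
    ],

    accessors: [
        { bufferView: 0, byteOffset: 0, componentType: 5126, count: 24, type: "VEC3" },
    ],

    ...
}

That componentType: 5126 is GL-speak for FLOAT, and type: "VEC3" says each element is three of them: everything a loader needs to hand to the graphics API.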

I might have gotten too excited there and jumped over some details, like what a buffer view or an accessor is. Let’s take it from the top: we know the machine needs a byte array; that’s what a buffer is. A buffer view adds details related to a specific use of a buffer: byte offset, byte stride, things like that. A view is basically a specific look at the data from a single perspective, for example a view of the position data, or a view of the normals. Lastly, above the buffer view sits its accessor, which tells us how to access that view: data can be read in many ways, and if you have 16 bytes, they can be 2×8, 4×4, or 8×2. Furthermore, any 4 bytes can be an int or a float, or… something else! Accessors hold this kind of funk in them. We’ll make some helper functions to extract data from these layers, because we need a bit of everything. Along the way, we need some type-to-size converters, all in tune with the Khronos specification.


// Number of components per element, per the Khronos accessor type table
// (note MAT2 = 4 and MAT3 = 9 components, not their padded byte sizes).
fn accessor_to_type_and_size(ty: &AccessorType) -> i32 {
    match ty {
        mugltf::AccessorType::Scalar => 1,
        mugltf::AccessorType::Vec2 => 2,
        mugltf::AccessorType::Vec3 => 3,
        mugltf::AccessorType::Vec4 => 4,
        mugltf::AccessorType::Mat2 => 4,
        mugltf::AccessorType::Mat3 => 9,
        mugltf::AccessorType::Mat4 => 16,
    }
}

pub struct BufferViewPayload<'a> {
    buffer_view: &'a BufferView,
    range: Range<usize>,
    data: Vec<u8>,
}

fn load_buffer_view<'a>(
    asset: &'a GltfAsset<'_>,
    buffer_view_index: usize,
) -> Result<BufferViewPayload<'a>, String> {
    let buffer_view = asset
        .gltf
        .buffer_views
        .get(buffer_view_index)
        .ok_or("No such buffer view")?;
    let buffer = asset
        .gltf
        .buffers
        .get(buffer_view.buffer)
        .ok_or("No such buffer")?;
    let start = buffer_view.byte_offset;
    let end = start + buffer_view.byte_length;

    // Decode the buffer's (base64 data) URI into raw bytes.
    let bin = decode(buffer.uri).map_err(|_| "Couldn't read uri".to_string())?;

    let data = bin
        .get(start..end)
        .ok_or("Buffer view range out of bounds")?
        .to_vec();

    Ok(BufferViewPayload {
        buffer_view,
        range: start..end,
        data,
    })
}

pub struct AccessorPayload<'a> {
    buffer_view: &'a BufferView,
    range: Range<usize>,
    data: Vec<u8>,
    component_type: AccessorComponentType,
    count: usize,
}

fn get_gl_component_type(component_type: AccessorComponentType) -> u32 {
    match component_type {
        AccessorComponentType::Byte => gl::BYTE,
        AccessorComponentType::UnsignedByte => gl::UNSIGNED_BYTE,
        AccessorComponentType::Short => gl::SHORT,
        AccessorComponentType::UnsignedShort => gl::UNSIGNED_SHORT,
        AccessorComponentType::UnsignedInt => gl::UNSIGNED_INT,
        AccessorComponentType::Float => gl::FLOAT,
    }
}

fn load_accessor<'a>(
    asset: &'a GltfAsset<'_>,
    accessor_index: usize,
) -> Result<AccessorPayload<'a>, String> {
    let acc = asset
        .gltf
        .accessors
        .get(accessor_index)
        .ok_or("No such accessor")?;

    if let Some(buffer_view_index) = acc.buffer_view {
        let payload = load_buffer_view(asset, buffer_view_index)?;
        Ok(AccessorPayload {
            buffer_view: payload.buffer_view,
            range: payload.range,
            data: payload.data,
            component_type: acc.component_type,
            count: acc.count,
        })
    } else {
        Err("No buffer view index found".to_string())
    }
}
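
While we’re here, one more converter sketch of my own that pairs well with the two above: bytes per component, handy for sanity-checking that count × components × component size actually fits inside the view:

// Bytes per component, per the Khronos componentType table
// (5120..5126: byte, unsigned byte, short, unsigned short,
// unsigned int, float).
fn component_type_size(component_type: &AccessorComponentType) -> usize {
    match component_type {
        AccessorComponentType::Byte | AccessorComponentType::UnsignedByte => 1,
        AccessorComponentType::Short | AccessorComponentType::UnsignedShort => 2,
        AccessorComponentType::UnsignedInt | AccessorComponentType::Float => 4,
    }
}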

Building an OpenGL loader

I’m using the minimal mugltf crate to get this data into Rust, but you could do the same by reading the JSON directly. As you can see above, the crate is just a bit more handy, since the parts come already named and type-checked. It gives us the GltfAsset<'a> struct, which looks like this:

#[derive(Debug)]
#[repr(C)]
pub struct GltfAsset<'a, ImageData = (Vec<u8>, Extent2D)> {
    pub gltf: Gltf,
    pub bin: Cow<'a, [u8]>,
    pub buffers: Vec<Vec<u8>>,
    pub images: Vec<ImageData>,
}

All the indices live in the gltf part, and the data is in the bin and buffers parts. We’re going to build around this but also extract the parts we need – for starters, meshes and materials. So let’s make our storage:

#[derive(Debug)]
pub struct GltfStorage<'a> {
    pub storage: GltfAsset<'a>,
    pub meshes: HashMap<String, GltfMesh>,
    pub materials: HashMap<u32, GltfMaterial>,
}

Meshes consist of primitives, so we’ll just extract that:

#[derive(Debug)]
pub struct GltfMesh {
    pub primitives: Vec<GltfPrimitive>,
}

Primitives, for what it’s worth, are quite close to OpenGL:

#[derive(Debug)]
pub struct GltfPrimitive {
    pub vao: u32,
    pub ibo: Option<GltfIndexBuffer>,
    pub count: usize,
    pub vbos: HashMap<String, u32>,
    pub material: Option<u32>,
}
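
GltfIndexBuffer is a tiny struct of our own making; it just pairs the GL buffer id with the index component type that DrawElements will want at render time:

// The element buffer id plus the GL index type (gl::UNSIGNED_SHORT,
// gl::UNSIGNED_INT, ...) that DrawElements needs later.
#[derive(Debug, Clone, Copy)]
pub struct GltfIndexBuffer {
    pub ibo: u32,
    pub index_type: u32,
}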

Let’s get the dirty unsafe OpenGL things out of the way first, so that we don’t have to deal with them later. We need initializing, binding, and unbinding for VAOs (vertex array objects), VBOs (vertex buffer objects), and EBOs (element buffer objects).

  1. VBOs (Vertex Buffer Objects) are vertex buffers: literal arrays containing some vertex information. They can hold ALL of the information, or just a single attribute (position, normal, texture coordinate). They define the size, type, and layout of the data within themselves.
  2. VAOs (Vertex Array Objects) store information about a vertex. I’d honestly have called them Vertex Attribute Objects, because that’s what they do: they keep info on attributes and what’s bound to each of them. Once you bind a VAO, you can bind VBOs afterwards, and (this is very important) that by itself does nothing at all. You do need to do it, but a VBO doesn’t attach itself to the bound VAO until you tell OpenGL which attribute to bind it to. A VBO is bound to an attribute, so we need to point it at the right one.
  3. EBOs or IBOs (Element Buffer Objects or Index Buffer Objects) are similar to VBOs in that they hold information, but this time not direct vertex information; rather, they hold the indexing used to render more complex shapes (such as triangles). Similarly to VBOs, EBOs aren’t instantly connected to the VAO when they are bound; you need to use a specific function called VertexArrayElementBuffer to set this in motion. ChatGPT won’t tell you about this function if you ask it why the VAO didn’t bind the EBO, so you’ll think you’re the failure. It really isn’t you.

So why do we do all of this? VBOs and EBOs make sense: they hold the data and send it to the GPU. But VAOs? Why do we need to map things to VAOs when we could just tell the shader where our buffers are? Imagine having a complex model and no VAOs. You have 6 VBOs and an EBO, and every time you render, you need to tell every VBO which attribute to talk to. There’s your first problem: you need attributes, and thus a structure to hold them; that is basically a VAO already. But furthermore, imagine we had functions to do that instead of a structure. You’d have to make 7 calls (6 VBOs and 1 EBO) to bind the arrays and 7 more to map them to the attributes. That’s 14 lines to set up the model, and it’s easy to mess up. With VAOs, it’s just one: you bind the VAO, and it sets everything else up automagically.

With all that said and done, you understand what we need to do now. Here’s all of that in a single code listing:

fn new_vertex_array() -> GLuint {
    let mut vao: GLuint = 0;
    unsafe {
        gl::GenVertexArrays(1, &mut vao as *mut GLuint);
        gl::BindVertexArray(vao);
    }

    vao
}

fn bind_vertex_array(vao: u32) {
    unsafe { gl::BindVertexArray(vao); }
}

fn unbind_vertex_array() {
    unsafe { gl::BindVertexArray(0); }
}

fn new_array_buffer(data: &[u8]) -> gl::types::GLuint {
    let mut vbo: gl::types::GLuint = 0;

    unsafe {
        gl::GenBuffers(1, &mut vbo as *mut GLuint);
        gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
        gl::BufferData(
            gl::ARRAY_BUFFER,
            data.len() as isize,
            data.as_ptr() as *const gl::types::GLvoid,
            gl::STATIC_DRAW,
        );
    }

    vbo
}

fn unbind_array_buffer() {
    unsafe { gl::BindBuffer(gl::ARRAY_BUFFER, 0); }
}

fn new_element_buffer(data: &[u8]) -> gl::types::GLuint {
    let mut ebo: gl::types::GLuint = 0;

    unsafe {
        gl::GenBuffers(1, &mut ebo as *mut GLuint);
        gl::BindBuffer(gl::ELEMENT_ARRAY_BUFFER, ebo);
        gl::BufferData(
            gl::ELEMENT_ARRAY_BUFFER,
            data.len() as isize,
            data.as_ptr() as *const gl::types::GLvoid,
            gl::STATIC_DRAW,
        );
    }

    ebo
}

fn bind_element_buffer(ebo: u32) {
    unsafe { gl::BindBuffer(gl::ELEMENT_ARRAY_BUFFER, ebo); }
}

fn unbind_element_buffer() {
    unsafe { gl::BindBuffer(gl::ELEMENT_ARRAY_BUFFER, 0); }
}

Cool. Now that we have this done, we can start loading our storage:

pub fn load_gltf(name: &str) -> Result<GltfStorage, String> {
    let mut loader = mugltf::GltfResourceFileLoader::default();
    loader.set_path(".");

    let asset = smol::block_on(mugltf::GltfAsset::load(&loader, name, true))
        .map_err(|_| "Cannot load asset".to_string())?;

    let mut meshes = HashMap::new();
    // load meshes here

    let mut materials = HashMap::new();
    // load materials here

    Ok(GltfStorage {
        storage: asset,
        meshes,
        materials,
    })
}

We’re going to make load_* methods for each step in the hierarchy. We’ll start with looping through meshes:

    ...

    let mut meshes = HashMap::new();

    for scene in &asset.gltf.scenes {
        for node_index in &scene.nodes {
            if let Some((name, mesh)) = load_node(&asset, *node_index) {
                meshes.insert(name, mesh);
            }
        }
    }

    ...

We go through the scenes (usually there’ll be only one anyway), walk all the nodes in each, and call the load_node function on every one. Our load functions will all take the asset itself (this is our gateway to the top level), as well as the index we’re accessing.

fn load_node(asset: &GltfAsset<'_>, node_index: usize) -> Option<(String, GltfMesh)> {
    let node = asset.gltf.nodes.get(node_index)?;
    node.mesh
        .map(|mesh_index| (node.name.clone(), load_mesh(asset, mesh_index)))
}

Sometimes they’ll be this simple: we just access and go a level deeper. In this case, nodes contain an optional mesh, and for now we’re only interested in the nodes that have one. The load_mesh function will take care of the primitives the mesh is made of, constructing a VAO for each of them, followed by an optional index buffer (if the mesh uses indexing) and as many VBOs as we need. We don’t have to figure out how many that is: GLTF is very specific about it and tells us exactly what we have. We just follow the instructions embedded in the document!

I’m filtering out the debugging calls, but you should have them. More info in that previous post I linked up above.

fn load_mesh(asset: &GltfAsset<'_>, mesh_index: usize) -> GltfMesh {
    let mesh = asset.gltf.meshes.get(mesh_index).unwrap();
    let mut parts = vec![];

    for primitive in &mesh.primitives {
        let vao = new_vertex_array();

        let (count, index_type, ibo) =
            if let Some(indices_index) = primitive.indices {
                let accessor_payload =
                    load_accessor(asset, indices_index).unwrap();

                let index_type =
                    get_gl_component_type(accessor_payload.component_type);

                let ibo = new_element_buffer(&accessor_payload.data);
                // Attach the element buffer to the VAO explicitly;
                // binding alone does nothing (see the EBO notes above).
                unsafe { gl::VertexArrayElementBuffer(vao, ibo); }

                (accessor_payload.count, index_type, Some(ibo))
            } else {
                (0, 0, None)
            };

        let mut vbos = HashMap::new();
        for (attrib, accessor_index) in &primitive.attributes {
            // The attribute location has to match the shader layout,
            // so map the attribute's name instead of using a raw index.
            let Some(location) = attribute_location(attrib) else {
                continue;
            };

            let acc = asset.gltf.accessors.get(*accessor_index).unwrap();
            if let Ok(accessor_payload) = load_accessor(asset, *accessor_index) {
                let vbo = new_array_buffer(&accessor_payload.data);

                unsafe {
                    gl::EnableVertexAttribArray(location);
                    let size = accessor_to_type_and_size(&acc.ty);

                    gl::VertexAttribPointer(
                        location,
                        size,
                        get_gl_component_type(acc.component_type),
                        if acc.normalized { gl::TRUE } else { gl::FALSE },
                        accessor_payload.buffer_view.byte_stride as i32,
                        acc.byte_offset as *const _,
                    );
                }

                vbos.insert(attrib.clone(), vbo);
            }
        }

        unbind_vertex_array();

        parts.push(GltfPrimitive {
            vao,
            ibo: ibo.map(|ibo| GltfIndexBuffer { ibo, index_type }),
            count,
            vbos,
            material: primitive.material.map(|m| m as u32),
        });
    }

    GltfMesh { primitives: parts }
}

It’s a large function, but it’s still pretty clear. In very short form: we make a VAO per primitive, read and bind the EBO to it, then go through all the attributes and read and bind a VBO per attribute to the VAO. Then we unbind the VAO, attach some extra info we’ll need later (rendering needs the index count, so we pass that along), and return.
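
One detail worth pulling out of that listing: the location we hand to EnableVertexAttribArray and VertexAttribPointer has to line up with the layout(location = N) slots in our vertex shader, not with the accessor index, which is arbitrary from file to file. Here’s the small mapping helper used above; the attribute names come straight from the GLTF spec, and the 0/1/2 slots match the shader we’ll write later:

// Map a GLTF attribute semantic to the layout(location = N) slot our
// vertex shader expects; attributes we don't consume get skipped.
fn attribute_location(attrib: &str) -> Option<u32> {
    match attrib {
        "POSITION" => Some(0),
        "NORMAL" => Some(1),
        "TEXCOORD_0" => Some(2),
        _ => None,
    }
}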

Making it pretty

Back in the load_gltf function, we now turn to materials. It’s similar to what we did with meshes, except we’ll load all the materials, not just the ones the meshes use.
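
I haven’t shown GltfMaterial yet; its exact shape is up to you, but here’s a minimal sketch with just the fields the loop below fills in:

// Name, base color factor, and an optional GL texture id from load_image.
#[derive(Debug)]
pub struct GltfMaterial {
    pub name: String,
    pub color: [f32; 4],
    pub texture: Option<u32>,
}

impl GltfMaterial {
    pub fn new(name: &str) -> Self {
        GltfMaterial {
            name: name.to_string(),
            color: [1.0, 1.0, 1.0, 1.0], // the spec's default baseColorFactor
            texture: None,
        }
    }
}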

let mut materials = HashMap::new();

for (index, material) in asset.gltf.materials.iter().enumerate() {
    let pbr = material.pbr_metallic_roughness.as_ref().unwrap();

    let mut out_material = GltfMaterial::new(&material.name);
    out_material.color = pbr.base_color_factor;

    if let Some(texture_source) = pbr
        .base_color_texture
        .as_ref()
        .map(|tex| asset.gltf.textures.get(tex.index).unwrap().source.unwrap())
    {
        let image = asset.gltf.images.get(texture_source).unwrap();

        if !image.uri.is_empty() {
            // External image: load it straight from its file path.
            let img = stb_image::image::load(image.uri.as_str());
            match img {
                stb_image::image::LoadResult::Error(e) => error!("{:?}", e),
                stb_image::image::LoadResult::ImageU8(i) => {
                    let texture = load_image(
                        &i.data,
                        i.width as i32,
                        i.height as i32,
                        i.depth as i32,
                    );
                    out_material.texture = Some(texture);
                }
                stb_image::image::LoadResult::ImageF32(_) => {
                    warn!("ImageF32 -- we don't know how to load.")
                }
            }
        } else {
            // Embedded image: pull the bytes out through its buffer view.
            let buffer_view_payload =
                load_buffer_view(&asset, image.buffer_view.unwrap())?;
            let img =
                stb_image::image::load_from_memory(&buffer_view_payload.data);

            match img {
                stb_image::image::LoadResult::Error(e) => error!("{:?}", e),
                stb_image::image::LoadResult::ImageU8(i) => {
                    let texture = load_image(
                        &i.data,
                        i.width as i32,
                        i.height as i32,
                        i.depth as i32,
                    );
                    out_material.texture = Some(texture);
                }
                stb_image::image::LoadResult::ImageF32(_) => {
                    warn!("ImageF32 -- we don't know how to load.")
                }
            }
        }
    }

    materials.insert(index as u32, out_material);
}

We’re using the stb_image crate here; it’s very simple and doesn’t ask for much. This part of the function takes care of loading the images and then calls load_image, which is more on the OpenGL side of things and turns the image into a shader-friendly form:

fn load_image(data: &[u8], width: i32, height: i32, depth: i32) -> u32 {
    let mut texture_id: gl::types::GLuint = 0;
    unsafe {
        gl::GenTextures(1, &mut texture_id);
        gl::BindTexture(gl::TEXTURE_2D, texture_id);
        gl::TexParameteri(gl::TEXTURE_2D, gl::TEXTURE_MIN_FILTER, gl::LINEAR as i32);
        gl::TexParameteri(gl::TEXTURE_2D, gl::TEXTURE_MAG_FILTER, gl::LINEAR as i32);

        // depth is the channel count stb_image reports: 3 = RGB, 4 = RGBA.
        let format = if depth == 3 { gl::RGB } else { gl::RGBA };
        gl::TexImage2D(
            gl::TEXTURE_2D,
            0,
            format as i32,
            width,
            height,
            0,
            format,
            gl::UNSIGNED_BYTE,
            data.as_ptr() as *const GLvoid,
        );
        // Mipmaps can only be generated after the image data is uploaded.
        gl::GenerateMipmap(gl::TEXTURE_2D);
        gl::BindTexture(gl::TEXTURE_2D, 0);
    }

    texture_id
}

With that, the loading side is done. Now we need to instantiate a model from storage:

#[derive(Debug)]
pub struct GltfMeshInstance<'a> {
    pub storage: Weak<GltfStorage<'a>>,
    pub mesh_name: String,
}

We’ll keep a weak reference to the storage so we can get materials out of it later. For a weak reference to exist, the storage itself has to live in an Rc, so instantiation takes a reference to that Rc. Mesh instances come from the storage, so we’ll add that functionality there:

impl<'a> GltfStorage<'a> {
    pub fn instantiate(
        storage: &Rc<GltfStorage<'a>>,
        mesh_name: &str,
    ) -> Option<GltfMeshInstance<'a>> {
        let mesh_name = mesh_name.to_string();
        if storage.meshes.contains_key(&mesh_name) {
            return Some(GltfMeshInstance {
                storage: Rc::downgrade(storage),
                mesh_name,
            });
        }

        None
    }
}

Lastly, we make a render function on the mesh instance, and we’ll handle both rendering with indexing and without:

impl<'a> GltfMeshInstance<'a> {
    pub fn render(&self, shader: &Shader) {
        let storage = self.storage.upgrade().unwrap();
        let mesh = storage.meshes.get(&self.mesh_name).unwrap();

        for primitive in &mesh.primitives {
            bind_vertex_array(primitive.vao);

            if let Some(mat_index) = primitive.material {
                let mat = storage.materials.get(&mat_index).unwrap();
                set_uniform_f4(shader, "u_diffuse", mat.color);

                if let Some(texture) = mat.texture {
                    unsafe {
                        gl::ActiveTexture(gl::TEXTURE0);
                        gl::BindTexture(gl::TEXTURE_2D, texture);
                    }
                }
            }

            unsafe {
                if let Some(index_buffer) = primitive.ibo {
                    gl::DrawElements(
                        gl::TRIANGLES,
                        primitive.count as i32,
                        index_buffer.index_type,
                        std::ptr::null::<GLvoid>(),
                    );
                } else {
                    gl::DrawArrays(gl::TRIANGLES, 0, primitive.count as i32);
                }
            }

            if primitive.material.is_some() {
                unsafe {
                    gl::BindTexture(gl::TEXTURE_2D, 0);
                }
            }

            unbind_vertex_array();
        }
    }
}

In short: loop through the primitives, bind the VAO and texture, call a draw function, unbind the VAO, and that’s it. To get this working, we need a compatible shader. Since the vertex shader doesn’t do anything serious except pass along the position, normal, and texture coordinates, here’s the pixel shader and its simple lighting first:

#version 430 core

precision highp float;

out vec4 pixel_color;

in vec3 out_pos;
in vec3 out_normal;
in vec2 out_tex_coords;

uniform vec4 u_diffuse;
uniform sampler2D u_texture;

void main() {
    vec3 light_color = vec3(0.2, 0.2, 0.4);
    vec3 ambient = light_color;

    vec3 light = vec3(5.0, 10.0, 1.0);
    vec3 light_dir = normalize(light);
    float light_apply = max(dot(out_normal, light_dir), 0.0);

    vec3 tex_color = texture(u_texture, out_tex_coords).rgb;

    pixel_color = vec4(light_apply * (ambient + tex_color), 1.0);
}

If you don’t want realistic lighting, you can replace this with a very simple pixel_color = vec4(tex_color, 1.0); and be done with it! Just for completeness, here’s the vertex shader:

#version 430 core

layout(location = 0) in vec3 in_position;
layout(location = 1) in vec3 in_normal;
layout(location = 2) in vec2 in_tex_coords;

uniform mat4 MVP;

out vec3 out_pos;
out vec3 out_normal;
out vec2 out_tex_coords;

void main() {
    out_pos = in_position;
    out_normal = in_normal;
    out_tex_coords = in_tex_coords;
    gl_Position = MVP * vec4(in_position, 1.0);
}

Most of the things here just pass values along. The MVP matrix comes from outside: it’s the model-view-projection matrix we’ll construct in the main loop. Before we get to that, here are the uniform helpers we need to connect uniform values to the shaders:

pub fn set_uniform_f4(shader: &Shader, name: &str, value: [f32; 4]) {
    unsafe {
        let name = CString::new(name).unwrap();
        gl::Uniform4f(
            gl::GetUniformLocation(shader.program, name.as_ptr()),
            value[0],
            value[1],
            value[2],
            value[3],
        );
    }
}

pub fn set_uniform_mat4(shader: &Shader, name: &str, value: Matrix4<f32>) {
    let name = CString::new(name).unwrap();

    unsafe {
        let mvp_matrix_location = gl::GetUniformLocation(shader.program, name.as_ptr());

        gl::UniformMatrix4fv(
            mvp_matrix_location,
            1,
            gl::FALSE,
            value.as_ptr() as *const GLfloat,
        );
    }
}

Finishing up

Okay, so… what now? Let’s go and load something, and then render it!

let gltf = Rc::new(load_gltf("assets\\models\\cube.gltf").unwrap());
let mesh_instance = GltfStorage::instantiate(&gltf, "Cube").unwrap();

let basic = Shader::new("assets\\graphics\\shaders\\basic");

while running {

    basic.assign();

    let mvp_matrix = create_mvp_matrix(&window);
    set_uniform_mat4(&basic, "MVP", mvp_matrix);

    set_viewport(window.size());
    mesh_instance.render(&basic);

    window.gl_swap_window();
}

And let’s take a quick look at that create_mvp_matrix:

fn create_mvp_matrix(window: &Window) -> Matrix4<f32> {
    let model = Matrix4::identity();
    let view = Matrix4::look_at_rh(
        &Point3::new(10.0, 10.0, 10.0), // <-- eye position
        &Point3::new(0.0, 0.0, 0.0),    // <-- looking at
        &Vector3::new(0.0, 1.0, 0.0),   // <-- up vector
    );
    let projection = Matrix4::new_perspective(
        window.size().0 as f32 / window.size().1 as f32,
        45.0f32.to_radians(), // nalgebra wants the vertical FOV in radians
        1.0,
        100.0,
    );

    projection * view * model
}

And there we go! We have a loaded model, straight out of Blender, and it’s rendering without a hitch. The nice thing is that we can similarly export the whole Blender scene, together with the camera!

It might seem silly that we only have one cube rendered after all of this, but this now works for any model, and it’s clean, nice, reasonable code! Most importantly, you have a basis for extending from here: there are so many more things in GLTF that we don’t cover here.

Next up, I’ll be adding bones, skinning, animations, and morph targets, but the most important thing I want you to take away from this is that it’s not hard. It’s a lot of reading, as coding often is, but take the time to enjoy the wonderful things that people who have thought A LOT about these problems have prepared. GLTF is one of the best formats I have ever seen in use, and if you’re dealing with rendering, you should be aware of it! Until next time!

