If you've ever followed an OpenGL tutorial, you probably saw code like this:
float verts[] = {
    // Position      TexCoords
    -X, -Y, Z,       0.0f, 1.0f,
     X, -Y, Z,       1.0f, 1.0f,
     X,  Y, Z,       1.0f, 0.0f,
    -X,  Y, Z,       0.0f, 0.0f
};

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
So far not... terrible. The verts array might also carry color data or other per-vertex attributes, but it'll basically look like this. Later, we have to tell the graphics card to actually draw this data, informing our GLSL shaders how to read it:
GLint vertex_pos_attrib = glGetAttribLocation(program, "vertex_pos");
glEnableVertexAttribArray(vertex_pos_attrib);
glVertexAttribPointer(vertex_pos_attrib, 3, GL_FLOAT, GL_FALSE,
                      5 * sizeof(float), (void*)0);  // A

GLint tex_coord_attrib = glGetAttribLocation(program, "tex_coord");
glEnableVertexAttribArray(tex_coord_attrib);
glVertexAttribPointer(tex_coord_attrib, 2, GL_FLOAT, GL_FALSE,
                      5 * sizeof(float), (void*)(3 * sizeof(float)));
and this is where these tutorials seem to drop the ball. The call I marked with A tells the graphics card that the GLSL variable, vertex_pos, should be filled with floats, 3 components per vertex, read with a stride of 5 * sizeof(float) bytes between vertices, starting 0 bytes from the beginning of the vertex array buffer. The next call passes nearly identical information, but with 2 components, starting 3 * sizeof(float) bytes from the beginning. The extra (void*) cast just converts the offset to the type the API expects.
This code is brittle for too many reasons:
- If vertex information is added or removed, all of the pointer offset math has to change.
- The same goes for a change of datatype, say from an int32 to an int64.
- Forget the * sizeof(float) part and you may guess the size in bytes wrong when moving to other data types.
- The math gets more complicated when types of varying sizes are mixed.
Just use a struct
struct Vertex {
    GLfloat pos[3];
    GLfloat tex_coords[2];
};

Vertex verts[] = {
    //   Position     TexCoords
    {{-X, -Y, Z}, {0.0f, 1.0f}},
    {{ X, -Y, Z}, {1.0f, 1.0f}},
    {{ X,  Y, Z}, {1.0f, 0.0f}},
    {{-X,  Y, Z}, {0.0f, 0.0f}}
};
glVertexAttribPointer(tex_coord_attrib,
sizeof(Vertex::tex_coords) / sizeof(GLfloat), GL_FLOAT,
GL_FALSE,
sizeof(Vertex),
(void*)offsetof(Vertex, tex_coords));
Generalize
// Primary template, specialized once per GL component type.
template<typename T>
struct GlTraits;

template<>
struct GlTraits<GLfloat> {
    static constexpr auto glType = GL_FLOAT;
};

template<typename RealType, typename T, typename Mem>
inline void vertexAttribPointer(GLuint index, GLboolean normalized,
                                const Mem T::*mem) {
    // This is basically C's offsetof() macro generalized to member pointers.
    RealType* pointer = (RealType*)&(((T*)nullptr)->*mem);
    glVertexAttribPointer(index, sizeof(Mem) / sizeof(RealType),
                          GlTraits<RealType>::glType, normalized, sizeof(T),
                          pointer);
}
vertexAttribPointer<float>(tex_coord_attrib, GL_FALSE, &Vertex::tex_coords);
vertexAttribPointer<float>(vertex_pos_attrib, GL_FALSE, &Vertex::pos);