Monday, March 6, 2023

Nippon Ichi And the Love of Grind

The RPG-ification of AAA titles has been going on for so long that it’s almost strange to see the rare game where you aren’t collecting some resource to get stronger and improve one attribute or another. The motivation is obvious: it gives a sense of progression, expression, and replayability, and allows players to trade skill for time. It seems hard for AAA studios to release games that are difficult, because challenge could dissuade players who are looking for a cheap power fantasy but don’t have the time to invest in mastering the games they play. Publishers want the widest possible audience, so they want to know that players can continually make progress and enjoy the game, not get hung up on sudden difficulty spikes. Can’t get past a boss, or is this mission too hard? A short little grind should suffice.


Time, being a player’s most valuable resource, should not be taken for granted, and I think most developers are aware of this. This is why enemy difficulty often scales with player level rather than becoming a series of DPS checks, why games like The Division try to fill the map with various EXP-awarding activities and new guns, and why games like God of War carefully control the player’s power level at every step of the journey. “Grinding is just bad game design,” you might read on forums and social media. “Good” games use their RPG mechanics to give you as much power as you need, when you need it.


This does not appear to be how Nippon Ichi sees the world, but I have to take a slight detour for those unfamiliar with their works. Readers familiar with them may skip the following section.

Incredible Power Levels

Disgaea 6 had a trailer that introduced a new level cap. RPGs rarely go above 100, but this cap was 99,999,999. If you look at the experience calculations for a Disgaea game, you’ll find that the first 100 levels use a standard exponentially-increasing EXP requirement for each level, but then they switch to a different expression, and then another later on. Most RPGs have to cap levels so low because it would become exorbitantly expensive to continue leveling up. Even if there is no cap that players might realistically reach, the content must end before this happens.


But Disgaea doesn’t stop there: it has reincarnation. Much of the end-game of a Disgaea entry is spent reaching max level. Then you reincarnate. This reduces your unit’s level back to one, but retains some of the stats from its previous life. You max its level out again, and then reincarnate again.


This is all, by the way, required for post-game content. The final, final boss is always Baal, an incredibly difficult encounter where only the strongest teams can even make it past the first turn of combat. And it takes, as you’ve probably guessed, a lot of grinding. Grinding to get to max level, reincarnating, and grinding some more.


Reincarnation and grinding are essential features in most of the Nippon Ichi games I’ve played, including Disgaea, Makai Kingdom, and Labyrinth of Refrain and its sequel. It’s just a company that loves the grind.

Grind Is Good

On its surface, Nippon Ichi’s philosophy seems a step in the wrong direction, but the popularity of their franchises speaks differently. Players of most RPGs will talk about the story, characters, and world, perhaps viewing combat as a way to space out the more interesting elements of the game, but players of Nippon Ichi games will talk about the gameplay and achievable power levels, saying “the game doesn’t really start until the post-game.” They’ll talk about ways of gaining thousands of levels’ worth of experience very quickly after a reincarnation, how to create OP builds, how to use the game’s built-in cheat functions to gain money and experience quickly, and so on.


Grind in a Nippon Ichi game isn’t the activity done between story beats; it is the game. Their stories aren’t bad, generally, and are full of humor, but they’re not the main attraction.


They accomplish this feat in two ways:

  1. Unparalleled depth.

  2. Seemingly endless content.

Depth: It’s Prinnies All The Way Down.

One of the first concepts players are introduced to in Disgaea is throwing units around. A unit can only move a number of tiles according to its movement stat, but a unit can pick another up and throw it, and then the thrown unit can move much further. But Prinnies are special. Each unit has an “evility” that might trigger at specific moments or add stats, and the Prinny evility is to explode when thrown, dealing damage proportional to its health.


Nippon Ichi has a knack for introducing concepts and immediately turning them on their heads. In Labyrinth of Refrain, you learn that to unlock doors, you need keys. Simple enough. But you can also break many, often most, of the walls in the dungeon, subverting the key/door dynamic. In Phantom Brave, you can pick rocks, bushes, and all sorts of stuff up off the ground and use them to defeat enemies, but you can also carry them back to base and materialize new units from them. In Disgaea, you’ll want to constantly be equipping better gear, but each item also has a whole “item world” to explore, and the item’s stats improve the deeper you go.


This depth allows Nippon Ichi games to sometimes be puzzles, where you have to find out-of-the-box solutions to strategy problems, and sometimes be joyous grindfests, where you figure out the most efficient way to increase in power to get past a challenge. Most of the time, the player gets to decide which way they want to handle the game. When I first played Labyrinth of Refrain, I had to think very carefully about my team compositions, but in playing its sequel, I better understood how to use the stockpile system to stay overleveled throughout much of my run.


There’s a particular moment in Disgaea 5 I keep coming back to in my mind. I was on a map with a bridge over a river of poisoned water. Units take damage for each turn they spend in the river, so obviously the bridge is the way to go, but the enemies on the bridge were a bit too powerful for my team. Then it occurred to me that I could pick them up and throw them off the bridge, letting them take poison damage while my team waited out the battle and cleaned up the remainders. It’s the kind of thing I could see players going onto forums and saying “cheese discovered in level, trivializes challenge, please fix,” but at the same time, the feeling of discovery in that moment was strong enough that I remember it so many years later, and the game wouldn’t feel so special to me were it not for the cheese.

More and More Content

Disgaea 5 was the first Nippon Ichi game I actually bothered to finish. I, like many RPG players, often lose motivation towards the end, and the massive amount of time these games can take makes completing them difficult. It can mean not playing new releases from other studios, or simply require more free time than we have.


So I saw the ending credits and jumped right back in to see this post-game that people were always talking about. Imagine my surprise to find it wasn’t just bonus challenge levels: the story itself kept going, and new mechanics were still being introduced. There’s less content, the intervals between story beats are longer, and it’s far more grindy, but where the post-game of most games can feel like clean-up work, getting the last few collectibles and viewing scenes you might have missed, the post-game of a Nippon Ichi game is still just the game.


To many, that may seem daunting. RPGs are already hard to invest in because of the time commitment, and the idea that one might not be able to finish makes starting seem pointless. It’s an understandable perspective, but these games really push the “the journey is the destination” angle.


I have heard game writers in GDC talks lament that only a fraction of the people who play a game they worked on will see the story conclude, and explain how they write with this in mind. But as a designer, the idea of players enjoying a game I worked on for the pure joy of its mechanical interaction, or the aesthetic pleasure of interfacing with the controls, is what motivates me. I made a toy box; it’s there whenever you want to pick it up, and I hope that it never stops offering good experiences.


Perhaps that’s one of the reasons devs are enjoying making roguelites so much. They offer potentially endless experiences without the need to entice players with intricate stories or to design finite galleries of content; they can randomly generate content infinitely, and players can keep coming back to the same game and frequently find the experience at least a little different from before.


Well, before roguelites were much of a thing, there were endless dungeons, and Nippon Ichi found a good formula for them.


Conclusions

Nippon Ichi’s philosophy has been a personal inspiration, and I adore their games; I love the grind. Players are often right to criticize games for being overly grind-focused when they trap players in repetitive and unfulfilling loops, becoming more work-like than play-like. That criticism probably still stands against Nippon Ichi’s titles, but the difference is in how their titles are built to continue rewarding players for the act of playing itself. When the core thrust of player motivation is a steady IV drip of story content, an unfulfilling core gameplay only becomes more apparent when the story slows down. But when the gameplay itself is what one draws pleasure from, none of that matters.


Embrace the grind. Love the grind.


Wednesday, July 27, 2022

How Vampire Survivors Made Me Rethink The Concept of the "Core Gameplay Loop"

I first saw Vampire Survivors demoed on Northernlion’s YouTube channel, and it struck me as “just another game”. The gameplay looked simplistic, the art looked rough, and the concept didn’t seem like a very big innovation. I didn’t really look at it with any seriousness until I saw “20 Minutes Till Dawn” and “Spirit Hunters: Infinite Horde”. That’s when it occurred to me: this is kind of a big deal. Vampire Survivors only hit the internet in late 2021, and already it has many clones, often by relatively new studios, sometimes even veteran studios. These studios either decided that this should be a trend and would be so big that it was worth months of development to try and ship a competitor, or were themselves so infatuated with Survivors’ concept that they needed to try their own ideas.


Point is, this game is a pretty big deal, and I don’t think I’m alone in having initially thought it didn’t look like much. Its core gameplay loop seems hilariously simplistic: kill enemies to get gems, gain experience, level up, and kill more enemies. One could probably describe very well-respected roguelites like Slay the Spire and Rogue Legacy the same way, but I think many would also point out the boss battles, the lore, and how skill-based those games feel. I want to put a pin in the skill part for now and focus on bosses and lore, which are not necessarily part of the core loop, but which I think are key to understanding Vampire Survivors.


It took me quite a while to understand Vampire Survivors on a level deeper than “some people find ‘bullet heaven’ fun”, and that led me to question the utility of thinking of core gameplay loops as simple loops.


I usually hear this discussed in terms of micro and macro: micro being what you’re doing now, and macro being what will happen soon. For Vampire Survivors, the micro would be the loop I described above, but the macro is a bit more interesting. When you go into a stage, you might have a goal such as “explore a specific part of the map” or “collect gold” (the meta-progression resource). You might be completing a challenge which will unlock some new content upon completion, such as “evolving” a weapon (upgrading a weapon to its max level while holding a specific item in your inventory). To accomplish this goal, you might have a specific build in mind. But just having goals or higher-level objectives doesn’t seem like enough for a game to be special. What’s the secret? It takes a moment to explain…


At the start of the game, you have just one weapon, and every time you level up, you might gain an additional weapon which changes your build, get an item which enhances the build, or upgrade a weapon which scales your damage. As time goes on, waves get stronger, and you need these upgrades for your damage output to keep up with the enemy spawn rate, which is also increasing over time, or at least to be able to outrun the horde. On a good run, at some point, you will have outpaced the spawn rate and just mow enemies down, in what is often referred to as “bullet heaven”, and from then on the player is unlikely to lose until the timer expires and the reaper comes out. Even from a very simple mechanical base, Vampire Survivors is able to evoke a number of different emotions in the player at different stages of a run. It creates different feelings across each stage, which makes the stages feel more compelling, less monotonous.


This is what I really missed when I looked at Vampire Survivors before, and what shows me the flaw of viewing a core gameplay loop as just a loop. You could look at it as a spiral, too. As the game continues, the loop of kill -> collect -> level -> kill changes and is recontextualized over time. At first, you are weak, but enemies are infrequent, and collecting EXP is slow. As you gain another weapon, your ability to chase down enemies increases, and you can gain EXP a little faster. Eventually the enemies start to come in faster than you can kill them, and there’s a push and pull as you need to run away to not get killed, but also need to circle around to collect the EXP they drop. After a few more level ups, you get a weapon evolved, become overpowered, and nothing can touch you. Leveling up goes from gathering a few extra weapons so you don’t get outpaced, to optimizing your damage output as quickly as possible and setting up evolutions, to maxing out your weapons, to finalizing the build, and finally to just cleaning up whatever upgrades are left to collect. Every part of the loop changes as you progress through the game, and so after going a few cycles through it, you find yourself not actually in the same place as where you started.


I’ll refer to this as the “core gameplay spiral”: the way that iterations around a core gameplay loop alter its dynamics as the game changes over time, or just generally progresses. Breaking the ideas of the core loop, progression systems, and time apart from each other makes it harder to understand how they relate, and treating the core gameplay loop as static especially discourages understanding how it evolves over time. The core actions you’re doing never change, but how they feel does change, as do the dynamics they create. I further posit that games will probably, generally, be more interesting when their core loop is not static, and this extends to meta-progression, too.


The one backtrack I have to make is that the “core gameplay loop” is typically only a mechanics-deep analysis that does not consider extraneous elements like dynamics or progression. So maybe “core spiral” isn’t a great name, because it suggests that the spiral is just the core loop unraveled. It is not; they are separate concepts for different layers of abstraction.


Earlier, I briefly mentioned that one aspect of Vampire Survivors that might lead some to disregard it is that it appears to have a low skill floor; that is, the minimum amount of skill required to be proficient at the game is low. Even the skill ceiling doesn’t seem particularly high. But I hope it’s clear that this doesn’t actually matter very much, since the progression of the core gameplay loop more than makes up for it. Both “Spirit Hunters: Infinite Horde” and “20 Minutes Till Dawn” add much more skill-based elements with their faster gameplay, “20 Minutes” even requiring that you manually fire your gun, yet I feel they just miss the point. Their core loops are static in comparison to Vampire Survivors. In “Spirit Hunters”, the stats of weapons can change, but the weapons themselves change very little, they don’t evolve, and only the occasional pet will give global buffs to your hero. Every round has largely the same goal: collect gems (the meta-progression currency) to unlock more on the web (essentially a very large skill tree). “20 Minutes” has more characters and weapons to unlock, and you can freely mix and match them, but there is less to unlock and it all unlocks fairly quickly; and since you manually control the weapon and it always fires bullets from the same point, there are just fewer dramatic changes to the core loop throughout each run. They are both good games, and some may prefer one or both over Vampire Survivors, but in my opinion they aren’t quite at the same level.


The insight I have gained from this analysis is that when I’m thinking about my own designs, I want to think primarily about this core gameplay spiral: one with a simple loop at its base that expands as the player spends time in the game and that creates different dynamics at different points along the spiral. What I want to avoid is thinking of the gameplay loop, micros and macros, and progression systems as parallel entities. As a practical example, while working on a deckbuilder roguelite, I asked myself “why does picking the second card feel different from picking the first?” and realized it did not. I had the progression systems, the loops, and all the features I thought made a good game, but I didn’t have decisions properly compounding on each other, and thinking about how the loop should change over time pushed me to make some significant changes to my mechanics, hopefully for the better. And so I hope that this analysis might similarly help others see their own projects in a different way, ask the right questions, and find better solutions.


Monday, May 2, 2022

Game Design as a Dialogue

Media criticism is one of my favorite uses of YouTube, and somehow the platform unearthed a really great set of critics to choose from, at least for movies. There are Lindsay Ellis, Folding Ideas, Filmento, CJ the X, etc., but in games we have far fewer. GMTK is the current gold standard; then there are Adam Millard, Design Doc, Dunkey, Noah Caldwell-Gervais, Psych of Play, and a few others strewn here and there. Comparing channels focusing on game design or analysis against those covering movies and books, I always felt something was missing.


For a case study in good media criticism, I want to look at Lindsay Ellis’s video essay, “That Time Disney Remade Beauty and the Beast”. The essay discusses the history of Beauty and the Beast, differences between its adaptations, social context, political context, and how it fits within the Disney brand and how that impacted the motivations of the film, and thus the creative choices in its development. Then, with all this context, when we arrive at the core issues Ellis takes with the plot, we not only gain a greater understanding of how the creative decisions affected the piece as a whole, but also of how those decisions form part of a dialogue that extends beyond the movie itself. This grants us new insights on the film and the culture that produced it.


It’s this dialogue that I feel is most absent from video game criticism and analysis. Many channels discussing the design of a specific topic in games, like jumping physics, will happily discuss all the different ways jump physics may be implemented and how that affects the way the different games play. Much rarer is a discussion of the dialogue between different designers, and between game makers and players of different games, and of how and why they chose different jump physics. The kind of analysis seen most often serves as more of a collection of useful ideas, and it’s left up to the watcher to pick among the options they like. While this kind of content is unambiguously good and we could use more of it, we also deserve deeper analysis that engages with the larger dialogue, not just describes its existence. The problem is that without engaging in this dialogue, the viewer does not gain insights they could not have otherwise gained by just being aware of the games being discussed.


As an example, I’d like to use Design Doc’s “How Do You Improve Turn Based Combat?”. This video serves as a good 101 course in understanding what kinds of turn-based systems have been experimented with and the various ideas within them. DD does a good job of talking about how the systems work, how they affect the player, and what components go into turn-based combat design. Again, this is unambiguously good content, but it doesn’t contain entirely novel ideas that add to the discourse, nor does it do much to explain why one game might have chosen one system over another.


For contrast, Hbomberguy’s (Harris Michael Brewis) more focused video, “Bloodborne Is Genius, And Here’s Why”, is one of the most influential discussions of game design for me, personally. The central bit that always stuck out was his story of a friend of a friend who didn’t so much get the Dark Souls games, but perhaps learned to play them differently after Bloodborne conditioned them to approach the series in a new way. This led to them having more fun. Through this story, we gain some insight that the way we play can affect our enjoyment of a game, and that how a game is structured can impact how we play. To get to this point, Brewis discusses the history of Dark Souls, how it relates to other games, and his personal motivations and ideas about games, referencing other works and discussions of game design. Importantly, one point I really don’t want to overlook is how Brewis includes the player in the dialogue by referencing how other players respond to Bloodborne and relate to it, rather than speaking entirely from his singular perspective.


To add my own critique to the pool, I want to talk a bit about dodge/parry canceling and why it’s a very important, but frequently omitted, feature of action games, and of Dark Souls (DS) in particular. DS is so synonymous with “difficult” that difficulty is often considered its defining feature, and almost any difficult game is inevitably compared to it. However, Hidetaka Miyazaki allegedly didn’t set out to make a hard game, telling metro.co.uk in an interview:


“I personally want my games to be described as satisfying rather than difficult. As a matter of fact, I am aiming at giving players sense of accomplishment in the use of difficulty.”


As Noah Caldwell-Gervais points out in his video, “I Beat the Dark Souls Trilogy and All I Made Was This Lousy Video Essay”, the series is more inviting to players of all skill ranges than the “git gud” crowd seems to think.


Since DS is the game synonymous with “difficult”, other difficult games seem to have taken some of their design ideas from it. The one I find most puzzling is the omission of dodge and parry canceling, or even the omission of parrying altogether; not because DS doesn’t have parrying, but because parrying is underutilized there.


In DS, attacking involves patience, planning, spacing, and strategy, because starting an attack locks you into its animation, which can only be canceled after its active frames have completed. This gives DS a slower, more deliberate pace, in contrast to the twitch-reflex feel of more action-oriented games.


For twitch-based action games, dodge canceling encourages more aggressive behavior from the player, since they can always hit the “get out of danger” button at any time. To incentivise more skilled, less risk-averse play, games like Bayonetta reward the player specifically for dodging at just the right time with some slow-mo in which the player can deal significant damage before the enemy can respond. Ys VIII gives slow-mo for perfect dodges and temporary invincibility for perfect parries, and allows players to potentially have both buffs at once. A well-timed dodge feels good any day of the week, but many designers seem to have wanted to embellish this feeling with extra mechanical satisfaction.


When FromSoftware made Sekiro after DS3, dodge and parry canceling were indeed features, but the game had a different aesthetic. DS featured a dark, sinister, and cruel world, where punishing the player for impatient and poorly-timed attacks added to this dark aesthetic. You are a nobody; a simple, clumsy soldier facing foes far above your weight class. While Sekiro features a similarly dark world, you play as a named protagonist with a refined fighting style, using stealth when it suits you and picking when and how to attack. DS wants you to feel powerless; Sekiro wants you to feel powerful, and so it gives you a dodge cancel, even granting a high-damage, cinematic counterattack for a perfect parry.


Yet many contemporary action games, such as Death’s Door and Sifu, are missing this. Why? Death’s Door in particular has combo strings where every attack in the string uses the same animation, the string is of an arbitrary length determined by the weapon’s upgrade level, and attacks cannot be canceled out of. The most I ever heard about why the combat system was designed the way it was is a snippet from a Noclip documentary where one of the team said that for this kind of game, you want faster, more instant attacks compared to some other games. Was dodge cancellation discussed and omitted for specific reasons? Would the game be better or worse with it? Is this a holdover from Dark Souls’ legacy, part of some anti-dodge-canceling movement in game design, or an isolated decision?


Now, I hope it doesn’t sound like I’m ragging on Death’s Door’s omission of dodge canceling for no reason, but I do think it’s an interesting example of a gap in the discourse surrounding game design. When I hear people discuss the game, they will often reference the story, funny writing, art, music, creative bosses, and presentation while spending little time discussing the combat, despite players spending a large portion of the game immersed in battles. And that’s kind of interesting in its own right. Perhaps the combat is as good as it needs to be, but not really a central focus of the player’s enjoyment, or perhaps the other elements of the game are just that good. That in itself gives us some ideas about what might be most important to players and how they view games. But designers spend a lot of time deliberating over small decisions like what features to include, what are must-haves and what might be stretch goals, and trying to figure out if they matter, and if so, why.


Personally, I think the absence of dodge canceling in Death’s Door is interesting both because it follows a trend in contemporary action games and because it might suggest the game’s roots in 2D Zelda games. Sifu references older beat ’em ups while feeling very modern (despite also feeling like the PS2’s God Hand). Both games fit into modern gaming as having retro inspiration with modern twists, asking us to look back at gaming’s past to see if there’s anything we’ve missed before we continue forward.


This brings me to why I decided to write this piece. A dialogue exists between designers, and between designers and players, but it is all too often not spoken. Someone makes a game; another developer sees it and is inspired to make their own, borrowing or stealing ideas, adding new ones, and over time the craft matures. Players develop an aesthetic sense of pleasurable and unpleasurable experiences; they discuss them on social media; they buy games or they don’t. This dialogue permeates all of gaming culture and the games industry, influencing what game makers make and what gamers play. Yet when we discuss individual game design decisions, we often discuss them in a vacuum rather than as part of a complex web of desires, motivations, aspirations, and ideas that play off each other. Every decision a games studio makes, from the color of a health bar to the nuances of the mechanics, is part of a story that extends back to Pong and further, and that will continue being written as long as people make games.


So, when looking at a game and appreciating that it is fun, or has generally high production values, or is just interesting in some way, I would encourage the reader to think about that game as part of this greater dialogue. What does it say about how games should work? What does it add to the discussion? What may be left out? What were the intentions of the designers, and what kind of experience did they want players to have?

Friday, August 7, 2020

OpenGL tutorials can do better than raw pointer math.

 If you've ever followed an OpenGL tutorial, you probably saw code like this:

  float verts[] = {
    // Position   TexCoords
    -X, -Y, Z,    0.0f, 1.0f,
     X, -Y, Z,    1.0f, 1.0f,
     X,  Y, Z,    1.0f, 0.0f,
    -X,  Y, Z,    0.0f, 0.0f
  };

  GLuint vbo;
  glGenBuffers(1, &vbo);
  glBindBuffer(GL_ARRAY_BUFFER, vbo);
  glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

So far not... terrible. The verts array may contain additional color information or random other things, but it'll basically look like this. Later, we have to tell the graphics card to actually draw this data, informing our GLSL shaders how to read it:

  // program is the linked shader program; attributes are looked up with
  // glGetAttribLocation (glGetUniformLocation is for uniforms).
  GLint vertex_pos_attrib = glGetAttribLocation(program, "vertex_pos");
  glEnableVertexAttribArray(vertex_pos_attrib);
  glVertexAttribPointer(vertex_pos_attrib, 3, GL_FLOAT,
                        GL_FALSE, 5 * sizeof(float), (void*)0); // A
  GLint tex_coord_attrib = glGetAttribLocation(program, "tex_coord");
  glEnableVertexAttribArray(tex_coord_attrib);
  glVertexAttribPointer(tex_coord_attrib, 2, GL_FLOAT,
                        GL_FALSE, 5 * sizeof(float), (void*)(3 * sizeof(float)));

and this is where these tutorials seem to drop the ball. The call I marked with A is telling the graphics card that the GLSL variable vertex_pos should be filled with 3 float elements per vertex, read with a stride of 5 * sizeof(float) bytes between vertices, starting 0 bytes from the beginning of the vertex array buffer. The next call passes in nearly identical information, but with 2 elements, starting 3 * sizeof(float) bytes from the beginning. The extra (void*) cast is just converting the argument to the expected type.

This code is brittle for just too many reasons.

  1. If vertex information is added or removed, all the pointer offset math has to change.
  2. The same goes for if the datatype changes, like from an int32 to int64.
  3. If one forgets the * sizeof(float) part, there could be trouble when moving to other data types as one might guess the size in bytes wrong.
  4. The math gets more complicated if types of varying sizes are used.

Just use a struct

  struct Vertex {
    GLfloat pos[3];
    GLfloat tex_coords[2];
  };

  Vertex verts[] = {
    // Position     TexCoords
    {{-X, -Y, Z},  {0.0f, 1.0f}},
    {{ X, -Y, Z},  {1.0f, 1.0f}},
    {{ X,  Y, Z},  {1.0f, 0.0f}},
    {{-X,  Y, Z},  {0.0f, 0.0f}}
  };

Now our data is more organized. We can share information about the binary layout of our vertices with other parts of our program, write functions for generating points, constructors, and all sorts of good stuff. We can even use other, less basic data types, in theory. My code uses glm::vec3s here, for example.
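
For instance, here’s a sketch of a glm-flavored variant (assuming glm’s tightly-packed vector types; X, Y, and Z are the same placeholders as before, and make_corner is just a hypothetical helper, not from any tutorial):

  // Assuming #include <glm/glm.hpp>.
  struct Vertex {
    glm::vec3 pos;
    glm::vec2 tex_coords;
  };

  // Generate one corner of the quad from the data above.
  Vertex make_corner(bool right, bool top) {
    return Vertex{{right ? X : -X, top ? Y : -Y, Z},
                  {right ? 1.0f : 0.0f, top ? 0.0f : 1.0f}};
  }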

We need the corresponding glVertexAttribPointer calls to change a bit, too. I’ve seen a number of attempts at this, sometimes with the idea that one can take a member pointer, dereference it at null in order to get the offset, and then convert that back into a pointer. Or one could just use offsetof(). Finally, we can achieve something like this:

glVertexAttribPointer(tex_coord_attrib, 
                      sizeof(Vertex::tex_coords) / sizeof(GLfloat), GL_FLOAT, 
                      GL_FALSE,
                      sizeof(Vertex),
                      (void*)offsetof(Vertex, tex_coords));

and we’re pretty close to something approaching ideal. It’s unfortunate that we have to know that Vertex::tex_coords is made up of GLfloats, though. It could be advisable to define a global constexpr unsigned int D = 3, used both in the Vertex definition instead of a hardcoded array size and here instead of the sizeof() math, but the gains are marginal since we still have to pass in GL_FLOAT as the next parameter.
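
A minimal sketch of that idea, applied to the position attribute (the name D is mine, purely illustrative):

  constexpr unsigned int D = 3;  // Components per position.

  struct Vertex {
    GLfloat pos[D];
    GLfloat tex_coords[2];
  };

  glVertexAttribPointer(vertex_pos_attrib, D, GL_FLOAT, GL_FALSE,
                        sizeof(Vertex), (void*)offsetof(Vertex, pos));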

Still, to improve further...

Generalize

This is the actual code I use:

template<typename T>
struct GlTraits;  // Primary template; specialized for each OpenGL type.

template<>
struct GlTraits<GLfloat> {
  static constexpr auto glType = GL_FLOAT;
};

template<typename RealType, typename T, typename Mem>
inline void vertexAttribPointer(GLuint index, GLboolean normalized,
                                const Mem T::*mem) {
  // This is basically C's offsetof() macro generalized to member pointers.
  RealType* pointer = (RealType*)&(((T*)nullptr)->*mem);
  glVertexAttribPointer(index, sizeof(Mem) / sizeof(RealType),
                        GlTraits<RealType>::glType, normalized, sizeof(T),
                        pointer);
}

Mem here is often going to be a glm::vec3 or something similar, with T being my vertex class. Unfortunately, RealType needs to be passed in as an explicit type, since neither Mem nor T suggests what the OpenGL type actually is. Theoretically, if this library knew what a glm::vec3 was, it could also deduce the real type from the parameter, or one could specialize GlTraits for it and include a typedef or using statement that gave the “real” type.
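
For example, a hypothetical specialization along those lines (assuming glm; this isn’t part of the code above):

template<>
struct GlTraits<glm::vec3> {
  using RealType = GLfloat;  // The underlying OpenGL element type.
  static constexpr auto glType = GL_FLOAT;
};

// vertexAttribPointer could then default RealType to
// typename GlTraits<Mem>::RealType when the caller doesn't specify it.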

I use the function like this:

  vertexAttribPointer<float>(tex_coord_attrib, GL_FALSE, &Vertex::tex_coord);
  vertexAttribPointer<float>(vertex_pos_attrib, GL_FALSE, &Vertex::pos);

Simple!

Conclusion

Many OpenGL tutorials are suited for younger, less experienced programmers, and so--giving their authors some credit--writers may not want to confuse readers with language features their readers haven’t seen before, like offsetof(). But I think this does a disservice to beginners, as these tutorials are also authoritative on what good, well-written OpenGL code looks like. They may not want to go into the template madness that I enjoy, but even tutorials targeting C or C++ can be significantly improved just by using a data structure for vertex information. Beginners should be taught to write resilient programs that resist the urge to hardcode sizes, types, and offsets based on their specifications at the time of writing, and to instead rely on the compiler more.

Brittle code can punish beginners when they try to expand their knowledge, as the slightest changes can cause their screen to stop rendering or to render odd artifacts, and this can be difficult to debug. At worst, it will cause segfaults.

Wednesday, July 29, 2020

The Entity Component System

Note: code highlighting thanks to http://hilite.me/

Note2: This blog is primarily about template meta-programming and--when the opportunity presents itself--using it to solve practical problems. This post assumes knowledge of C++17 fold expressions. If the reader is unfamiliar, this may not be the best learning material, but if the reader gets the idea and wants a practical example, please read on.

Update July 30, 2020: I realized a bit late that there’s a bug where non-const values weren’t always respected in the final solution. I haven’t tracked it down yet, so just be warned.

Update July 31, 2020: Fixed a bug that caused an infinite loop.

When I started programming, I made a simple game called Orbital Chaos and the code was organized very simply. 

class Actor {
  // ...
};

class Player : public Actor {
  // ...
};

class Enemy : public Actor {
  // ...
};


So each thing that an actor could do was represented in the base class and the derived classes overrode that behavior in classic OOP style. Those were simpler days.

I decided to rewrite my code to what I now understand is a fairly industry-standard idea: the entity component system, or ECS for short. From what I gather, an ECS is a simple system where entities are merely IDs with a set of components which define all the information relevant to that entity. For example, its <X,Y> coordinates in the 2D space of a 2D game, the type of the entity ({TREE, ITEM, PLAYER, etc...}), perhaps information on how to draw it... basically anything that would have gone in Actor or Player.

Proponents of ECSs will argue that they are a better alternative to OOP: the base class no longer has to consider every possible action or attribute in the game, and they are cache efficient, storing like information close together so that iterating over all entities hopefully involves fewer cache misses. Sometimes it’s also argued that using IDs to relate objects to each other is much safer than direct pointers, where the alternative is often std::shared_ptr with locking. Lastly, entities are easier to extend, mix, match, and define than objects, without having to worry about hierarchies, mix-ins, and multiple inheritance to get the desired behavior.

Personally, I think writing a base class that considers every possible action is not good OOP, but doing it the “right” way is probably less appealing than the data-oriented ECS style. OOP works great for abstract objects with variations on behavior, but game entities tend to vary more on configuration. I think this simplicity sometimes comes at the cost of having to reinvent mix-ins and hierarchies, though one can still get virtual dispatch in an ECS by storing pointers.

Information on ECSs on the internet varies in quality, as do their implementations. I won’t try to argue that I’m presenting an objectively better implementation than any other; these are just my ideas and the story of my process of writing one.

Ideas I didn't like

class Entity {
  GraphicsComponent gfx_component;
  PhysicsComponent phys_component;
  EtcComponent etc;
  // ...
};

I think this is closer to the “right” OOP solution, as it encapsulates the complexity into smaller pieces which can then be as simple or intricate as necessary. In practice, though, the segmentation of code logic and how systems relate to each other may become unclear. In a grid-based game like mine, entities will have an integer <X,Y> position on the grid which maps to a very different position when rendering to the screen, one which may even be offset slightly by animations. How do components refer to each other? This design encourages using virtual functions to customize behavior, which is very tidy, but also involves a lot of boilerplate code and isn’t incredibly efficient. Lastly, it doesn’t store like data together.

There’s also the idea of having the Entity store a vector of component objects, and it’s one of the more common implementations I see, but this has the additional issue that one class needs to consider all the complexity of the game. For example, if each component object has init(), update(), etc() functions, components have to understand how to update themselves and interact with the world, but the world needs to understand how to contain them, leading to two mutually referential systems. That can be fine, but in my previous attempt, I found this really complicated the definition of more interesting components.

using ComponentId = unsigned int;
using EntityId = unsigned int;

template<typename T>
struct ComponentData {
    EntityId id;
    T data;

    ComponentData(EntityId id, T data)
        : id(id), data(std::move(data)) { }
};

class Buffer {
  char* data = nullptr;
  // ...
};

class EntityComponentSystem {
  std::unordered_map<ComponentId, Buffer> components;
};

I think this idea gets closer. Each component type has a unique ComponentId which is determined by the type system through template magic, and each Buffer holds an array of one component type whose memory it manages manually, since std:: containers like vector need to know what type they contain and how large it is at compile time. The std::unordered_map could be replaced by an std::array if the number of different component types is known in advance.

Actually, if the component types themselves are known and I happen to love tuples, we could just do this:

using EntityId = unsigned int;

template<typename T>
struct ComponentData {
    EntityId id;
    T data;

    ComponentData(EntityId id, T data)
        : id(id), data(std::move(data)) { }
};

template<typename...Components>
class EntityComponentSystem {
  std::tuple<std::vector<ComponentData<Components>>...> components;
};

and that’s what I ended up with in my first iteration (link). One big and notable downside is that now, if I have a grid position, for which I use my custom Vec<int> class, and a graphical position, which uses a Vec<int> as well, they’ll be stored as the same component. It’s tedious, but I implemented GraphicalPosition and GridPosition classes to differentiate them.
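
The wrappers are trivial; something like this (my reconstruction of the idea, not the actual code):

// Distinct component types for data that would otherwise collide.
struct GridPosition { Vec<int> pos; };
struct GraphicalPosition { Vec<int> pos; };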

It also means that the declaration of the ECS type needs to contain every component used in the game, but I don't find that too problematic.

From the above idea, we could construct a simple interface like this:

  EntityComponentSystem<Pos, TextureRef> ecs;
  auto e1 = ecs.write_new_entity(Pos(1, 2), TextureRef(tex1));
  auto e2 = ecs.write_new_entity(Pos(3, 4), TextureRef(tex2));
  for (auto id : ecs.ids()) {
    Pos p;
    TextureRef tref;
    if (ecs.read(id, p, tref))
      draw(p, tref);
  }

In the end, I use a much more efficient interface, but this gives an idea of how it might be used. e1 and e2 are the IDs of entities that have a Pos and a TextureRef component, which can be read back out in order to draw them onto the screen, provided the entity has both components.

One note before we move on: why an std::vector and not a hash table keyed by the entity ID? Many entity component systems do it that way, and it’s great for O(1) lookups, but the code I have planned out has little need for fast random access, and hash tables don’t make good use of CPU caches. If I run into performance issues, I can see if a hash table fixes them, but for now I’ll stick to this. I think the larger concern is that a particle system might require more efficient random deletion.

If only life was so simple...

Since my entity IDs monotonically increase (ignoring int32 overflow), keeping the component vectors sorted is generally simple enough. I used std::lower_bound() to figure out where to insert new elements, just in case, but entities generally don't gain or lose components over time, so new components are almost always inserted at the end. The component arrays are essentially parallel, where std::get<std::vector<ComponentData<X>>>(components)[i] would resolve to the i'th X component of probably the i'th entity, and ditto for Y. However, not all entities have all components, so some checking might be needed.
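
As a sketch, insertion might look something like this (a hypothetical member of the EntityComponentSystem above; the name write_component and the exact signature are illustrative):

template<typename T>
void write_component(EntityId id, T data) {
  auto& vec = std::get<std::vector<ComponentData<T>>>(components);
  // IDs monotonically increase, so this is almost always an append, but
  // std::lower_bound (from <algorithm>) keeps the vector sorted regardless.
  auto pos = std::lower_bound(
      vec.begin(), vec.end(), id,
      [](const ComponentData<T>& c, EntityId id) { return c.id < id; });
  vec.insert(pos, ComponentData<T>(id, std::move(data)));
}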

Say I want an algorithm that does this:

for (grid_pos, graphical_pos) in ecs.read_grid_and_graphical_pos() {
   graphical_pos = grid_pos * TILE_SIZE
}

Well, if I can iterate through one, I can do two, right? Start at the beginning, make sure they’re pointing at the same element, and in the increment step, increment one of them, then increment the other until it reaches the same EntityId or higher, and keep going back and forth until either hits the end or they both land on the same ID again.
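
With only two component vectors, that's an ordinary sorted-merge walk. A minimal sketch, assuming hypothetical vectors xs and ys and a visitor f:

auto a = xs.begin();  // e.g. std::vector<ComponentData<GridPosition>>
auto b = ys.begin();  // e.g. std::vector<ComponentData<GraphicalPosition>>
while (a != xs.end() && b != ys.end()) {
  if (a->id < b->id) ++a;        // a is behind; catch it up.
  else if (b->id < a->id) ++b;   // b is behind; catch it up.
  else {
    f(a->data, b->data);         // Same entity: visit both components.
    ++a;
    ++b;
  }
}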

Great! Simple! Problem: I have potentially N iterators and they're all of different types. I want this syntax:

int sum = 0;
for (auto [id, i, u] : ecs.read_all<int, unsigned>()) sum += i + u;

Obviously, this is non-trivial with tuples: since we can’t store iterators to each container in a vector, we have to store them in yet another tuple.

My first idea was to have a range class parameterized by the containers (std::vector<ComponentData<T>>...) and then for it to define nested iterator classes parameterized by the iterator types of those containers. The basic issue with this is that a container could be const or non-const, and I had to differentiate, since in one case the iterator class would store std::vector<T>::iterator and in the other, ::const_iterator, so I ended up writing it twice. But what I landed on is so much better.

template<typename F, typename...T>
constexpr auto tuple_map(F&& f, const std::tuple<T...>& t) {
  return std::tuple(f(std::get<T>(t))...);
}

template<typename F, typename...T>
auto tuple_map(F&& f, std::tuple<T...>& t) {
  return std::tuple(f(std::get<T>(t))...);
}

// A range abstraction that allows multiple component data series to be iterated
// lazily over in a range-based for loop.
template<typename StoreTuple>
class ComponentRange {
  // A tuple of references to component storage.
  StoreTuple stores_;

protected:
  template<typename IteratorTuple>
  struct Iterator {
    // ...
    IteratorTuple its;
    IteratorTuple ends;
    bool sentinel = false;

    // To know if all iterators are pointing to the same entity, we'll need
    // to remember what entity that is.
    EntityId max_id;

    // ...

    Iterator(IteratorTuple its, IteratorTuple ends)
      : its(std::move(its)), ends(std::move(ends)) {
      // ...
    }

    Iterator() { sentinel = true; }
  };

public:
  explicit ComponentRange(StoreTuple stores) : stores_(stores) { }

  auto begin() const {
    auto b = [](auto&& store) { return store.begin(); };
    auto e = [](auto&& store) { return store.end(); };
    return Iterator(tuple_map(b, stores_), tuple_map(e, stores_));
  }

  auto end() const { return decltype(begin())(); }
};

The general idea is to be as unconstraining of types as possible, so wherever we can get away with not specifying a type, we do. StoreTuple will generally resolve to something ugly like...

std::tuple<std::vector<ComponentData<X>>&, std::vector<ComponentData<Y>>&, ...>

tuple_map and tuple_foreach (essentially the same function, except tuple_foreach doesn’t return a new tuple) are going to be the bread and butter of this class. For reference on how difficult these functions were to write in C++11/14, I implemented them way back in “Fun With Tuples”.
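
tuple_foreach itself isn't shown in this post; a minimal C++17 version in the same style as the tuple_map above might look like this:

template<typename F, typename...T>
void tuple_foreach(F&& f, std::tuple<T...>& t) {
  // Fold over the comma operator; no new tuple is constructed.
  (f(std::get<T>(t)), ...);
}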

If any of the its is at the end, the whole tuple is, which made comparing to the end a bit tricky; thus, a sentinel is used. I think a variation on this idea could have been to consider two Iterators equal if any of their its are equal, though.

We can’t iterate through the tuples easily, so it’s best we work on them uniformly, but we don’t want any iterator advancing past max_id, the highest entity ID currently pointed to by any of the iterators. Just iterating once is easy.

    template<typename It>
    bool at_end(It it) const {
      return it == std::get<It>(ends);
    }
    // Note that the alternative to the above is
    // std::get<std::decay_t<decltype(it)>>(ends) if auto-typed (e.g. lambda).

    template<typename Iter>
    void increment_iter(Iter& it) {
      if (!at_end(it)) ++it;
      if (!at_end(it)) max_id = std::max(it->id, max_id);
    }

This handles the case where an iterator was under max_id, but in an attempt to catch up it went over and set a new maximum.

Next, we have to handle incrementing all the iterators until they all line up on one ID.

    // A helper to catch up iterators that are behind.
    void increment_if_lower_than_max_id() {
      auto impl = [&](auto& it) {
        if (!at_end(it) && it->id < max_id) increment_iter(it);
      };
      tuple_foreach(impl, its);
    }

    bool any_at_end() const {
      auto any = [](auto...args) { return (args || ...); };
      auto pred = [this](const auto& it) { return at_end(it); };
      return std::apply(any, tuple_map(pred, its));
    }

    // True if all iterators point to the same ID.
    bool all_same() const {
      auto all = [](auto...args) { return (args && ...); };
      auto equals_max_id =
        [this](const auto& it) { return it->id == max_id; };
      return std::apply(all, tuple_map(equals_max_id, its));
    }

    // After iteration, or on initialization, increment any iterators that
    // are behind.
    void catch_up() {
      while (!any_at_end() && !all_same()) increment_if_lower_than_max_id();
      if (any_at_end()) its = ends;
    }

    Iterator& operator++() {
      // Increment at least once.
      tuple_foreach([this](auto& it) { increment_iter(it); }, its);
      catch_up();
      return *this;
    }

    Iterator operator++(int) {
      Iterator old = *this;
      ++(*this);
      return old;
    }

And finally, comparisons.

    bool operator==(const Iterator& other) const {
      return sentinel ? other == *this : any_at_end();
    }
    bool operator!=(const Iterator& other) const {
      return !(*this == other);
    }

    auto operator*() const {
      auto data = [](const auto& it) -> auto& { return it->data; };
      return std::tuple_cat(std::tuple(max_id), tuple_map(data, its));
    }

The final gotcha is that on construction, not all the iterators might be pointing to the same ID, so we have to call catch_up() immediately. The full code for the constructor looks like this:

    Iterator(IteratorTuple its, IteratorTuple ends)
      : its(std::move(its)), ends(std::move(ends)) {
        max_id = EntityId{0};
        tuple_foreach(
            [&](auto it) {
              if (!at_end(it)) max_id = std::max(max_id, it->id);
            }, its);
        catch_up();  // Ensure a valid initial starting position.
      }

    Iterator() { sentinel = true; }

Conclusions


The implementation is incomplete at the time of this writing and some of the interface is inconsistent or unnecessary, but it's a start.

And for the project in which I'm using it: https://github.com/splinterofchaos/py-srpg (Originally written in Python; the C++ conversion is on the cpp branch at the time of this writing.)

Writing code like this used to be tedious, difficult, and full of boilerplate, but at this point, writing fold expressions over a variable number of types and arguments feels easier than a for loop from i=0 to N.

The biggest frustrations for functional programming in C++ remain things like how, in order to pass a template function around as a callable function object, one has to wrap it in a lambda, and how weak partial application is. For example, in our begin() function:

  auto begin() const {
    auto b = [](auto&& store) { return store.begin(); };
    auto e = [](auto&& store) { return store.end(); };
    return Iterator(tuple_map(b, stores_), tuple_map(e, stores_));
  }

But things like fold expressions and class template argument deduction make things much easier.

As for the iterator class, the logic is much less trivial than I’d hope an iterator class to be, but I think the performance should be decent. If my program ends up spending more time incrementing iterators than using the values obtained, I think my bigger issue will be that my game’s not doing much. Most iterations through the ECS should be over very similar sets of entities, so in the general case, each step should increment once and exit.

As for the entity class itself, I’ll have to test it out a bunch. Obviously, I worry that adding or removing large numbers of entities could be problematic, but I imagine it’ll be good enough for my purposes, and there’s plenty of room for optimization, like maintaining a free list of deleted entity IDs and component data, for one example. Still, when I used the same design in Python, it worked well, so it should be even better here.