A Late Postmortem of "Monster Defense"

May 18, 2019 code rambling

A few months ago, I uploaded to GitHub the full source code and assets from an unfinished project I worked on back in 2011/2012. However, as is typical of me lately, I forgot to take the opportunity to write about it here. This is going to be a very informal postmortem, probably better described as me writing random things in an unorganized manner about an old project of mine.

"Monster Defense" was to be an arena shoot-em-up / bullet-hell type of arcade game where you are supposed to survive an onslaught of monsters that come at you in waves. Monsters you kill can drop power-ups and other random power-ups will spawn periodically throughout the level as you go. There were two different planned modes of play, "timed" and "endless" which are probably fairly self-explanatory.

The artwork is all placeholder. Because I opted to use Quake 2 MD2 models for reasons discussed later in this post, I found that using the wealth of character models from Egoboo (whose assets are freely available under a GPLv3 license) was helpful in getting reasonable placeholder artwork going. The problem is, of course, that I never did get "real" artwork to replace the placeholders. D'oh.

Not exactly a project worthy of any awards in unique design, heh. Good thing I don't fancy myself a game designer.

Anyway, I suppose I might as well start with a little background on how this project came to be.

Background and Timeline

For most of my side-projects, especially game projects, I find I tend to have a very strong Not Invented Here syndrome approach where I rather enjoy building up the groundwork myself rather than using some preexisting framework or whatnot that may be available. For game projects, this means that you will probably never find me using something like Unity or Unreal Engine or something like that. Lower-level libraries like SDL are a different story, especially if you want to target multiple platforms. In fact, I think something like SDL is really a sweet spot for people like me in that it provides just enough of an abstraction that I can easily target multiple platforms but build up my own engine from scratch if I so choose. In recent years, I've found myself becoming more accepting of using something for game projects that provides a little bit more abstraction, such as libGDX, while still not going to all-out Unity-levels of abstraction.

But back in 2011, I had been working on my own C/C++ framework for my game projects and it was just based on SDL for desktop PC builds. It was coming along well, and in early 2012 I actually left my job to start working on the project that would become Monster Defense full-time. I had grandiose ideas that I was going to somehow make it as an indie game developer. Heh. Well, that was part of the reason I left my job. The other big part was that I was getting a little bit fed up and noticed that I was really just coasting along at work so I guess this was my own way of shaking things up for myself. I think I knew in the back of my head at the time that there was absolutely no chance I was going to make it as any kind of remotely successful indie game developer. I have barely any artistic talent (at least, not enough to be able to draw my own complete set of game art assets) and as you can tell from the fairly uninspired "design" of Monster Defense described previously, I'm not exactly a game design genius.

Actually, I should point out that I was originally going to work on a completely different game. I even mocked up a promo image for it.

I don't now remember why I switched the project to Monster Defense instead. I think the reason might have been that I thought it would be an easier project to finish or something like that. Of course that wasn't true. Both project ideas were quite involved. But hindsight is (usually) 20/20...

I remember that within two or three months of quitting my job, I had fully accepted the fact that Monster Defense was going to just be a portfolio piece I would use to get a job in the games industry and that there was no way it was going to become a fully polished and published game that would make me money. Didn't really take long for reality to fully set in, right?

I continued working on it and around late fall or so of 2012, I think I had come to realize that I actually didn't want to work in the games industry. Instead, I was fine with game development just being a hobby. This was actually the second time in my life I had come to that same conclusion. The first was in 2005 after I had finished a two-year college diploma and was about to commit to moving out west to attend the game development course at the Art Institute of Vancouver (apparently this has all changed significantly in the past decade or so; I don't think it exists in Vancouver anymore?). I re-thought the decision and ended up getting a job as a "normal" software developer instead.

Anyway, during summer and fall of 2012 I had started to notice that I was becoming less enthusiastic about working on Monster Defense as time went on. A big part of it was the realization that I was never going to be able to release it as a finished product with the placeholder art assets I was currently using. And I was still unsure exactly what the best way to solve this problem was. You could say that, obviously, recruiting an artist was the solution here. I was not so enthusiastic about that idea, because the last time I worked on a game project in a team with an actual artist, the experience left a really bad taste in my mouth.

Around 2006/2007 or so, I was working on a zombie survival / puzzle game for the Nintendo DS (using homebrew development tools, like devkitARM) in a team of friends, one of whom was an artist, plus two others who had some artistic talent as well. That project ended abruptly with our artist seemingly just disappearing (I actually never met him in person, but he was a friend of a friend and lived in Vancouver). In the end, no artwork had actually been produced. At least, none that I ever saw. In fact, the only things ever produced during this failed project were a fairly basic 3D engine for the Nintendo DS (coded from the ground up by yours truly) and a very, very overly ambitious game design document detailing a plan we were honestly never going to come close to completing. The overly ambitious design is a different problem altogether, and I think a very common one, but the artist randomly disappearing was hugely problematic and ruined whatever trust I may have had in a team composed mostly of friends being reliable. Actually, now that I think about it, this was the second time an artist has disappeared on me (the first was for a QBasic game project around 2001/2002 or so). Maybe the problem is me? Heh. *shrugs*

Very early test build of the Nintendo DS project. Bits of this code ended up being used in Monster Defense too!

But regardless, even if I had wanted to try finding an artist again to see how it went, I did not have much money left to pay for one and I was skeptical of being able to find one who would volunteer to help me out on the promise of splitting future profits once the game was ready to be sold. Especially since I no longer really had that end goal in mind anyway. I didn't want to be dishonest, recruiting someone on that premise when I no longer believed it would happen at all.

As a result, the project basically ended right there during fall 2012, and in early 2013 I found a new full-time job. I continued working on Monster Defense in bits here and there during early 2013, but more just as "something to do."

The Project Itself and How I Built It

As mentioned earlier, Monster Defense was built from the ground up using SDL for desktop PC builds. I developed it using C/C++, in a "C with classes" coding style, targeting OpenGL ES 2.0 (or the equivalent desktop OpenGL feature-set). I built my own framework and game engine for this project. I also was targeting mobile devices through the use of the now defunct Marmalade SDK (formerly Airplay SDK).

Before SDL 2.0 really stabilized and ironed out their mobile device support, Marmalade SDK was really quite awesome I think. It provided an environment where you could code your game using a quite familiar feeling game loop, C-style, in your main() function. It had some higher-level graphics/audio/input abstraction support but you could also opt to just use OpenGL directly and ignore their own graphics library which is what I did.

I updated my game framework to support different platforms mainly via an abstraction provided in two interfaces, OperatingSystem and GameWindow. I had two implementations, one for SDL for desktop builds, SDLSystem and SDLGameWindow, and the other for Marmalade SDK for mobile builds, MarmaladeSystem and MarmaladeGameWindow. This platform abstraction carried on a little bit further with interfaces like Keyboard, Mouse, Touchscreen, File, and FileSystem.

I was pretty happy with this architecture. Maybe it was not perfect, but it seemed to work well enough for me. The game loop was handled in a base class called BaseGameApp which the game project itself would implement (what I always just called GameApp). The main() function just served to instantiate the OperatingSystem, GameWindow and GameApp classes and then called GameApp::Start() which would get the game loop running.
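A rough sketch of that wiring, with trivial stand-ins for the real classes (the actual OperatingSystem/GameWindow/BaseGameApp interfaces were far larger, and the constructors surely differed):

```cpp
#include <cassert>

// Trivial stand-ins so this sketch is self-contained
struct OperatingSystem { virtual ~OperatingSystem() {} };
struct SDLSystem : public OperatingSystem {};

struct GameWindow { virtual ~GameWindow() {} };
struct SDLGameWindow : public GameWindow {};

class BaseGameApp
{
public:
    BaseGameApp(OperatingSystem *os, GameWindow *window)
        : os(os), window(window), framesRun(0) {}
    virtual ~BaseGameApp() {}

    // the real Start() ran the loop until quit; here it just ticks 3 frames
    int Start()
    {
        for (int i = 0; i < 3; ++i)
        {
            OnUpdate(1.0f / 60.0f);
            OnRender();
        }
        return framesRun;
    }

    virtual void OnUpdate(float delta) { ++framesRun; }
    virtual void OnRender() {}

protected:
    OperatingSystem *os;
    GameWindow *window;
    int framesRun;
};

// the game project's own subclass
struct GameApp : public BaseGameApp
{
    GameApp(OperatingSystem *os, GameWindow *window) : BaseGameApp(os, window) {}
};
```

main() then amounted to picking the platform implementations, constructing the app with them, and calling Start().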

BaseGameApp provided access to the system objects mentioned above as you can probably imagine.

OperatingSystem* GetOperatingSystem();
GraphicsDevice* GetGraphicsDevice();
Keyboard* GetKeyboard();
Mouse* GetMouse();
Touchscreen* GetTouchscreen();
ContentManager* GetContentManager();

The ContentManager was an idea I took from XNA, although I'm sure many other game frameworks use a similar approach. The content manager object took care of caching loaded assets, loading them on first request if they were not already cached.

Texture* texture = GetContentManager()->Get<Texture>("assets://textures/mytexture.png");

The above snippet would return a Texture* for the texture in the specified file. If it was not yet loaded, it would be loaded right then and there. Thus, asset file paths were used as unique identifiers. In hindsight, perhaps using numeric IDs to identify assets via the ContentManager might have been better. At any rate, what I ended up doing in an attempt to simplify re-use of common assets was to have a singleton class ContentCache that had methods like this on it:

Texture* GetUISkin();
Texture* GetGamePadButtons();
SpriteFont* GetTitleFont();
SpriteFont* GetFont();
SpriteFont* GetSmallFont();
SpriteFont* GetUIFont();
TextureAtlas* GetEnvironment();
TextureAtlas* GetParticles();
TextureAtlas* GetItems();
Texture* GetShadow();

Looking at it now, I feel like the addition of ContentCache highlighted perfectly how ContentManager was not providing the right level of abstraction, and I probably should have merged the two. ContentManager just ended up being a fancy general-purpose thing which I did not really need. At least, not in that fully general-purpose way (I definitely did need the caching and re-use of loaded assets, obviously).
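The caching behaviour at the heart of ContentManager is simple enough to sketch in a few lines. This is an illustration of the idea only, not the actual framework code: the real thing was templated over asset type and did actual file loading, whereas Texture here is just a stand-in.

```cpp
#include <cassert>
#include <map>
#include <string>

// stand-in for a real loaded asset
struct Texture
{
    std::string path;
};

// minimal sketch of the path-keyed asset cache idea
class ContentManager
{
public:
    ~ContentManager()
    {
        for (std::map<std::string, Texture*>::iterator i = cache.begin(); i != cache.end(); ++i)
            delete i->second;
    }

    Texture* Get(const std::string &path)
    {
        std::map<std::string, Texture*>::iterator i = cache.find(path);
        if (i != cache.end())
            return i->second;              // cached: hand back the same object

        Texture *texture = new Texture();  // stand-in for the real file load
        texture->path = path;
        cache[path] = texture;
        return texture;
    }

private:
    std::map<std::string, Texture*> cache;
};
```

Requesting the same path twice hands back the same pointer, which is what made path-as-identifier workable in the first place.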

Anyway, back to the game loop, which was implemented via callbacks in GameApp that were called every frame. Additionally, other important system event callbacks could be invoked for certain things, such as the window being minimized or the app running on the mobile device being paused.

void OnAppGainFocus();
void OnAppLostFocus();
void OnAppPause();
void OnAppResume();
bool OnInit();
void OnLoadGame();
void OnLostContext();
void OnNewContext();
void OnRender();
void OnResize();
void OnUpdate(float delta);

At the time, I remember that this style of game loop, implemented via callbacks (the main ones being OnUpdate() and OnRender() of course), seemed "obvious" to me in a "well, duh, why wouldn't you do it like this" kind of way. I had been doing game projects (such as the aforementioned Nintendo DS project) in this style for a while already. Nowadays, I'm not so certain. It might be because of my recent return to "old-school coding" with MS-DOS, but I feel like today I would prefer (if at all possible) to just put everything into a simple while(true) { ... } type of loop. No fully separated update and render operations. I would still separate update and render calls so that updates generally all happen before rendering each frame, as that only makes sense. But I feel like sometimes there are some grey areas, and having the full, forced separation makes you sometimes do silly things to work around the 1-5% of the time you need to do something weird. Plus I guess that I just like the simplicity of a simple loop. Meh.

However, one plus of using this game app object callbacks architecture was that it made it pretty simple to plug in a game state/process system which still allowed each state and process to react to the very same types of callbacks/events.

The idea with the game state/process system was that in a typical game there are multiple different states where the player needs to interact with the game in a very different way. For example, they launch the game, and a main menu appears. They make some choices and then the game starts. Now they are playing the game. They can pause the game which maybe brings up a different menu. Or they could open some status/inventory display during game play which requires them to interact with a UI of some sort.

Each of these things can be represented by a different state, and all of the states together are represented by a stack. And that is exactly what StateManager and GameState are for. StateManager has a bunch of useful methods for manipulating the stack of game states.

template<class T> T* Push();
template<class T> T* Push(const stl::string &name);
template<class T> T* Overlay();
template<class T> T* Overlay(const stl::string &name);
template<class T> T* SwapTopWith();
template<class T> T* SwapTopWith(const stl::string &name);
template<class T> T* SwapTopNonOverlayWith();
template<class T> T* SwapTopNonOverlayWith(const stl::string &name);
void Pop();
void PopTopNonOverlay();

The reason for the use of templates in most of these methods is so that the caller passes the class type of the game state rather than an actual object instance; I guess I didn't want the caller to have to instantiate the object itself (I honestly don't recall my reasoning for that decision now).


The code in StateManager always felt overly complex to me but I do feel like the end result worked quite well. In addition to normal game states that could be pushed/popped or swapped, you could also use "overlay states." This essentially let you have the current "main" state running in the background while overlaying another state over it to allow some limited processing to continue happening in the "main" state, but where the overlay state was the one that was really the current state. The best example of this is where the player is in game and they open their inventory. You might want to overlay a menu UI over-top of the game screen which still continues to be rendered and even maybe with animations still continuing in the background, but you want to pause game logic (so enemies cannot attack the player while they are in their inventory, for example). This type of thing could be implemented by having the inventory game state be an "overlay state."
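The mechanics of that are easy to sketch. This is a minimal illustration of the overlay idea, not the real StateManager (which used templates, named states, transitions, and so on): states live on a stack, only the top state gets game-logic updates, but a "main" state underneath an overlay still gets rendered.

```cpp
#include <cassert>
#include <string>
#include <vector>

struct GameState
{
    std::string name;
    bool isOverlay;
    int updates;
    int renders;

    GameState(const std::string &n, bool overlay)
        : name(n), isOverlay(overlay), updates(0), renders(0) {}
};

class StateManager
{
public:
    ~StateManager()
    {
        for (size_t i = 0; i < stack.size(); ++i)
            delete stack[i];
    }

    GameState* Push(const std::string &name)    { return PushState(new GameState(name, false)); }
    GameState* Overlay(const std::string &name) { return PushState(new GameState(name, true)); }

    void Pop()
    {
        delete stack.back();
        stack.pop_back();
    }

    // only the top state gets logic updates; anything underneath is paused
    void OnUpdate()
    {
        if (!stack.empty())
            stack.back()->updates++;
    }

    // render from the topmost non-overlay state upward, so the "main"
    // state stays visible underneath its overlays
    void OnRender()
    {
        if (stack.empty())
            return;
        size_t first = stack.size() - 1;
        while (first > 0 && stack[first]->isOverlay)
            --first;
        for (size_t i = first; i < stack.size(); ++i)
            stack[i]->renders++;
    }

private:
    GameState* PushState(GameState *s) { stack.push_back(s); return s; }

    std::vector<GameState*> stack;
};
```

So with a gameplay state pushed and an inventory state overlaid on top, the gameplay state stops receiving updates but keeps receiving renders until the overlay is popped.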

GameState objects implemented all the same basic callbacks/events that GameApp had in addition to having a few extras to allow them to react to manipulation of the game state stack.

void OnPush();
void OnPop();
void OnPause(BOOL dueToOverlay);
void OnResume(BOOL fromOverlay);
BOOL OnTransition(float delta, BOOL isTransitioningOut, BOOL started);

OnTransition() in particular was neat because it allowed game states to very simply do things like implement their own fading in/out or similar type of effects.

Now, game states were not by themselves enough. At least I didn't think so at the time. Very often you may want to do multiple things each frame while a game state is running, and they must run together. The best example of this, I think, is during a GamePlayState: while the player has full control over the gameplay, you probably also want to show some kind of game status (e.g. score, lives, health, etc). You could obviously just call whatever code shows this from your GamePlayState, or you could do what I did and have each game state own its own ProcessManager containing a list of GameProcesses, and then have a StatusUI process do the game status display.

ProcessManager worked quite similarly to StateManager except that instead of a stack of processes, it was just a list. Each GameProcess added was always processed each frame in sequence.

Circling back to what I was talking about earlier with regards to the game loop and the pros/cons of implementing it via some really simple while(true) { ... } with all the code right there in the body of the loop, versus doing it how I described here using callbacks in GameApp... I think that (at least to me) my StateManager and ProcessManager setup made a pretty strong case for sticking with this approach. In fact, I liked this architecture so much that I kept it for the next game project I worked on in 2014, even porting it all to Java/libGDX in the process.

Another really important thing for a game to have is some sort of events system. I'm not actually talking about events from the operating system or input devices, etc, but game logic events: the timer running out, the player getting hurt or even dying, the player picking up an item, and so on. My framework had support for this too, and this was another aspect that I think worked well in practice, though with some little things to clean up in the future maybe.

In my event system, classes can subscribe to events by implementing EventListener and then registering themselves with the EventManager (which was managed by BaseGameApp). I can remember it was very important to me that game states and processes (and, well, really anything else) should be able to subscribe to events via code like the following:


And then unsubscribe from events via StopListeningFor() called in a very similar way. And this is exactly the way it worked. Event handling happened in a Handle() method that any class implementing EventListener would need to provide.

BOOL GamePlayState::Handle(const Event *event)
{
    if (event->Is<QuitGameEvent>()) {
        // do stuff
    } else if (event->Is<IntroFinishedEvent>()) {
        // do other stuff
    }
    // ...
}

It was a very minor bit of syntax sugar, but I remember that the use of templates here, both to indicate what type of event to subscribe to and with the Is() method so it knows what to compare against, was a really important feature I wanted the API to have. *shrug* Looking back at it now, I do have to admit that I quite like the way it reads, so I guess it was worth the effort.

Events were used quite a bit in Monster Defense. I had defined 35 different event types at the time I stopped working on the project. Definitely there would have been more had it continued. There were events for things like animation updates, player buffs, entity state changes, healing, damage, movement, score, weapon switching, timers, and other things.

Which brings us to entity management. I remember at this time there had been a number of articles written on the subject of "entity/component systems." I was initially intrigued by the idea, as I was not liking the idea of going with some object hierarchy approach. I remember reading this article (and a number of others by the same author) and really liking the approach, but it clearly seemed to be missing some necessary extras and what I deemed "basic functionality." As well, I didn't like the idea of using IDs to represent entities and figured I'd instead just use an Entity object (but purely as a code convenience... it still would not be meant to contain any data directly).

class Entity
{
public:
    Entity(EntityManager *entityManager);
    virtual ~Entity();

    template<class T> T* Get() const;
    template<class T> T* Add();
    template<class T> void Remove();
    template<class T> BOOL Has() const;
    template<class T> BOOL WasCreatedUsingPreset() const;
    BOOL WasCreatedUsingPreset(ENTITYPRESET_TYPE type) const;
};

The Get(), Add(), Remove() and Has() methods all operated on Component objects which you attached to entities and these were the things that contained the actual data that belonged to the entity itself.

class Component
{
public:
    virtual ~Component();
    virtual void Reset();
};

As you can see from this, a base Component is incredibly simple and itself contains no data. The specific component types would actually have the data defined. For example, a PositionComponent to hold an entity's position in the game:

class PositionComponent : public Component
{
public:
    void Reset();

    // the actual data this component holds
    Vector3 position;
};

ComponentSystem objects are registered with the EntityManager. Then, during each of the game's OnUpdate() and OnRender() callbacks, every component system is called in turn.

When a component system is called to do its processing (either during OnUpdate() or OnRender()), it queries the EntityManager for a list of all entities that contain a component it cares about. For example, the PhysicsSystem would fetch a list of entities that currently have a PhysicsComponent.

void PhysicsSystem::OnUpdate(float delta)
{
    EntityList list;
    // (query the EntityManager here, filling "list" with all entities
    // that currently have a PhysicsComponent)

    for (EntityList::iterator i = list.begin(); i != list.end(); ++i)
    {
        Entity *entity = *i;
        PhysicsComponent *physics = entity->Get<PhysicsComponent>();

        // do stuff
    }
}
Generally, component systems would do their main processing loop based on entities having one "main" component type that that component system cared about, but component systems could very well retrieve other components from a given entity if needed (for example, the PhysicsComponent will need access to the entity's position via the PositionComponent in addition to whatever current physics state is present in its PhysicsComponent).

The entity system and event system were also integrated. Component systems were also EventListeners. For example, the PhysicsSystem would respond to JumpEvent and MoveForwardEvent, amongst others. These types of events were EntityEvent subclasses, which required an Entity object to be set so that whatever listened to those events (almost always a ComponentSystem) could manipulate the source entity in response to the event.

MoveForwardEvent *moveEvent = new MoveForwardEvent(entity);

Most of the entities that were created and managed in Monster Defense fit into the same categories and were re-used a lot. For example, there were a few types of zombie monsters that were constantly re-used. Same for power-up objects that the player could pick up. To handle this standardized creation of new entities, the addition of new entities via EntityManager needed to be passed off to an EntityPreset.

An EntityPreset had a very important Create() method which would be called by EntityManager when an entity was being Add()ed. It was then up to that Create() method to actually add the entity to the EntityManager and then return it. The MonsterPreset and ZombiePreset together give you an idea of how much manual component adding and manipulation was required to create an entity in Monster Defense (ZombiePreset is a subclass of MonsterPreset so when a zombie monster entity is added, both of their Create()'s get called).

Entity* MonsterPreset::Create(EntityPresetArgs *args)
{
    Entity *entity = GetEntityManager()->Add();
    entity->Add<ColorComponent>()->color = COLOR_WHITE;
    return entity;
}

Entity* ZombiePreset::Create(EntityPresetArgs *args)
{
    Entity *entity = MonsterPreset::Create(args);

    ContentManagerComponent *content = GetEntityManager()->GetGlobalComponent<ContentManagerComponent>();
    KeyframeMesh *mesh = content->content->Get<KeyframeMesh>("assets://characters/zombie.mesh");

    // (the receivers of the next three chained call groups were lost from
    // this snippet; the component names below are reconstructed guesses,
    // only the chained calls themselves are original)
    entity->Add<AnimationComponent>()
        ->AddSequence("idle", *mesh->GetAnimation("IDLE"), 0.05f)
        ->AddSequence("walk", *mesh->GetAnimation("WALK"), 0.02f)
        ->AddSequence("dead", *mesh->GetAnimation("KILLED"), 0.10f)
        ->AddSequence("punch", *mesh->GetAnimation("BASH_LEFT_2"), 0.1f);

    entity->Add<EntityStateComponent>()
        ->Set(ENTITYSTATE_IDLE, "idle")
        ->Set(ENTITYSTATE_WALKING, "walk")
        ->Set(ENTITYSTATE_DEAD, "dead", FALSE, TRUE)
        ->Set(ENTITYSTATE_PUNCHING, "punch", FALSE, TRUE);

    entity->Add<BoundingSphereComponent>()
        ->bounds.radius = 0.49f;

    PhysicsComponent *physics = entity->Get<PhysicsComponent>();
    physics->friction = FRICTION_NORMAL;
    physics->maxWalkSpeed = 6.0f;
    physics->walkingAcceleration = 4.0f;
    physics->offset = Vector3(0.0f, -0.5f, 0.0f);   // (receiver reconstructed)

    KeyframeMeshComponent *keyframeMesh = entity->Add<KeyframeMeshComponent>();
    keyframeMesh->mesh = mesh;

    CanBeAttackedComponent *canBeAttacked = entity->Add<CanBeAttackedComponent>();
    canBeAttacked->byPlayer = TRUE;
    canBeAttacked->byNPC = TRUE;

    return entity;
}

Monster Defense had 65 different component types. Some of these were complex (like PhysicsComponent) and others served no purpose other than as a simple "marker" or type of glorified flag on the entity (such as AffectedByGravityComponent).

Now, a decision I am in hindsight a bit bothered by was that EntityPresets also sometimes subscribed to events, doing processing on EntityEvents only when the entity in question was created by that same EntityPreset (when an entity was added using an EntityPreset, the EntityManager would automatically attach an EntityPresetComponent to that entity to mark the EntityPreset type that was used to create it). Anyway, EntityPresets subscribed to stuff like DespawnedEvent, HealEvent, HurtEvent and KilledEvent so that common functionality that should occur for entities on these events could be customized per entity type. For example, when a bullet entity dies (KilledEvent), different things should happen than when a zombie entity dies. In the bullet's case, some particles should be created where it was last located, and nothing else. When a zombie dies, some smoke particles are created and there needs to be a random chance for a power-up entity to be created.

Looking back at this, I like the idea of EntityPreset::Create(), but I think the other additions to EntityPresets made it a bit too complicated in practice. It's hard to put my finger on exactly why, but I know if I were to start a new project today using this same entity/component system, I would probably spend a good long while thinking about a better way of organizing this.

However, in general I really liked the whole entity/component system concept. It felt very powerful to me. Especially with the physics system I had, I liked how easy it was to add full physics processing to basically anything. Added some smoke particles and didn't like them flying through solid walls? Fixed by adding a PhysicsComponent to it. Want to attach a "blob" (stupid simple black circle) shadow to some entity? Add a BlobShadowComponent. Want to make it so that one entity can push other entities around? Add a PusherComponent.

That all being said, I do feel like this also made the code feel a little bit more like spaghetti in some ways. It could be a bit harder to fully figure out how an entity was going to behave once all its components were added, because the code for fully processing that entity was now spread out over (at most) 19 different component systems.

I won't be able to talk about every entity subsystem (of which there were many), but I do want to talk about the physics system a bit more because I spent so much freaking time on it. I think this took up at least half of my development time over the entire project, but it is hard to say for sure.

I remember initially investigating things like Bullet Physics and the Open Dynamics Engine (ODE) and coming away from each of them feeling incredibly stupid. I never did particularly well in math nor physics classes in high school. I was at best a 'C' student in both subjects overall. Some years I did better than others. Of course, this is why things like the aforementioned two physics libraries exist... to make it possible for math "dummies" like myself to not have to fret over the scary details *as much*. Note the emphasized part of that last sentence. You do still need to know some stuff. Anyway, personally I felt like the documentation and examples for both of these libraries just flew over my head and left me almost clueless. I remember having a weak moment after investigating both of them and coming away empty-handed, where I almost decided that I really should have gone with something like Unity 3D from the very beginning ("Things like this must be one of the major reasons why these frameworks are so popular! I cannot possibly be the only one so stupid!"). I remember reading discussions at the time about how the Bullet documentation was not the greatest, so at least I didn't feel totally alone there.

Anyway, I decided to pick up a prototype I had worked on a few years earlier for a physics system that was based on the old Peroxide "Improved Collision detection and Response" article from 2003. I had it half working back then and figured I might see if I could fix the remaining problems and integrate it into my engine here for Monster Defense. I think after about a month of futzing about with this I became a victim of the sunk cost fallacy, unable to pull myself away from this and admit defeat. I just had to get it working.

I rather "enjoyed" spending lots of time working on problems with ramps. Of course, the single final level included in Monster Defense doesn't even have ramps in it, but hey, let's just ignore that fact. In particular I remember there was some bug that would only occur sometimes, where an entity that climbed up to the top of the ramp could suddenly disappear (in reality, they would be launched either upward or downward – I forgot which – at incredible velocity). This was due to some floating point inaccuracy that could cause a calculation to result in NaN and propagate into other calculations and cause mayhem. Took me a looong time to narrow that down.
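The general shape of that class of bug is easy to demonstrate: one zero-divided-by-zero produces NaN, and every calculation it touches afterwards is NaN too. The guard function below is just an illustration of the defensive style that helps here, not the actual fix I used.

```cpp
#include <cassert>
#include <cmath>

// Normalizing a zero-length vector is a classic way for NaN to sneak into
// a physics step; guarding the division is one way to keep it out.
// (Illustration only, not the actual Monster Defense code.)
float SafeNormalizeComponent(float component, float length)
{
    if (length == 0.0f)
        return 0.0f;    // would otherwise be 0/0 == NaN
    return component / length;
}
```

The nasty part is that NaN compares unequal to everything (including itself), so nothing crashes; the entity just quietly ends up at a garbage position with garbage velocity.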

Lots and lots of time was spent tweaking the "feel" of the physics system too, from the player's perspective. And I still don't think it's tweaked quite right. I remember reading something on Twitter at the time where an indie developer was saying that typically, when a game's physics system feels just right, it's usually not the most accurate (real-world accurate, that is). Like it's kind of "cartoon physics" in a way. I really agree with this. I definitely never nailed this either, heh. A lot of my time was spent on jump physics. Things that you never think of, like how it should react when you jump up and hit a flat surface. Or you jump up and hit the corner of a ceiling edge. Or when you jump up and just as you're falling you catch the edge/lip of the top of a wall (there's a lingering "bug" in Monster Defense where you can double-jump in this scenario if you time it right, but the single level in it doesn't give you any opportunity to test this).

Test level for working on ramps, steps and other physics bugs.

One thing I did learn was that debugging physics problems by overlaying wireframe geometry onto the display was incredibly helpful. Looking at tons of numbers (so many you have to use 2 letter labels as in the above screenshot) is difficult. As well, stepping through with a debugger is not always the most helpful either.

The game world was represented using 3D tiles, kind of Minecraft-inspired, where the world is subdivided into 16x16x16 TileChunks which are all part of a larger TileMap. Each grid cell in the map is represented by a Tile which has a number of different properties. Special, optimized processing was present for Tiles that were just simple cubes, but it was also possible for tiles to be represented by arbitrary 3D meshes (ramps could be done this way, for example).
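The indexing math that kind of layout implies is straightforward. This is my own illustration of the general technique (assuming non-negative tile coordinates), not the actual TileMap/TileChunk code:

```cpp
#include <cassert>

const int CHUNK_SIZE = 16;

struct ChunkCoord
{
    int cx, cy, cz;
};

// which chunk a world tile coordinate falls in
ChunkCoord GetChunkFor(int x, int y, int z)
{
    ChunkCoord c;
    c.cx = x / CHUNK_SIZE;
    c.cy = y / CHUNK_SIZE;
    c.cz = z / CHUNK_SIZE;
    return c;
}

// index of a tile within its chunk's flat array of 16*16*16 tiles
int GetTileIndex(int x, int y, int z)
{
    int lx = x % CHUNK_SIZE;
    int ly = y % CHUNK_SIZE;
    int lz = z % CHUNK_SIZE;
    return (ly * CHUNK_SIZE * CHUNK_SIZE) + (lz * CHUNK_SIZE) + lx;
}
```

Cheap arithmetic like this is exactly why a chunked tile grid makes "find the world geometry near this entity" queries so easy, which comes up again below with the physics integration.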

TileChunks needed to be turned into 3D meshes with a ChunkRenderer, of which there was also an optional subclass, LitChunkVertexGenerator, which could apply lighting effects based on tiles that were defined to be light sources and would "spread" light to adjacent tiles (if there was empty space for light to pass through, or a tile was marked as not solid).
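The light-spreading described here can be sketched as a simple breadth-first flood fill, shown in 2D for brevity (my reconstruction of the idea, not the actual LitChunkVertexGenerator code): light starts at source tiles and drops by one level per step into each adjacent non-solid tile.

```cpp
#include <cassert>
#include <queue>
#include <vector>

// 'light' holds the light level per tile (sources pre-seeded > 0);
// 'solid' marks tiles light cannot pass through.
void SpreadLight(std::vector<int> &light, const std::vector<bool> &solid,
                 int width, int height)
{
    std::queue<int> frontier;
    for (int i = 0; i < (int)light.size(); ++i)
        if (light[i] > 0)
            frontier.push(i);                       // seed with light sources

    const int dx[] = {1, -1, 0, 0};
    const int dy[] = {0, 0, 1, -1};
    while (!frontier.empty()) {
        int i = frontier.front(); frontier.pop();
        int x = i % width, y = i / width;
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= width || ny < 0 || ny >= height)
                continue;
            int n = ny * width + nx;
            if (solid[n])
                continue;                           // blocked, no light here
            if (light[n] < light[i] - 1) {          // brighten and re-propagate
                light[n] = light[i] - 1;
                frontier.push(n);
            }
        }
    }
}
```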

The full TileMap could be rendered by simply looping through all of the TileChunks in the map and rendering their pre-generated 3D meshes (which would each just be a VBO).

The use of a 3D tilemap for the game world made integration with the physics engine easier. Scanning the world for nearby geometry to test for potential collisions was made incredibly simple because of it. Once I did solve the seemingly insurmountable task of getting the physics engine to work decently enough, it really did feel like it all came together well. I remember when I finally got it all working, I just spent a day moving around one of my test worlds, running past the same set of walls and ramps and whatever other objects again and again, marveling at the fact that it actually worked. If I had to choose one single thing in my "programming career" that I worked on to date that gave me the most satisfaction or that I was most proud of, it would be this thing. Not this project (Monster Defense), but just getting this physics engine working as well as it is now, despite whatever bugs remain. I truthfully did not even think I would end up getting it working this well when I first began working on it.
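To illustrate why the tile grid makes the collision broad phase so cheap (again a sketch of the idea, not the real code): an entity's AABB maps directly onto a small range of tile indices, so "find nearby geometry" is just a loop over that range rather than a search through arbitrary world geometry.

```cpp
#include <cassert>
#include <cmath>

// One axis shown; the real scan is a triple loop over x, y and z ranges.
struct TileRange { int min; int max; };

// Which tiles does an AABB spanning [aabbMin, aabbMax] overlap?
TileRange TilesOverlapping(float aabbMin, float aabbMax, float tileSize)
{
    TileRange r;
    r.min = (int)std::floor(aabbMin / tileSize);  // first overlapped tile
    r.max = (int)std::floor(aabbMax / tileSize);  // last overlapped tile
    return r;
}
```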

There are a bunch of other things in Monster Defense that I could talk about in detail but nothing else feels like it is super worthy of writing lots of words about. One thing I feel that is missing that probably would have been added had I continued working on it was some sort of scripting integration, probably with Lua in particular since it's fairly easy to integrate with C. I think that using it to define entity behaviours and even creation of entities (versus the previously described EntityPreset system) would probably have been the best approach. Assuming of course that I could have arrived at a decent abstraction through Lua for interacting with the entity/component system. I suspect that adding/manipulating entity components through simple maps would have been the best way and is very likely something I will explore in future projects.
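As a rough illustration of the "components as simple maps" idea (a hypothetical design, not existing Monster Defense code), a component becomes a string-keyed bag of values, which is exactly the shape a Lua table could populate without any C++ type plumbing:

```cpp
#include <cassert>
#include <map>
#include <string>

// A Lua table like { position = { x = 1.5 } } maps naturally onto this.
using Component = std::map<std::string, float>;
using Entity    = std::map<std::string, Component>;

// Safe lookup: missing components/fields just read as 0.
float GetField(const Entity &e, const std::string &comp, const std::string &key)
{
    auto c = e.find(comp);
    if (c == e.end())
        return 0.0f;
    auto f = c->second.find(key);
    return f == c->second.end() ? 0.0f : f->second;
}
```

The obvious trade-off is that every access goes through string hashing/comparison, but for script-driven entity definitions that flexibility is usually worth it.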

The last thing I think I will talk about is the graphics and math engine used in Monster Defense. There was a fair bit of inspiration taken from XNA here.

A GraphicsDevice is used to manage an OpenGL context, and allows Texture, Shader, VertexBuffer and IndexBuffer objects to be bound.

The integration between Shaders, VertexBuffers and IndexBuffers is something I was pretty proud of at the time, but am not sure how well it might hold up today (I've not really kept up to date on graphics APIs so I cannot say honestly).

VertexBuffer and IndexBuffer objects are both subclasses of BufferObject which handles the OpenGL details of VBOs or client-side buffers. Each of these two buffer subclasses had nice, easy-to-use methods for programmatically manipulating the buffer's contents, such as Move(relativeOffset), MoveTo(absoluteOffset), MoveNext(), MovePrevious(), SetCurrentPosition3(Vector3), GetCurrentPosition3(), etc. If the buffer was initially set to be backed by a VBO, using these "setter" methods would set a dirty flag, and the next time the buffer was to be rendered by GraphicsDevice it would know to upload the buffer's data via OpenGL. A lot of this convenience functionality was definitely not efficient, but damned if it wasn't convenient. For pure performance, you obviously could just pre-set the buffer data, upload it once, and then render it many times after that.
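The dirty-flag mechanism described above might look something like this (heavily simplified, and with my own names; the real class hierarchy is BufferObject with VertexBuffer/IndexBuffer subclasses): setters only touch client-side memory and mark the buffer dirty, and the upload happens lazily at render time.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

class SketchBuffer {
public:
    void SetFloat(std::size_t i, float v)
    {
        data[i] = v;
        dirty = true;                 // the GPU-side copy is now stale
    }

    // Called by the graphics device at render time. 'uploadFn' stands in
    // for the actual glBufferData call; nothing happens if the buffer is
    // unchanged since the last upload.
    template <typename F>
    void FlushIfDirty(F uploadFn)
    {
        if (dirty) {
            uploadFn(data);
            dirty = false;
        }
    }

    std::vector<float> data = std::vector<float>(16, 0.0f);
    bool dirty = false;               // starts clean; nothing to upload yet
};
```

The convenience cost is one branch per draw plus re-uploading the whole buffer on any change, which is exactly the "not efficient, but convenient" trade-off described above.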

Example usage, initializing a VertexBuffer (part of some code to set up a wireframe grid mesh).

VERTEX_ATTRIBS attribs[] = {
    VERTEX_POS_3D,   // attribute list reconstructed; truncated in the original listing
    VERTEX_COLOR,
};

points = new VertexBuffer();
points->Initialize(attribs, 2, width * 2 + 2, BUFFEROBJECT_USAGE_STATIC);

// two vertices (one line) per row of the grid
for (uint i = 0; i < height + 1; ++i)
{
    points->SetPosition3((i * 2), -(width / 2.0f), 0.0f, i - (height / 2.0f));
    points->SetColor((i * 2), 1.0f, 1.0f, 1.0f);
    points->SetPosition3((i * 2) + 1, width / 2.0f, 0.0f, i - (height / 2.0f));
    points->SetColor((i * 2) + 1, 1.0f, 1.0f, 1.0f);
}

And then rendering via GraphicsDevice:

SimpleColorShader *colorShader = graphicsDevice->GetSimpleColorShader();

graphicsDevice->Clear(0.25f, 0.5f, 1.0f, 1.0f);

// bind/unbind calls reconstructed; the original listing was truncated here
graphicsDevice->BindShader(colorShader);
graphicsDevice->BindVertexBuffer(points);
graphicsDevice->RenderLines(0, points->GetNumElements() / 2);
graphicsDevice->UnbindVertexBuffer();
graphicsDevice->UnbindShader();


You can see here a Shader is used (a preset Shader instance in this case, one of several that the GraphicsDevice provides, but custom shaders can be used too). Missing is some of the boilerplate that you might see in other OpenGL code to map vertex attributes to the shader (like the position and color attributes we previously set up above). This is handled automatically through the VertexBuffer's use of "standard vertex attributes" which Shader objects are aware of and know how to automatically bind. This has to be set up per-shader, since the actual GLES shader source code can use completely arbitrary names for its attributes. But it's still easy to do, as in the constructor of the SimpleColorShader:

SimpleColorShader::SimpleColorShader()
    : StandardShader()
{
    BOOL result = LoadCompileAndLinkInlineSources(m_vertexShaderSource, m_fragmentShaderSource);
    ASSERT(result == TRUE);

    MapAttributeToStandardAttribType("a_position", VERTEX_POS_3D);
    MapAttributeToStandardAttribType("a_color", VERTEX_COLOR);
}
Where a_position and a_color are the actual attribute names used in the GLES source code.

An additional convenience that Shader objects provided was the ability to pre-set uniform values (even before the Shader is bound). If a uniform value is set on a Shader object, it is simply cached, and the cache is flushed (uniforms set via OpenGL) when the shader is bound. This is a minor thing, but it ended up being handy in a few places where I suddenly didn't need to care about bind order. Probably not the best idea for performance reasons, but I was not (and still am not) working on anything remotely cutting-edge.
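The uniform caching could be sketched like this (my reconstruction of the idea; `apply` stands in for the actual glUniform* call made once the shader is bound):

```cpp
#include <cassert>
#include <map>
#include <string>

class UniformCacheSketch {
public:
    // May be called at any time, even while the shader is unbound:
    // no GL call is made yet, the value is just remembered.
    void SetUniform(const std::string &name, float value)
    {
        cache[name] = value;
    }

    // Called when the shader is bound; flushes every cached value
    // through the real glUniform* call and clears the cache.
    template <typename F>
    void OnBind(F apply)
    {
        for (auto &kv : cache)
            apply(kv.first, kv.second);
        cache.clear();
    }

private:
    std::map<std::string, float> cache;
};
```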

While my game framework did have some limited support for skeletal animation with 3D models, it was half-baked and only ever worked with one or two test models. I fell back to vertex interpolation for animation (via good ol' Quake 2 MD2 models) as a result of giving up on trying to extract skeletal animation data out of Autodesk FBX models via their SDK. The FBX file format feels like PDF in that it is a bit of a "kitchen sink" format and, as such, is hard to fully grasp for the uninitiated.
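MD2-style animation boils down to per-vertex linear interpolation between two keyframes; a minimal sketch (the actual MD2 loading and vertex decompression are omitted):

```cpp
#include <cassert>
#include <vector>

// Blend two keyframes' vertex positions component-wise.
// 'a' and 'b' are flat arrays of vertex components; t is in [0, 1],
// where t=0 gives frame a and t=1 gives frame b.
std::vector<float> LerpFrames(const std::vector<float> &a,
                              const std::vector<float> &b, float t)
{
    std::vector<float> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        out[i] = a[i] + (b[i] - a[i]) * t;
    return out;
}
```

It is crude next to skeletal animation (memory scales with frame count, and you can't blend independent animations per limb), but it is dead simple to implement, which is exactly why it made a good fallback.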

Final Thoughts

So, Monster Defense was of course never finished and probably won't be, at least not in its current form. It was fun to work on, but I am disappointed that it was never finished.

Technology-wise, some final notes:

  • While it was fun building everything myself from scratch, it ended up taking a ton of effort and this was probably the single biggest reason it was never finished.
  • Having said that, if I could do it over again, I would still build everything from scratch. I learnt a lot.
  • Inefficiencies galore! I could get away with it here because I was by no means pushing any boundaries, but it's still (in my opinion) important to recognize where you are taking inefficient shortcuts. Some of these have been noted above, but one thing that still bothers me is that I was lazy about memory allocations. In particular I was all too willing to just arbitrarily new() up memory whenever I needed it.
  • Organization of the code, in hindsight after not looking at it for six years, feels messy. I find myself jumping around a lot trying to trace through how things worked.
  • I had some terrible STACK_TRACE macro I used everywhere which is a holdover from the Nintendo DS origins for a lot of this code where the debugging facilities are much more crude. I don't know why I kept this around here in Monster Defense. Also, ASSERT()ing after every allocation was a bit silly (again, another holdover from the Nintendo DS).
  • I will probably get shunned or banished as a programmer for this next point: I would probably consider using global singletons for a lot of "system"-type objects. For example, my ContentManager, StateManager, EntityManager, EventManager ... almost anything with "Manager" in the name I guess. I just don't see the value in having to pass these things around when they are actually global state. I don't see the value in hiding from this fact.

In late 2013 and through 2014 I ported much of this game framework code to Java using libGDX and put together the beginnings of a worthy successor.

Monster Arena Defense

I cannot currently release this as-is because the 3D model assets used are paid assets so throwing it up on Github would not be a good idea (I could do it with the code, but you would not be able to run it). However, I did release a sample demo of the underlying engine in 2014.

In this project, I did not resolve all my concerns with this game framework as noted in this post. But I did confirm a lot of them, heh. In particular the spaghetti code feel of a lot of the entity system architecture. It felt a lot worse in this project (but there were a lot more entity types...).

Perhaps there will be a "take 3" on this project in the future. Or I may just bring it back to MS-DOS. Maybe!

DOS/4GW Port of FTE

March 31, 2019 dos code

A long time back, hidden away in a much larger post, I indicated that I wanted to port the FTE text editor to build against the Watcom toolchain so that it ran under DOS/4GW instead of CWSDPMI, which the current (old) DOS build, built with DJGPP, uses. The reason for this is that FTE includes menu options for running external tools (such as compilers, debuggers, etc), but because it runs under CWSDPMI, if you try to run most things that use DOS/4GW, they will crash. Different DOS extenders shouldn't be mixed.

I finally did this today. Hooray! Well, I'm probably the only person out there who cares about this, but still, hooray!

I've uploaded the changes to a Github project which includes a binary release for anyone else who may be interested in such a thing.

In addition to being built with the Watcom toolchain, I also took the opportunity to fix the broken / un-implemented DOS support for different-sized screen text modes, such as 80x50, which I consider mandatory for a text editor under DOS these days. The non-platform-specific bits of code were already set up to work with a variety of different screen sizes. However, the DOS-specific bits that existed in the project (which were intended for DJGPP) did not include support for different DOS text modes via interrupt 10h. A rather curious omission, since there was a non-trivial amount of other DOS VGA work that had to be done in the code to get it running under DOS in the first place by whoever did the DJGPP port way back when. Perhaps that person only used 80x25 and decided to just leave it at that. I suppose I'll never know. Meh.

Tools Development and The Need For GUI Code

December 24, 2018 dos code

As some of the "foundational" pieces of code have been coming together, I've realized that I need to get some tools created to assist me in content creation. This would be for things from asset conversion to tilemap editors.

Asset conversion tools (e.g. going from BMP/PCX to a game-engine specific format optimized for faster loading) can really just be command-line driven which makes it simple enough to create. For more involved things such as a tilemap editor, this requires some GUI code.

I made a few tools in straight DOS using QBasic back in 1998/1999 or so with my own homebrew GUI code that was pretty terrible as I recall. But it was my very first experience writing such code (and only a few years after I first learned to program), so perhaps I should not be too hard on myself. I actually want to add a "projects" section to this website in the near future where I can post that old code just for fun, but currently I don't have it in a presentable format to share for this post.

Very soon after, I moved to Visual Basic (first v4, and then v6) and of course, that made writing such tools infinitely easier. I was actually pretty happy with the tilemap editor I wrote (from scratch entirely in a weekend):

For my current DOS projects, I once again need to revisit GUI code 20 years later. There are some libraries I could use of course, but I am going the "Do It Yourself" route (a.k.a. the "Not Invented Here Syndrome" route) for fun and for the (re-)learning experience. With that in mind, I decided I wanted to pursue an Immediate Mode GUI implementation. Probably the most well known implementation of this these days is the Dear ImGui library.

I began reading up on it in more detail, having not implemented such a system before in any project I'd worked on. This video in particular is a great introduction to it (and I believe, the original video on it) if you're not too familiar with why this method of implementing a GUI might be useful and where/when. As well, I found this tutorial series to be quite useful.

Eventually I did get something working myself:

It's far from perfect, but I'm happy with how it's coming along.

while not KeyState[KeyEsc] do begin
    event := PollEvents;
    if event <> nil then UIProcessEvent(event);

    { UIEndMenu/UIEndWindow calls and the loop's closing end are my
      reconstruction; the original listing was truncated in places }
    if UIBeginMenu('File', 100) then begin
        UIMenuItem('Save As');
        UIEndMenu;
    end;
    if UIBeginMenu('Edit', 100) then begin
        UIEndMenu;
    end;
    if UIBeginMenu('Help', 100) then begin
        UIEndMenu;
    end;

    UIBeginWindow(50, 30, 250, 180, 'window!');

    if UIButton('button 1') then
        clicked := true;
    if UIButton('button 2') then
        clicked2 := true;

    UIVertScrollBar('', 20, 0, 2, value);
    UIHorizScrollBar('slider!', 0, 0, 200, value2);

    UICheckBox('checkbox', checked);
    UIListBox('list!', 0, 3, list, listScrollY, listSelection);

    if UIRadioButton('one', (radio = 1)) then radio := 1;
    if UIRadioButton('two', (radio = 2)) then radio := 2;
    if UIRadioButton('three', (radio = 3)) then radio := 3;

    UIEndWindow;
end;
Some stuff doesn't work completely yet (such as the menus). As well, I'm missing two main widget types: textboxes and dropdown lists. I'm not expecting textbox widgets to be too difficult to implement, though. In fact, the input events system I added to libDGL was put in purely because I knew I was going to need such a thing to more easily implement a textbox widget in the future!

However, I am relatively happy that things like automatic layout/positioning mostly work well, as does automatic widget ID generation. Well, except that there is an escape hatch for the latter in the form of UISetNextID, which can be used when/if needed.

The idea with pretty much all of the widgets is that the widget function (e.g. UIButton, UICheckBox, etc) takes simple properties like a label string and maybe a couple other common style- or functionality-related properties, as well as state variable(s) which are passed by pointer/reference to the widget. If the widget state changes, the variable(s) are updated immediately with the new state and the widget function will return a boolean to indicate that the state was changed. This boolean indicating that the "state was changed" is a bit context-dependent. For example, a UIButton doesn't take in any explicit state variable itself (that is managed by the UI system in the form of widget IDs being tracked for what widget the mouse is currently hovering over, etc). But the UIButton still returns true/false to indicate if it was clicked or not. A UICheckBox, which does require an explicit state variable to be passed to it, returns true/false to indicate that the state variable was changed in response to a user action. This allows the application code to easily respond to the most common user interaction events for widgets with just simple if logic. Which is really, really nice actually! The previously linked video on immediate mode GUIs explains this idea really well and why it's nice for programs which do real-time rendering.
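The standard way to make "if UIButton(...)" work is the hot/active-ID pattern from the immediate-mode GUI material linked above; a sketch in C++ (my reconstruction of the general technique, the post's actual implementation being in Turbo Pascal):

```cpp
#include <cassert>

struct UIState {
    int hot = 0;             // widget the mouse is currently over
    int active = 0;          // widget the mouse button went down on
    bool mouseDown = false;
    bool mouseOver = false;  // stand-in for a real hit test against the widget
};

// Returns true exactly once: on mouse release while still over the widget.
// This is what lets application code respond with a plain 'if'.
bool UIButton(UIState &ui, int id)
{
    bool clicked = false;
    if (ui.mouseOver)
        ui.hot = id;                                  // we're the hot widget
    if (ui.hot == id && ui.mouseDown && ui.active == 0)
        ui.active = id;                               // press started on us
    if (ui.active == id && !ui.mouseDown) {           // button released...
        if (ui.hot == id)
            clicked = true;                           // ...while still over us
        ui.active = 0;
    }
    return clicked;
}
```

Tracking `active` separately from `hot` is what gives the familiar behavior of being able to press a button, drag off it, and release without triggering a click.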

The key thing for me with this GUI code is that I do not want to create a totally generic UI framework. I only need to create something that will allow me to create some basic tools easily enough. Maybe eventually I'll want to add support for things like custom themes so that this could be used directly in an actual game project while matching the game's look and feel. Or, later on I might need more widget types. The list goes on, but I do not want to get lost in a never ending quest to get a "perfect" UI framework up and running, because well, it really would be a never ending quest and frankly, I don't need such a thing!

Right now I think I'm still going a bit too far with this by keeping support for things like tabbing between widgets. Will I actually use this feature in tools I create for myself? That remains to be seen (I suspect not, but you never know).

Some of the widgets leak implementation concerns a bit here and there which I'm not happy with. I've decided to just roll with it for now and hope that inspiration will kick in later and I'll be able to clean up these warts later on.

For example, the UIListBox widget requires two UI state variables to be provided to it. One of them, listSelection, is obviously needed (it holds the user's current selection). The other one, listScrollY, holds the list's internal scrollbar scroll position and is kind of silly, but I didn't have any clever ideas for how to efficiently and automatically manage state of internal child widgets (the UIListBox widget internally uses a UIVertScrollBar widget).

Another example of this is in the menu code (which is mostly unimplemented as I write this). UIBeginMenu takes a second parameter which is the horizontal width of the menu to be created. This is kind of unfortunate, but I'm not exactly sure as of yet how to have this automatically figured out in a way that doesn't involve some kind of deferred rendering approach. I'm currently against the deferred rendering approach as I don't want to have to keep that kind of state around in memory somewhere... but it might be the best approach as it also solves another problem with my current approach which is that overlapping sub-menus would not render correctly. I might leave deferred rendering as a "v2.0" feature, heh.

Anyway, I'm hoping to have the UI stuff finalized in a workable state (even if it's not perfect) within the next week which will allow me to move onto creating tools for myself. Additionally, this UI code will need to be ported back to C so I can use it with libDGL eventually. I've been doing it with Turbo Pascal mainly for the reasons mentioned in my last post ... I find Turbo Pascal to be quite a bit faster for prototyping and experimentation.

Turbo Pascal

October 14, 2018 dos code

Turbo Pascal is one of those development tools from the late 80's / early 90's that I had heard of but never actually used or even seen. In fact, the same went for the Pascal programming language as a whole. Until 2018.

I actually now greatly regret that my younger self in the mid 90's did not get a chance to use Turbo Pascal at that time. As I've written about before, I started out with QBasic and then moved on to C/C++, and then soon after went back to QuickBASIC 4.5 while adding assembly into the mix to do more performant graphics. The reason I ended up moving back to QuickBASIC from C/C++ was that I greatly preferred the easier/quicker edit-compile-run cycle and the simpler debugging environment (plus the inline help system in QBasic/QuickBASIC is really awesome too).

That last point is what really makes me, in 2018 looking at Turbo Pascal for the first time, think to myself: "Wow! This is what I really would've loved at that time!"

I've actually had this since early this year but only really dug into it over the summer. I originally got it after reading posts on some vintage/retro computing forums with a number of people praising Turbo Pascal as a great development tool mixing a great IDE, ease of use, inline help and advanced features (pointers, inline assembly, etc). I was intrigued and I figured that maybe it might be good to play with a bit, as I had been interested in getting into FreePascal and Lazarus a bit anyway (maybe, for various reasons which I could probably write a whole post about).

So what do I like about Turbo Pascal and why do I think my younger self would've really liked it as well? In no particular order:

  • Easy to learn language with (to me anyway) a kind of "BASIC-like" feel to it.
  • Really awesome modules/units system. This honestly feels like it was way ahead of anything else at the time. It makes code re-use between projects incredibly simple. Just copy the TPU file (compiled unit) around and reference it in your program under the uses directive to start using it. Really easy.
  • Pointers!
  • Inline assembly (though, 16-bit / 286 only). Huge deal for me, as I remember getting annoyed at debugging assembly issues in my QuickBASIC/assembly mixed language projects. The 16-bit only limitation can be worked around in a hacky way by directly writing hex opcodes into your inline assembly if needed. It amuses me that I've actually heard Turbo Pascal referred to as a "shell for assembly", referring to the fact that a lot of projects where speed really mattered would have large amounts of inline assembly everywhere. For me in 2018, if I ever work on a project that gets to that point, I'd likely just switch it over to C honestly.
  • Slightly more advanced type system than BASIC, but still quite simple. One thing I've noticed so far is that I do generally feel a bit safer writing Pascal code that uses pointers (for example) than I do with C, I think due mostly to the type system.
  • Very fast compiler! And your code is actually running compiled (even from the IDE) rather than being interpreted, as in the case of QBasic. So your code is going to be a fair bit faster than QBasic right away. One side-effect I've noticed as a result of the blazing fast compiler is that I'll often compile my code as I write it (even if I don't intend to run it quite yet) simply as a check for any syntax mistakes, etc.
  • Beginner-friendly IDE that is very fast and that allows you to immediately start writing code and running it (exactly like QBasic). Also includes debugging support that is roughly on par with QuickBASIC (but does have some extras, like inspecting register values while stepping through your inline assembly).
  • Syntax coloured highlighting in the IDE!
  • Inline help system with search and plenty of examples.
  • Run-time checks to help you debug code (which can all be turned off for additional performance).
  • The IDE runs itself under DPMI (optionally) so that your code (which always runs in real-mode) has access to all of the remaining 640k conventional memory. This is a massive improvement over QuickBASIC! I very vividly recall getting really frustrated with one of my late QuickBASIC projects which was becoming impossible to run from the IDE due to it always sucking up 200-300k of conventional memory.

I've often read from people who learnt to program as kids in the 90's that they progressed from BASIC to Pascal and then to C/C++. I can kind of see now why that progression makes sense. To me, Turbo Pascal really does feel like either a much more advanced QBasic, or a kind of C/C++ "on training wheels" (sort of).

Turbo Pascal 7 also includes some object-oriented programming features. Actually, this was introduced I think in Turbo Pascal 5 or so, but people seem to say that it wasn't until version 7 that Borland had ironed out most of the bugs. I don't see myself using any OOP features that much (if at all), for the same reasons I now stick to C. I just prefer the procedural approach.

The limitation of your code needing to run in DOS real-mode is unfortunate to me in 2018, but if anything this just enforces developing for DOS the way it really was done for most of its life... with 640k conventional memory, heh. Of note, Borland Pascal 7 (which, from what I gather, is the "professional" version of Turbo Pascal) apparently did include some ability to have your code run under DPMI and also added some 32-bit / 386 assembly support. However, I've read enough bad things about Borland's DPMI support in general that I'm not particularly interested in trying it out for myself.

For my current "retro" programming projects, I don't see myself using Turbo Pascal to tackle some of my more ambitious ideas (such as the Doom-style engine I still want to try doing), but for simple 2D graphics stuff I actually think this will be an interesting alternative to C.

The quicker edit-compile-run cycle is definitely handy. It makes prototyping or otherwise just quickly experimenting with ideas much easier. On my 486, it feels like instant compile times except maybe if I'm doing a full rebuild (which still completes in maybe two seconds). Contrast that to Watcom C where even for simple changes to a single source file, you're still waiting at least several seconds (if not longer). It makes a big difference over a day spent working on a project. I guess that is why many people who do retro programming today tend to use DOSbox or something else on their modern computers. I still refuse to go down this road though, preferring to stick with completely era-appropriate software and hardware!

"Old School" Magic: The Gathering

July 11, 2018 mtg

Over the past two years I've got back into collecting and playing Magic: The Gathering a little bit.

I was first introduced to the game by some friends in my fifth grade class in 1995 (actually, it was a grade five/six split class, I was in fifth grade though). I remember we used to actually sneak in games during class. I was really intrigued by all the cards and the artwork especially. Thinking back now, I recall that the majority of these cards were from the Fourth Edition, Chronicles and Ice Age sets, which all would have been current at that time. One card that I remember one of my friends had which sticks out the most in my mind that I thought was just the coolest creature card ever:

I mean, a 7/7 flying dragon that makes your opponent discard their entire hand when they take damage from him. Wow! I really wanted one! Of course, only years later did I realize that it was really a rather poor card from a playability perspective. 8 mana total casting cost with a 3 mana upkeep? Both costs requiring 3 different colours? Yeah, no thanks! By the time you had enough mana out on the table available for use, the game would probably be almost over... if it even got that far.

But then again, thinking back to how I remember our games going at the time... a lot of them really did go on for a long time! I certainly don't remember anyone at the time having optimized decks. Everyone I knew who played was younger (11-12 years old) and it was their parents who were buying them cards. As a result, assuming you even had enough cards to build a deck (60 cards or more), you were playing with what you had. It might have even been with all that you had. Which probably meant you were playing with some really "janky" stuff. Maybe even (*shudder*) a four or five colour deck! So, the idea of playing something like Nicol Bolas in your deck at the time didn't really seem so crazy as it does to me now. And no, none of us had dual lands, and certainly not any of the power nine cards!

Regardless, even though I really wanted a Nicol Bolas card for myself, it wouldn't be until 23 years later that I got one, heh.

Getting a specific card wasn't even the first problem for me at that time. Getting any cards was the problem. I didn't have any money. Heck, I didn't even know where to go to buy Magic cards in the first place. I lived on a farm near a small town out in the "middle of no-where." There were no stores around that sold such things (so far as I knew). Darn! As luck would have it, later on during that school year, one of my other friends who had moved away the previous year (but who I still would occasionally go and visit, spending the weekend at his house) gave me his collection of Magic cards! I didn't even know he had any, but I showed up one weekend, and noticed he had a bunch in his room, carelessly strewn about. I asked about them and he replied "Do you want them?" I was absolutely thrilled. I think it ended up being a little over 100 or so cards all told. Again, they ended up being all from current (at the time) sets. A majority of Fourth Edition, and a smattering of Fallen Empires, Ice Age and Chronicles. A lot of the cards were in poor condition, clearly having been played many times over asphalt at school during recess or lunch before my friend ultimately got bored of the game. Finally, I could play with my own cards!

My younger brother was also interested in the game after he saw these cards and I remember we would play at home. I don't think we had quite enough cards to build a deck for each of us (I seem to recall we were somewhat short on lands) so what we ended up doing was sharing the same deck. We would play as normal, but both would draw our cards from the same deck. At first, we didn't have a rulebook so we were playing from my recollection of the rules that I learnt from my friends at school... and even that wasn't so perfect (plus I don't think we were 100% correct in following the rules during our games at school anyway). A few things I recall us doing incorrectly: we allowed attacking one (or more) of your opponent's creatures directly, there was no distinction between sorceries, instants or interrupts (and I don't even think we ever played them during the opponent's turn... not sure we understood that aspect of interrupts and instants), we allowed attacking with walls, and regeneration could be played from cards long-since put into the graveyard. There's probably more I'm forgetting, but if you're familiar with the actual rules of the game, that should give you an idea of our games. Also, I do distinctly remember that, in an effort to not upset the other, we wouldn't attack at all until we had run out of cards in our shared deck. At which point it would turn into a real battle-royale! Early on we did attack earlier in the game (as soon as a creature was in play), but due to the hodge-podge of cards in our shared deck, games would often be very one-sided, and the early attacks ended up just upsetting whoever was taking a beating, so we stopped doing that. Hey, we were both young after all!

The summer after school ended that year, I remember being in the mall with my Grandmother (during a visit to her place, there was no mall within an hour's drive of my house) and her buying me a Fourth Edition starter deck (60 cards!) and a couple booster packs of Alliances at some kiosk near the food court that sold Magic cards. My brother also got some cards at some point (if I recall correctly, a Mirage starter box later that year). Our games started taking better shape, which was good because I was also playing with friends at school less and less as time went on.

I ended up staying somewhat into Magic cards into 2001/2002 or so and kept collecting here and there (in particular, I remember getting quite a lot of Mercadian Masques in 1999 and early 2000). I was in high school at the time and in 2001 I remember I discovered one of the history teachers left his room open for students to come and hang out in during lunch. There was a group of -- well, I can only describe most of them in one way -- "comic book store"-type nerds who played Magic there during lunch. I hadn't played the game with anyone other than my brother in a few years at that point, so I was excited to play against some new opponents. I "endured" it for a while, but ultimately got put off playing the game by this group. Really, just a rude bunch of players that clearly weren't there to have fun but rather, seemed to have their fun by insulting people like me who were not playing with well optimized decks and had less overall experience playing the game. So, I put the game aside for a number of years.

After finishing college in 2007, I was contacted by the friend who had originally introduced me to the game in 1995, inviting me to come to a "draft" he was organizing. I hadn't played in quite a while and, as I discovered when I went to this draft, the game had changed quite a bit in look and feel. The core rules were of course mostly the same, but the cards had taken on what I always felt was a more generic or even "sterile" look, and the artwork generally felt a lot less inspired and seemed to lack the character or "charm" that a lot of the original cards I remembered had.

But more than that, after attending a few such drafts over the next year or two, I began to realize that the sets were now largely designed with drafting in mind. There weren't really any of the fun, imbalanced, and/or just plain weird cards that you would see in the older sets. This never really felt that fun to me; it just added to the generic/sterile feeling I was getting about the game. At some point in 2008 or 2009, I declined going to further drafts, saying that I'd basically lost interest in the game.

Fast forward to 2016. I don't really remember what made me want to look up Magic again. Probably I was looking at the box in my closet that had all my old cards in it. But I started thinking about how I enjoyed the older cards much more, and wouldn't it be cool if there was some group of people out there who played strictly with these older cards? After googling a bit, I discovered that, indeed, this was actually a thing! Not an official format, mind you. But heck, that's probably a good thing anyway given my dislike of where the game has gone over the past 15-20 years.

Most people interested in this format stuck with cards from the original sets released in 1993/94, which was a bit before I started playing, but since Fourth Edition and Chronicles consisted entirely of reprints of cards from the original sets, it was all the same cards to me anyway. Great!

Looking up some cards on eBay and such ... wow, Magic cards sure are expensive! Especially the older cards! However, I proceeded forward and ultimately, much of my disposable income in 2016 went into buying older Magic cards. Eventually, I was able to piece together quite a collection of cards that fit into the much more strict Swedish rules for "Old School 93/94" Magic, which is the ruleset I had decided at the time to build for (mainly because I largely consider the much cheaper Revised Edition cards to look somewhat ugly).

Currently, I am happy that I've been able to build three separate decks for this format.

They are most certainly not the best or most optimized decks out there, and my ability to actually play the game is still rather limited due to not getting much experience at it, but even still, when I do get to play I enjoy it. This era of Magic was just a lot more fun to me, and I attribute my regaining interest in the game over the past two years to that (and only that).

Unfortunately for me, while there seems to have been a much bigger "Old School" Magic community here in Toronto years back (see here and here), it has diminished drastically since then. I currently know of only one other person in Toronto who plays and one other in Hamilton. I meet up with the person from Toronto every other month or so for some games and we have fun, but even still, it doesn't scratch my itch to play a bit more.

Thankfully, this "lack of players" problem seems to be common, which is understandable for what is a very niche (and, especially now in 2018, prohibitively expensive) format of the game. So people in the online community from around the world have started playing games over Skype and other webcam-enabled methods of communication. I've only played one game this way so far, and I wasn't sure what to expect exactly (I was imagining lots of connectivity and audio issues, as that's how almost every Google Hangouts session I've ever been in has gone), but it went better than I expected and I'm looking forward to playing more this way! It's nice that in 2018 there exists another way to connect all these people who enjoy this particular format of the game.

Recently, I figured that since my introduction to the game was largely via the Fourth Edition set, originally released 23 years ago in 1995, I would treat myself to some sealed old stock of this set. Buying sealed old stock of Magic and opening it is guaranteed to lose you money, so one cannot treat it as an investment, but rather as an indulgence in nostalgia.

I suppose it's important to point out that, with my goal of building my "old school" decks within the boundaries of the stricter Swedish rules, using Fourth Edition cards in my decks is not possible. Unless I decide to follow the more lenient Eternal Central rules, which I may do at some point given the way that prices are going up recently... At any rate, I bought this just for fun, nothing else!

One of the two-player "gift boxes" for Fourth Edition, containing two 60-card decks, a rule book (a slightly bigger one than you'd ordinarily get with a starter deck box), and a (in my opinion) kind of nice flannel bag with glass counters. There was also a mostly equivalent gift box set for Revised Edition, but it's far more expensive due to the possibility of it containing dual lands (which are expensive cards).

The bag holding the glass counters had broken open on its own somehow over the past 23 years and the counters were scattered in the box when I opened it, but no harm was done. It's kind of funny seeing the mail-in response cards. A part of me wants to try sending one in, but I think I'll just leave it here in the box for completeness. The little black flannel bag to hold the counters is a bit nicer than I expected. Not super great quality or anything, but after seeing it now, I think I'm definitely going to use it with these glass counters for all my games going forward. As I understand it, the counters were intended to be used to track the players' life totals during a game. However, nowadays there are mobile apps that do this task just as well (if not better). Instead, using these to track tokens and counters as needed for cards during a game seems a much better use to me.

But let's get on to the sealed decks. As per the description on the back of the box shown above, "this box contains everything two people need to play", so what will these decks actually look like?

Heh. So, if you've ever opened a Magic starter deck box (60 cards), then you'll instantly know that despite the fact that these two "decks" were packaged differently in a "two player" set ... this gift box just contains two normal starter decks. That is, 60 randomly assorted cards with lands, and the usual amount of rares, uncommons and commons. They're not specially prepared into anything even remotely resembling what one could consider to be a playable deck.

Well, actually, now let me think about that, heh. If I think back to how I used to play this game with my brother back in the mid-90's (described above in this post), then for me these two "decks" here are actually quite reminiscent of how we played! Lots of random cards with no theme or strategy. Just a "play with what you have" kind of feel to it. Now, if you were a player who had money, was not a young kid just getting into the game, and had the knowledge to construct an optimized deck, then no, these two decks most certainly would not be "playable" to you.

Aside from all of that, it brought a big smile to my face shuffling through these cards. It's always fun to see a Craw Wurm. Who doesn't like big creatures? And 6/4 was pretty damn big! With the way my brother and I played back in the day, Howl from Beyond was a very strong card, given that we typically left our attacks until the end of the game (at which point you had a lot of lands out and could hugely buff one of your attackers... and back then we only had one Howl from Beyond, so it often was a game winner). I remember we both never really thought much of Erg Raiders ("why would you want to play a card that hurts you every turn you don't attack with it!?"). Always nice to see a Lightning Bolt, of course. Terror was a card we also had only one or two of back then, and it was something we loved drawing and using to instantly kill something of the other's. Cards like Holy Armor and Firebreathing were also quite sought-after for us during games, as anything that buffed your creatures was good for our typical end-of-game battle-royale-style attacks. For me now in 2018, seeing Greed, Hypnotic Specter, Fellwar Stone, Strip Mine, Millstone, and Power Surge, amongst others, is also quite nice.

Fourth Edition and, to a slightly lesser extent, Ice Age will always remain my favourite sets of Magic. Not just because they're what I started playing with, but also because I always thought the cards looked their best (a highly subjective opinion, of course) at this early point in the game's life, while still retaining the majority of the original cards and artwork (in the case of Fourth Edition, anyway). Chronicles also continues that, though with a smaller pool of cards. Revised Edition, as I mentioned earlier, looked rather ugly to me. Alpha, Beta, and Unlimited cards look rather nice (I prefer Beta to the more rounded corners of Alpha), but to me there was always a certain ... crudeness (?)... to them. I'm not sure that's really the right word, honestly. Perhaps it is, but I feel someone will read that and get an exaggerated impression of what I mean. They were of course the earliest editions (as evidenced by the names "alpha" and "beta", heh), and they did always have a tiny bit of an unpolished feel to me. However, that feeling could largely be because I didn't see these editions until after I was already accustomed to Fourth Edition, Chronicles, and beyond. I can see how these simpler-looking cards from the earliest editions would be more appealing to some. And to be clear, I am not saying I dislike them by any means. Quite the contrary, as I covet my existing collection of Alpha/Beta/Unlimited cards!

(Just ignore the blatantly obvious centering issue on the Fourth Edition card on the bottom right, heh. That kind of thing happens in any edition.)

I suppose this all just helps to demonstrate how our early encounters with things colour our perceptions of them later on.