> Ogre is not a game engine. Unreal, Unity, CryEngine, Frostbite are game engines.
Including the editor tools and asset pipeline in the term "game engine" only really became fashionable after Unity became so incredibly popular. From that moment on, Unity and 'game engine' became synonymous, and everybody else (including Unreal) copied the Unity workflow model. Most of the team works in the integrated editor tool in Unity most of the time, so the editor tool has become synonymous with "engine", but it is really the "factory line". But this integrated game development model isn't the only possible model just because it has become so popular.
It's just as possible to create a mainly "programmer driven" game engine where tools are only used to produce the asset data, but the actual game is fully built in code (IMHO it would be much less confusing if the term 'engine' were only used for the runtime parts that actually 'drive' the game).
That's not the point. You've got code responsible for audio, asset loading and streaming (which tends to be ad-hoc in "engines" that focus on rendering), UI, state loading and saving, visibility query DBs, nav meshes, and a lot of system and gameplay logic further divided into individual modules (character management, damage management, etc.). Yes, an editor is a must these days (and has been for over 20 years), but that's not the point DeathArrow was making.
What if your game embeds all assets in the executable as runtime structures instead of loading files? What if savegames are just checkpoints instead of serialized game state? What if you render so little data that you don't need complex visibility checks, etc etc etc...
Simple 2D games don't need all the runtime components that a triple-A game needs, yet a game engine for 2D games is just as much a game engine as an engine used for triple-A games.
This is a good time to point out we're really arguing over the definition of a word here, and can probably agree on a lot of facts and observations, even if we don't agree on what the definition of "game engine" should be.
Exactly, the deeper you dig, the more the "engine" just dissolves into lines of code. Where does the engine's HAL end and where do operating system services begin? Is libcurl considered part of the engine? Maybe if it's linked into the engine code? But what if the curl libs are part of the OS installation?
And on the opposite end, if the Unity editor is considered part of the engine, why not Visual Studio, Photoshop and Maya/3dsMax/Blender too?
In the end, the engine is just that fuzzy layer between the high level gameplay code and the operating system (going with my own preference that "engine" should only be used for the runtime parts).
Engines are essentially loops. There are many examples of engines in software and hardware. Processors, virtual machines, emulators: they are execution engines looping over instructions that they decode and execute. Game engines are the same: they loop over user input, logic, and audiovisual output. Every game will have this loop deep inside, no matter how it was programmed. It is essential to their operation.
Modern game engines are generalizations of that loop. People took the common functionality from successful games and separated it from the game-specific code.
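That generalized loop can be sketched in a few lines. This is a hedged illustration, not any engine's actual API: `World`, `update`, and `run` are placeholder names, and the fixed-timestep accumulator shown here is just one common way of structuring the loop:

```cpp
#include <cassert>

// Minimal fixed-timestep game loop: logic advances in constant
// increments, while rendering would happen once per outer iteration.
struct World {
    double time = 0.0;  // accumulated simulated time, in seconds
    int ticks = 0;      // number of logic updates that have run
};

// The game/logic step ("decode and execute" for a game engine).
void update(World& w, double dt) {
    w.time += dt;
    w.ticks += 1;
}

// Run the loop for a fixed number of frames, each frame delivering
// `frame_dt` seconds of elapsed time; logic runs in `tick_dt` steps.
World run(int frames, double frame_dt, double tick_dt) {
    World w;
    double accumulator = 0.0;
    for (int f = 0; f < frames; ++f) {
        // In a real loop this is where input is polled and the
        // elapsed frame time is measured; here it's simulated.
        accumulator += frame_dt;
        while (accumulator >= tick_dt) {  // fixed-size logic steps
            update(w, tick_dt);
            accumulator -= tick_dt;
        }
        // render(w) would go here, once per frame
    }
    return w;
}
```

For example, 60 frames at 1/60 s per frame with 1/120 s logic ticks runs exactly two logic updates per frame.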
It really depends on the games you are making. Not every game needs nav meshes for example.
For things like audio and asset loading, you can get really far building on top of a library like SDL2 or GLFW as a cross-platform system interface, and using some single-header libraries for things like loading models or playing audio samples. There are also great physics libraries out there like Chipmunk or Bullet which neatly encapsulate most of the "hard stuff" in building a game engine.
For something like a 2D platformer for example, it's perfectly reasonable and easier than ever for a solo developer to take on building a small custom engine.
Honestly, Unity is now mainstream, but I hate it. It is not the game engine, it is a game engine. One of many ways of doing things, and very far from what I find ideal for my workflows.
In many ways game development seems decades behind other types of software development in terms of workflows. Virtually every other discipline in CS has moved towards loosely-coupled, composable components, but game development is still mostly happening against proprietary monoliths.
Is it really because game development is "decades behind", or just because games are among the few pieces of software still developed against hard performance constraints?
Loosely-coupled composable components may be great for producing software quickly, but the approach is basically antithetical to performance.
I don't think that's true. If anything a tightly-coupled monolith can hurt performance.
For instance have you ever read through the movement code in Unreal? There are a million branching cases for every kind of movement: on particular slopes, swimming, flying etc. If your game doesn't have underwater motion, you're still executing those branches just because other games might need it.
General purpose game engines are by necessity not optimized for performance. They're optimized for being able to support every arbitrary type of game.
Many game devs, myself included, would argue that loosely coupled code makes for messier and less performant code in most situations. It certainly becomes harder to reason about code once the execution flows through dozens of different functions (often in different files) rather than through a single monolithic function. This is somewhat obvious when you think about it, yet people still insist on breaking long but perfectly readable functions into tons of tiny parts for some reason.
In many game dev circles there has also been a strong push towards “Data Oriented Design” where the focus is on manipulating the memory as directly as possible, rather than creating abstractions. See this talk by Mike Acton: https://youtu.be/rX0ItVEVjHc
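To make the "data oriented" point concrete, here is a hedged sketch (my own illustration, not taken from the talk) contrasting the familiar array-of-structs layout with the struct-of-arrays layout DOD tends to favor, where each system streams only the fields it actually touches:

```cpp
#include <cstddef>
#include <vector>

// Array-of-structs: updating positions drags velocity, health, etc.
// through the cache even though only position/velocity are needed.
struct Entity {
    float x, y;
    float vx, vy;
    int   health;
};

// Struct-of-arrays: each field is stored contiguously, so the
// movement system reads and writes exactly the data it needs.
struct Entities {
    std::vector<float> x, y, vx, vy;
    std::vector<int>   health;

    // The movement "system": a tight loop over two parallel arrays.
    void integrate(float dt) {
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += vx[i] * dt;
            y[i] += vy[i] * dt;
        }
    }
};
```

The DOD argument is that the second layout matches how the hardware actually moves memory, rather than how an object-oriented abstraction groups it.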
A general purpose game engine is more like a web browser than other software projects. And perhaps one of the few projects outside of an OS with more complexity. An engine is intended to be a place to run arbitrary software, across multiple platforms and also an environment to build it on. There is still a whole bunch of reusable components built on top as there is with the web.
It's an imperfect analogy. You could argue there's nothing less tightly coupled than web dev: there you choose whatever tools you want, as long as you can get it into HTML/CSS/JS at the end of the day.
With engine-based gamedev, you buy into a monolith and that will dictate basically everything about how the project has to be built.
Standards are lovely if everyone follows and supports them but even the three main web engines don’t. And even when they’re similar enough there are significant differences in important characteristics like JS performance.
Likewise, one of the things that's most fiddly and annoying in game engines is dealing with implementations of things like OpenGL that are based on standards but not actually implemented to spec across OSes and hardware.
I agree on the dangers of relying on closed-source, proprietary software, but if we wanted to avoid that we could just point to Godot as our monolithic example du jour instead.
Web dev is not tightly coupled, but browser dev is, and when you do web dev you're essentially buying into a set of monoliths that have finally after 30 years come together by committee just enough so that you only have to conform to one single way of doing web dev. Javascript has quite seriously sucked for a long time and only recently got a bit better, and HTML and CSS are still trash, so don't talk to me about "dictate basically everything" as if that's not the case there, too.
Unity is the equivalent of a browser, not an app or even app framework. It just lives in a world where the competitors haven't done the committee thing to ensure interop (and never will).
Web is a standards-based, implementation independent platform. That is completely, categorically different from Unity, which is a proprietary tool for making applications.
Unity isn't the browser, it's web-flow or square-space. Except it doesn't give you artifacts which are interoperable with other tools at the end.
Game engines inevitably become monoliths because you need that level of integration between components. If you examine a modern game, you'll generally find about 3 different layers in the runtime:
- individual subsystems: aside from things like animation and graphics, many engines use externally developed libraries for subsystems (e.g. Detour for navigation or Bullet for physics)
- integration layer: this comes across as mundane glue between the game code and the subsystems, but it is much more than that. This layer takes care of a lot of things. It is often responsible for loading the right resources at the right time across all subsystems; it makes triggers work (a physics collision handler triggering events and state changes in other subsystems); it lets animation timelines act across subsystems (triggering sounds, starting animations, altering object states); and much more. This layer is also ultimately responsible for keeping the real-time guarantees across all subsystems, with all the tweaking that entails.
- game logic using the facilities provided by the integration layer
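The trigger plumbing described in the integration layer is often built on something like a simple event bus. This is a hedged, minimal sketch (the names `EventBus`, `subscribe`, `publish` are illustrative, not any engine's API) of how a physics collision can fan out to audio, animation, and gameplay without those subsystems knowing about each other:

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// Minimal event bus: subsystems subscribe to named events, and the
// integration layer fans out a trigger (e.g. a physics collision)
// to every subscriber without coupling the subsystems together.
class EventBus {
public:
    using Handler = std::function<void(const std::string& payload)>;

    // Register a handler for an event name (e.g. "collision").
    void subscribe(const std::string& event, Handler h) {
        handlers_[event].push_back(std::move(h));
    }

    // Deliver the payload to every handler registered for the event.
    void publish(const std::string& event, const std::string& payload) {
        for (auto& h : handlers_[event]) h(payload);
    }

private:
    std::unordered_map<std::string, std::vector<Handler>> handlers_;
};
```

In this model the physics subsystem only calls `publish("collision", ...)`; the audio and gameplay subsystems decide independently what to do with it.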
When you look at the evolution of game engines, the first ones were monoliths that had to do everything themselves because no reusable components existed (e.g. Doom, Quake). A few years later, reusable components started to appear (mostly physics and audio engines IIRC) and for a short time there were a lot of mostly proprietary engines integrating these components. But with games becoming more complex and more demanding, these integration layers also became more and more complex and became a huge investment. That's when the well integrated monoliths won.
I disagree. I don't think there's anything inherently "tightly coupled" about engines, nor are the systems as tightly coupled as you describe.
In modern engines, the allocation and management of resources is often delegated to an individual subsystem. The fact that there are no famous libraries for it (as famous as Bullet, Detour or BGFX) is no evidence that it isn't decoupled in practice; it's just not a very sexy area. And even in commercial engines, this subsystem is not as smart (or as complex) as one would wish.
The triggering parts you mention are handled by Components, in most popular engines. Even without using ECS or something fancy, Unity-style components are already as decoupled as it gets. Sure one component might need to know about others, but that's the nature of the programming and it's already pretty decoupled.
Even the editors like Unity are as decoupled as it gets, using reflection instead of knowing the internals of the Components in the integration layer.
The fact that most engines only come in big monolithic packages is just a reflection of current development culture, with a lack of collaboration between engine writers and a lack of standard patterns or formats. Virtually every standard present in video game engines (models, video, audio, code, serialisation, APIs) comes from the outside. It's a young field.
But I could perfectly envision a reality where the popular libraries offer components (or whatever) ready to be included into third-party editors, for example. Imagine VST plugins in a DAW, for example. Maybe even a standard map/level format too.
But this is not where the money is: Unity and Unreal breaking themselves up and creating an open ecosystem of components would be amazing, but it would also open the floodgates for competition. Their "strength" is in being a single package that looks monolithic from the outside. With a plugin system, one could create a new editor without creating the other parts, or a new renderer without creating an editor. I could see Godot doing it, maybe, but that's probably not a priority.
For instance, renderers (esp. back-ends) and physics engines are already largely subsystems which are fairly separate from the rest of the game logic.
I can imagine a world where you start with a package manager, and maybe you choose one overarching framework/ecosystem which will serve as a middleware for combining nav meshes and animations etc. in the same world space, handling events etc. the same way you might choose a framework for building a web-server from several popular options.
There's no reason you need an extremely heavy-weight closed-source black-box system to serve as the foundational layer.
This sounds nice on paper. There's just one problem: you mention one overarching framework. That's the game engine. And no matter how you slice it, it's going to be a beast.
Not exactly - I think you can have one of many frameworks serving that glue/structure role, and not all games would need it.
I also think you're over-estimating how much of a "beast" you need as a minimal structure for building a game around. I have built a lot of hobby games, and if you're not trying to make an AAA game, you can get really far with a simple game loop, and a few single-header libraries for things like audio and asset loading.
That would be one of the advantages of a more modular approach: you could choose a right-size solution rather than working from the assumption that you need this hugely complex monolith to build off of, and potentially not needing a lot of that complexity.
> I also think you're over-estimating how much of a "beast" you need as a minimal structure for building a game around. I have built a lot of hobby games, and if you're not trying to make an AAA game, you can get really far with a simple game loop
It’s an organization/people problem. Eventually, if your tiny hobby game engine gets big enough, people start trying to use it to make the equivalent of AAA games. Then those users (or your investors, if you’ve gone that route like Unity did) start pushing you to support those edge cases.
Eventually your elegant modular code is dwarfed by all the edge-case handling that got crammed into the framework, either because it was so performance critical that it had to live there or (the more likely case) because it was much faster to jam it in than to spend time thinking about how best to architect it.
Idk I think you are describing a way a modular approach can fail, but I'm fairly confident you could find a way to modularize game development to a much greater degree than is currently achieved.
I think you could, but I just don’t know how long it would last. The real benefit of microservices in most cases is that the network acts as an externally enforced boundary layer.
Most of the time there’s no reason you need to pay the overhead of network calls to enforce your boundaries. Yet time and again you see companies willing to pay the microservices tax because it’s just so hard organizationally to enforce modular boundaries over time.
You can have almost all of those benefits without using network calls as your boundary. Nearly every benefit you listed is a consequence of having enforced boundaries, not of microservices specifically.
Microservices do provide technical benefits for a small minority of companies, but for the rest of us it’s almost entirely a solution to organizational/human problems.
An npm like ecosystem sounds absolutely horrible. It’s also not what I thought you were talking about.
Many (most?) software engineers cringe at the state of the npm ecosystem, even if they hold their nose and participate in it. I definitely wouldn’t consider that an improvement to the state of game dev software engineering practices.
I think there are some problems with package managers, and with the culture around NPM in particular, but I think almost everyone would agree that simple and standardized code reuse has been a massive boon to software engineering as a whole.
Do you dislike cargo as well or just npm for some reason?
It's more a question about degree. I don't mind an app having a few well vetted external dependencies. When it becomes common and accepted for a small application to contain thousands of 3rd party dependencies, I think we've gone way too far.
I've worked in languages that had large standard libraries where it was common to only include a few large commercial external dependencies, and I've worked with javascript and ruby on the other end of the spectrum.
I don't think one style has a clear productivity advantage over the other. Other than maybe at the very lowest levels of beginning software engineers (even then I'm unsure because of the decision paralysis common for beginners in these ecosystems).
As to the original point, however, Unity and other engines already have package managers and asset/code stores. What you're essentially asking for is for Unity to remove many of the features they already include and pull them out into packages. Now you're running up against many of the organizational issues I've already talked about. Not saying it's impossible, in my experience it's just not likely to work out long term.
> Virtually every other discipline in CS has moved towards loosely-coupled, composable components, but game development is still mostly happening against proprietary monoliths.
I'd argue that's because game development is not a CS discipline. Game tech / engines are, but games need more man-hours in artistic disciplines like modeling, animation, level design, storytelling etc. than they need for CS.
There is a lot of 'game middleware' out there for the CS part of the equation. Game engines themselves are not monoliths, using 20+ middleware components is common.
The game design part is done using a monolithic interface to the technology (the engine), but that's because that job is not 'programming'.
There's artists building the game experience and CS people building the game technology.
The latter do use and reuse smaller software components to build the level editors, scripting interfaces, particle physics, path-finding, illumination, etc. The engine. This job is very much like other fields of software development, not actually behind.
The former might write code (e.g. scripting), but they are generally doing so in a restricted 'monolithic' environment provided to make their job easier. They do not have to build the technology; they can just direct it, if that makes sense. Their job isn't typical software development, it's more content development, so it seems a bit alien, "behind", from a software developer's perspective.
Game Design and programming are two completely different sets of abilities.
Programmers can design a game, and game designers can do some programming, of course. But in modern days, the core way that both professionals interact with engine editors is completely different.
Game Design in general is closer to art or being director than to programming.
I like to call the Unity approach a "Photoshop for games" (which can be a good or a bad thing, depending on perspective and requirements).
It's essentially a game-maker UI tool with some support for scripting/coding. The runtime that's dangling off the end of the asset pipeline and editor is much less important (even if this hurts the pride of hardcore engine coders) ;)
PS: a standalone "bring your own engine" hackable Unity editor would be great, but I guess that doesn't quite fit into Unity's business model
> PS: a standalone "bring your own engine" hackable Unity editor would be great, but I guess that doesn't quite fit into Unity's business model
I’ve been thinking about this a lot as I want to write more standalone games with minimalist runtimes but also don’t want to give up the QoL you get with a decent scene editor with runtime inspection and “fiddling”.
The most exciting thing happening in game development is the move towards ECS (Entity Component System). Hopefully, with games like Overwatch (and other big names) using it, it'll see an uptick in usage.
Unity is reworking their systems to use it, calling it "DOTS" (https://unity.com/dots), and this wouldn't be a HN post without mentioning Rust, so there is also bevy (https://bevyengine.org/), which is great.
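For readers unfamiliar with the pattern: in an ECS, entities are just ids, components are plain data attached to those ids, and "systems" are loops over the components they care about. This is a hedged toy sketch (assuming nothing about DOTS or bevy internals; real implementations use packed arrays rather than hash maps for cache efficiency):

```cpp
#include <cstdint>
#include <unordered_map>

// An entity is nothing but an id.
using Entity = std::uint32_t;

// Components are plain data, with no behavior attached.
struct Position { float x, y; };
struct Velocity { float dx, dy; };

// Toy component storage: one map per component type.
struct Registry {
    std::unordered_map<Entity, Position> positions;
    std::unordered_map<Entity, Velocity> velocities;
};

// A "system" is just a function iterating over the entities that
// have all the components it needs (here: Velocity and Position).
void movement_system(Registry& r, float dt) {
    for (auto& [entity, vel] : r.velocities) {
        auto it = r.positions.find(entity);
        if (it != r.positions.end()) {
            it->second.x += vel.dx * dt;
            it->second.y += vel.dy * dt;
        }
    }
}
```

The appeal is that adding a behavior means adding a component type and a system, rather than growing an inheritance hierarchy.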
ECS (or what's called "ECS" nowadays) is mainly useful if you need to deal with thousands of similar data items each frame. Not many games need that outside of specialized systems like particle systems.
What about a monolith ensures better performance optimization?
And won't general-purpose engines pay a penalty relative to single-purpose engines since they have to optimize for the "average case" game rather than exactly one target?
> And won't general-purpose engines pay a penalty relative to single-purpose engines since they have to optimize for the "average case" game rather than exactly one target?
Theoretically, yes. However so many man hours have been put into optimizing engines like Unreal that in reality the answer is no.
I don't agree with this. I mean, UE renderer is the subject of countless hours of effort and optimization. But for instance the startup time (or build time for that matter) of a UE game is quite long and it is fairly easy to beat in a bespoke engine.
That's one of the reasons looser coupling would be great. I would love to be able to use the UE renderer in the context of a different engine.