TL;DR -> .NET 8 language support is now available for MonoGame, opening up a whole new world of goodness and speed for games.
*Note: the information below is for developers who want access to the cutting edge, as it requires the development source (with the exception of upgrading your game itself to .NET 8).
The full release of the .NET 8 MonoGame support will be included with the 3.9 release, coming soon.
MonoGame may have seemed stagnant or unmoving in the past, mainly because a group of developers were working unpaid on an open-source framework, meaning the focus had to be on whatever enabled those developers to make money in their own MonoGame-based projects. But thanks to recent investments by ReLogic and our awesome MonoGame community, the MonoGame Foundation was born and more significant investments could be made.
Granted, the MonoGame Foundation board (the majority of the core developers) are still NOT getting paid, but there is a renewed focus and understanding of what is needed to make MonoGame (and its predecessor XNA) great again. (not to say it has not always been great, of course)
So on to today’s news (a few weeks late): one of the first items brought up at the MonoGame Foundation board meetings has now been completed, the upgrade from .NET 6 to .NET 8 for the public version of MonoGame. You can read the nitty-gritty of the change in the PR for the update, and this article will help explain the rest.
It may also be interesting to note that the work was NOT done by a Foundation board member, but by one of the MonoGame community, none other than Aristurtle. Make sure to give them a virtual clap on the back when you see them on Discord!
Initially, not much. Apart from inheriting .NET 8’s speed and compilation improvements, nothing has really changed. What you get for free by being on the latest version of .NET is just that: free stuff.
Of note, one developer noticed a 1.5x performance boost simply by upgrading to .NET 8; everything ran smoothly without any tricks or fixes. Just by changing a number!
As they put it:
The .NET 8 upgrade of MonoGame was really worth it, it is a good release.
What this does enable, however, is for MonoGame to start utilizing some of the additional features of the latest and greatest .NET framework, namely:
While it will take time to fully realize some of these benefits in the core of the MonoGame library, they initially provide advantages in the Content Pipeline and in writing some really cool Content Pipeline extensions, which are a fantastic way to fully empower the content in any MonoGame project.
But you are now free to fully utilize any and all improvements from the .NET 8 SDK without limitation (other than making sure it works on your intended platform).
One small caveat to the announcement is that the .NET 8 upgrade is ONLY for the public version of MonoGame. For consoles and other private areas (due to the licensing enforced by partners), the team is working hard in this release to get those updated too, but that will come later.
Now, one misconception to clear up at this point: you can TODAY, with the release version of MonoGame, build a .NET 8 executable for your game and use .NET 8 features in your project. But the MonoGame libraries are still .NET 6 and limited to the .NET 6 feature set, so any new functionality can only be used in your own code, not in the base of the MonoGame Framework.
You still get some of the performance gains in your .NET project for the code you write in your game, so it is worth doing!
The developer versions of the MonoGame packages are currently published on GitHub using GitHub’s own NuGet packaging service, which can be found at https://github.com/orgs/MonoGame/packages
You can download each package manually from there if you wish, but it is better to do it directly in your project. To do this, however, you will also need a Personal Access Token so your client can successfully authenticate with GitHub to access the packages.
Although the packages are public, like those on the official NuGet servers, they are held behind GitHub’s authentication, which requires a user account to access.
To authenticate, you need:
To create a Personal Access token, simply:
Click on the Generate new token button, select Generate new token (classic)
GitHub Packages only support authentication using a personal access token (classic). For more information, see “Managing your personal access tokens.”
As shown above, give the token a recognizable name, select the read:packages scope (which only allows this token to read packages, nothing else) and finally set the expiration date. (For read tokens like this, I usually set the expiration to “Never expire”, although GitHub will warn you if you do.)
Your key will need to be regenerated if:
- You forget it; simply generate a new one and update it wherever you used it, whether for authentication, GitHub Actions or elsewhere.
- The key expires and you need a fresh one.
GitHub will NEVER ask you for a key and will not send you an email requesting it, so NEVER share it and NEVER publish it to a GitHub repo (even a private one). Use repository secrets for anything stored online.
Now armed with your key, you can use this as your password for authenticating with GitHub to access packages stored on GitHub for repositories your account can access.
Starting with Visual Studio, as it is the simplest to do, upgrading MonoGame to .NET 8 is as simple as adding access to the GitHub NuGet package source and updating your packages.
Right-click your Project file (not the Solution file) and select Properties, then change the Target Framework to .NET 8.0, as shown below:
Once you have entered the details, click on the “Update” button to save the changes and you should see the updated screen below:
Now just click OK and select the new Package Source in the drop-down (if it is not selected already)
The first time you access the MonoGame developer NuGet source on GitHub, you will be asked for your authentication credentials. Simply enter your GitHub Username (or email) and your Personal Access Token (for the password) that you generated earlier to progress.
From here you should simply be able to select the installed packages and update them to the latest without issue. Congrats you are now using the .NET 8 version of the MonoGame Framework!
Next Step, jump to this section to also update the MonoGame tools, e.g. the MGCB content tool.
For Visual Studio Code, the flow is a little more manual and also a bit trickier, as we no longer have a UI in which to make changes, so we need to apply the updates by hand.
Change the TargetFramework value from net6.0 to net8.0. (yay your project will now build for .NET 8)
<TargetFramework>net8.0</TargetFramework>
Next, to access the GitHub packages, we need to authenticate with GitHub. Open a new Terminal window in VSCode (“Terminal -> New Terminal” or “Ctrl+Shift+’”) and then type the following:
dotnet nuget add source --username <your GitHub username> --password <your GitHub PAT> --store-password-in-clear-text --name MonoGame "https://nuget.pkg.github.com/MonoGame/index.json"
Make sure to replace the placeholders with your GitHub credentials: your username and the Personal Access Token you generated.
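If you prefer to configure the source by hand (or want to check what the command produced), the result lives in a nuget.config file. Here is a minimal sketch, assuming the source name MonoGame used above; the username and PAT values are placeholders you must fill in:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- The MonoGame developer feed hosted on GitHub Packages -->
    <add key="MonoGame" value="https://nuget.pkg.github.com/MonoGame/index.json" />
  </packageSources>
  <packageSourceCredentials>
    <!-- Element name must match the source key above -->
    <MonoGame>
      <add key="Username" value="YOUR_GITHUB_USERNAME" />
      <!-- On Linux/macOS the PAT is stored in clear text -->
      <add key="ClearTextPassword" value="YOUR_GITHUB_PAT" />
    </MonoGame>
  </packageSourceCredentials>
</configuration>
```

Be careful not to commit a nuget.config containing your PAT to source control; a user-level nuget.config outside the repository is the safer home for the credentials.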
With the authentication in place, enter the following command for each package you have installed:
dotnet add package MonoGame.Content.Builder.Task --version 3.8.1.534-develop
To check the correct version to use for the packages you want to install, visit the Packages list on the GitHub repository and click on each package to see all the versions for the package and even the command-line command, as shown above.
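For reference, these package updates end up as plain PackageReference entries in your csproj. A sketch of what the relevant ItemGroup might look like after updating (the package names and the 3.8.1.534-develop version are the ones used in this article; your project may reference a different set):

```xml
<ItemGroup>
  <!-- Development builds pulled from the MonoGame GitHub package feed -->
  <PackageReference Include="MonoGame.Framework.DesktopGL" Version="3.8.1.534-develop" />
  <PackageReference Include="MonoGame.Content.Builder.Task" Version="3.8.1.534-develop" />
</ItemGroup>
```

You can also edit these version numbers directly in the csproj and run a restore, which has the same effect as the commands above.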
If you now check the Solution Explorer tab on the left, you will see the dependency packages updated to the development versions of MonoGame (assuming your nuget.config is configured with the right credentials).
You have updated your project to .NET 8 as well as your MonoGame Framework dependencies. So why, when you do a build, does it still use the old .NET 6 versions of the MonoGame tools (like MGCB)?
The simple answer is because you are still telling it to.
To update your project to use the newer .NET 8 version of the tools, you also need to update the dotnet-tools.json configuration located in your project’s .config folder.
Simply edit the file in Visual Studio or VSCode and swap the older 3.8.1.303 version number for the newer version you installed with your packages (3.8.1.534-develop at the time of writing), as shown below:
{
  "version": 1,
  "isRoot": true,
  "tools": {
    "dotnet-mgcb": {
      "version": "3.8.1.534-develop",
      "commands": [
        "mgcb"
      ]
    },
    "dotnet-mgcb-editor": {
      "version": "3.8.1.534-develop",
      "commands": [
        "mgcb-editor"
      ]
    },
    "dotnet-mgcb-editor-linux": {
      "version": "3.8.1.534-develop",
      "commands": [
        "mgcb-editor-linux"
      ]
    },
    "dotnet-mgcb-editor-windows": {
      "version": "3.8.1.534-develop",
      "commands": [
        "mgcb-editor-windows"
      ]
    },
    "dotnet-mgcb-editor-mac": {
      "version": "3.8.1.534-develop",
      "commands": [
        "mgcb-editor-mac"
      ]
    }
  }
}
Done. The next time you perform a dotnet restore or build your project, the latest version of the tools will be downloaded and made available (they will not be available until you do your FIRST build after making the update).
*Note: this is still a manual task. Every time you update the package dependencies, you will also need to update the tool versions.
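As an alternative to hand-editing the file, the dotnet local-tool commands can update the manifest for you. A sketch, assuming the tool names and version from the manifest above, run from the folder containing the .config directory:

```shell
# Updates the versions recorded in .config/dotnet-tools.json in place
dotnet tool update dotnet-mgcb --version 3.8.1.534-develop
dotnet tool update dotnet-mgcb-editor --version 3.8.1.534-develop
```

This still has to be repeated per tool entry, but it avoids typos in the JSON.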
To the beginner, this upgrade will not mean much at all, except that you will be running on the latest supported version of .NET (as .NET 6 is now in maintenance only), which means you get all the latest fixes and updates to the backbone of your project, as well as the modest speed improvements from the latest and greatest .NET.
For the more adventurous, newer C# features and .NET 8-specific functionality are now within your project’s reach; so much so that I couldn’t fit them all into a single article :D
But you can check out the .NET 8 docs for even more details.
I wish you well on your continuing adventure with MonoGame and make sure to keep an eye on what the Foundation will promise next (but no Galactic Empire falls predicted yet if you are watching the Foundation TV series).
TL;DR -> With a little effort, publishing MonoGame projects to the Web is possible, so long as you remember it is the Web and you cannot do EVERYTHING!
When it comes to GameJams, like the one mentioned in this post, pushing out the finished project as an EXE or Appx usually results in your project either being downvoted or ignored, because who wants to risk infecting their machine with an unknown executable just to test out a Jam project? This applies to most game engines out there that are not web-based, including MonoGame, but thankfully, due to the hard work of Nikos Kastellanos (NKast), there is an option available to us.
Now you have to keep in mind that the Web is NOT your desktop or some high-powered beast. Sure, there are continual developments to make Web projects work better and support more features, but it is unlikely to match your modern desktop or console. Some things do not work because they are too complex, others because there is insufficient support across web browsers or devices. So long as you keep the limitations in mind (usually only discovered by trying to run your game in a browser), you can truly fly.
Personally, I recommend keeping your expectations moderate. First make your project work with the minimum, then just add more until it breaks :D. Start with keyboard-only input and go from there.
BIG thanks to NKast for the amazing work on the KNI project, which offers another extended way to build MonoGame projects with some additional platforms and features!
MonoGame is an awesome game development framework, made even more awesome by the growing community that surrounds it. A prime example of this is the KNI Engine, made and supported by one of MonoGame’s long-time supporters, Nikos Kastellanos (NKast), whose tireless devotion brings none other than Web support plus a host of other features for projects written using MonoGame.
Like MonoGame, KNI is released under the Microsoft Public License for the majority of the code, with a few proprietary exceptions which are detailed in its components. KNI is free and open-source; however, maintaining and expanding the framework requires ongoing effort and resources, and it relies on the support of the community to continue delivering top-notch updates, features, and maintenance.
KNI supports the same platforms that MonoGame does (because it is a fork of MonoGame), plus a few additions, which include:
Plus a few performance tweaks for KNI-based projects. Worth checking out!
Now, unlike base MonoGame these days, KNI does require a full Visual Studio 2022 installation, mainly to support the additional project templates that KNI provides; these are all installed by the KNI Engine installer (much like MonoGame did before it moved entirely to .NET 6):
Like MonoGame, most of the libraries behind the templates are published on NuGet, but the Project templates (like the Web project template) still require Visual Studio to create them. (Maybe with your support this can be updated in the future!).
At the time of writing, there is a known issue with the Visual Studio templates, which is mainly Visual Studio’s fault (honest), whereby the templates may not immediately show up and you may need to try either:
- Restarting Visual Studio a “few” times.
- Running the command devenv /updateConfiguration from the Visual Studio installation folder, e.g. C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE.

If your User Profile is not in the default C:\Users folder, you might also have to check/update your Visual Studio “locations” setup and move/copy the installed templates there. Just be sure to run the above devenv /updateConfiguration command AFTER moving them to update Visual Studio.
Maybe with your support, this situation can be improved; all it takes is a little support. But once it is working, you are ready to fly.
With the templates installed and everything ready, you should have access to the new KNI templates, and for this article more importantly, the KNI Web Browser Template as shown below:
Just create your new project using this template and you are already 90% of the way there:
Running the project results in a new Web Browser being launched against your local machine, ready to test and play with:
Fantastic, so what was too hard about that?
KNI Web uses Microsoft Blazor (which uses Razor files) as the backend, a system for building web apps from compiled C#. This is really useful: MonoGame is C# based and Blazor is a C#-based web system, so it just works. Some other Web solutions for MonoGame used a cross-compiler to turn C# into JavaScript, but Blazor is simply cleaner and more efficient.
Looking into the project, you should see a few subtle differences, which we should discuss, so that you don’t change anything you should not by mistake, namely:
File | Location | Description |
---|---|---|
Program.cs | Root | A custom version of Program.cs (like most platforms) designed for Web Builds, DO NOT TOUCH :D |
KNIBrowserGame.cs | Root | This is basically your normal Game1.cs definition, hack away freely. But if you change the Class name, also update Index.razor.cs |
Index.razor | Pages | This is the Razor (web) equivalent of Program.cs, it is the initialization page for the project. It defines a renderable canvas on which to draw the game. |
Index.razor.cs | Pages (under Index.razor) | The C# code-behind for the main Razor web page; this is the code initialization for the Web project |
KNIBrowserContent.mgcb | Content | The MGCB project for the Web solution, albeit, using KNI’s own MGCB editor due to “Visual Studio” |
wwwroot | Root | The deployable web folder for the project, unless you are a seasoned web dev, best not to touch this :D |
!Other files | Root | Just do not touch them, mainly Razor setup files and such |
Out of the box, you should not need to change anything; the only thing you will need to keep in sync is the NAME of your Game class and the entry point in Index.razor.cs, in the same way we do for other MonoGame projects between the Game class and the Program class.
Here is a GIF of the GameState Management sample (with a few alterations) running using a KNI Web Build.
Now, this is the web we are targeting, and there is a lot of history and patching involved in making modern web browsers work, so it should come as no surprise that not EVERYTHING is going to work out of the box:
Those are but a few; no doubt there are more, many more. But this being an Open Source project, if you have the skills and are willing to contribute to making it better, I encourage you to do so!
When you do hit an issue (and, more likely than not, I am afraid you will), you will see the following result when you run your project:
A pretty purple screen with a “Reload” option, which is not really that helpful on its own, and Visual Studio is no help here either, because you have left the confines of your debugger and entered “The Web Zone”!
Luckily, most browsers have an “F12” developer option, so by pressing “F12” you will get the not-so-friendly “Web Debugger” window, as shown below:
These outputs are more friendly to Web Developers (well, I assume so?), full of information to help you diagnose what is going on. In this case, the issue was simple:
The sample was using a GamePad for input and GamePads are not currently supported using the KNI Engine for the web.
If you are only doing a web project, just fix or remove the offending code; if you have a multi-platform project, encase the offending code in the following pre-compiler definition (#if):
#if !BLAZORGL
CurrentGamePadStates[i] = GamePad.GetState((PlayerIndex)i);
#endif
This will exclude the offending lines from the Web build (just make sure everything still compiles and works).
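If you have several input paths, it can be tidier to centralize the guard in one helper rather than scattering #if blocks through your code. A minimal sketch using the BLAZORGL symbol from above (the InputHelper name is hypothetical, not part of KNI):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input;

public static class InputHelper
{
    // Hypothetical helper: returns a disconnected GamePadState on web builds,
    // where KNI does not currently support GamePads.
    public static GamePadState SafeGamePadState(PlayerIndex index)
    {
#if BLAZORGL
        return new GamePadState(); // default state, IsConnected == false
#else
        return GamePad.GetState(index);
#endif
    }
}
```

Your gameplay code can then call InputHelper.SafeGamePadState everywhere and simply sees “no pad connected” on the web, rather than crashing.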
Feel free to experiment and play, and for those cunning individuals who are experienced in Web Development, I encourage you to help enlighten those who are not.
Having a web build on your machine that you can view locally is all well and good, but what about everyone else? You cannot exactly ask them all to come round for dinner to view your creation, so what about pushing it to the web?
To get your build out there, you have a couple of options:
Thanks to GitHub Actions (automation) and GitHub Pages (free static web hosting), we can manage our project’s source online and then create/update a build every time we push an update to source control.
To achieve this, assuming you have a GitHub repo setup and have pushed your source code to it, the tasks we need to complete are:
As easy as 1,2,3, honest.
Because we want our automation to publish content to GitHub Pages and write back to the repository, we need to allow it to do so (by default, it is turned off). To do this, navigate to “Settings -> Actions -> General -> Workflow Permissions” and set the option to “Read and write permissions” as shown below:
This allows the automation to publish on your behalf.
Enabling GitHub pages is very simple and GitHub continually works to improve the flow to make it easier and easier.
Next, you need to:
Done. There is more to do, but GitHub Pages is now set up. Now for the final and slightly trickier part: the automation workflow.
Following on from the previous step, you should now see the following default workflow setup, which is close but will not build our MonoGame project.
To get the result we want, we need to add the following:
Replace the default YAML with the following:
# Simple workflow for deploying static content to GitHub Pages
name: Deploy MonoGame web project to GitHub Pages

on:
  # Runs on pushes targeting the default branch
  push:
    branches: ["main"]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: "pages"
  cancel-in-progress: false

jobs:
  # Single deploy job since we are just deploying
  deploy-to-github-pages:
    environment:
      name: github-pages
      url: $
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup Pages
        uses: actions/configure-pages@v3
      - name: Setup .NET Core SDK
        uses: actions/setup-dotnet@v3
        with:
          dotnet-version: 6.0.x
      - name: Publish .NET Core Project
        run: dotnet publish Platforms/Web/KNIBrowser.csproj -c Release -o release --nologo
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v2
        with:
          # Upload wwwroot from publish action
          path: 'release/wwwroot'
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v2
The critical elements to watch out for are:
If in doubt, run the commands locally in your project folder to make sure you get the output you expect.
Click save and “Commit to main” (or create a pull request) to activate the workflow.
WARNING: this will run straight away, as the workflow is triggered on checking in code. But do not worry, you have 2000 minutes of FREE automation time on GitHub.
Provided everything is correct, you have put in the paths correctly, and you checked it DID actually build locally, you should get the following result:
Clicking on the completed Action, you will see the output of the Action, INCLUDING the URL your build was actually published to:
Check out my build here:
One issue I did hit, which actually prevented my build from completing, required me to edit my KNI Engine csproj file and remove a line. The line in question was:
<Import Project="$(MSBuildExtensionsPath)\MonoGame\v3.0\MonoGame.Content.Builder17.targets" />
Rather ironically, local builds work fine with this line in, but automated builds do not. Without it, everything is squeaky clean. (fingers crossed)
We have come to the end of this little web journey. I hope others are encouraged and excited to ship their games or hack projects to the web to demonstrate their skills, especially if there is a GameJam going on :D.
Here is hoping this is a little light in the darkness to get you going!
TL;DR -> Getting started with MonoGame is easy, mastering it takes time. So, throw all that out the window and get hacking instead!
The 5th annual MonoGameJam is kicking off soon, on 30th November 2023, and is by far one of the best ways to throw yourself into MonoGame and learn something new:
Not quite the vaunted heights of the DreamBuildPlay days of the past, which launched the careers of so many indie devs, such as Ska Studios with Dishwasher Samurai, Smudged Cat Games with The Adventures of Shuggy and even Humble Hearts with Dust: An Elysian Tail.
Bold games for a bold time, and who knows who/what could be next!
If you have been hiding under a very big rock, you might not have heard of MonoGame (unlikely, but possible). MonoGame is a game-building framework that focuses on handling all the complex requirements for shipping games on multiple platforms, such as Windows, Android, Xbox, PlayStation and so on, letting you get on with just writing code/art/shaders and the like (just, hah, :P)
In short, MonoGame is a C#-based game framework (not an engine like Unity), where you write code once and MonoGame compiles your code for multiple platforms, enabling you to spread your wings faster.
Getting started with MonoGame is very easy, no matter which platform you are building from, be it Windows, Mac or Linux, and the best place to start, unsurprisingly, is the MonoGame Getting Started guide:
The guides walk you through setting up your machine environment, installing the tools and the MonoGame Framework (most important) and then lead you down the basic path of writing your first game, including content.
For more details, check the MonoGameJam5 site for links, or browse the catalogue of community samples and courses available on the MonoGame website.
My personal favorite for any GameJam involving MonoGame is the GameState Management Sample, originally from Microsoft and updated for MonoGame.
The sample provides a simple game screen management system, ready to be used as a starting point for games on Windows, iOS, Android and more, complete with reusable code to manage all the screens (including transitions) you might need for a project, even including pause and options screens. It really packs a punch.
The sample is comprised of:
Folder | Description |
---|---|
Content Project | The shared content project for all platforms, not specifically required but a good reference. |
GameStateManagement | The GameState management library, reusable in any project. |
Platforms | Sample Platform initiator code, if you are just hacking a single platform you can ignore these. |
SampleCode | The all-important example usage of the GameState Management library, including multiple screens, gameplay and more. |
Here is what it looks like:
Enough to get you started on your Hack without worrying about all that on screen “menu” stuff.
If you are thinking bigger than a simple paint program or block pushing game, you might want to add things like Physics, Saving and Loading, AI, Effects and more. Thankfully the MonoGame community has your back with a wondrous collection of resources, libraries and tools available:
I might suggest glancing through the list below BEFORE the hack to pick your favorites; there are a LOT!
And much much more!
Like most GameJams, the theme is yet to be announced, but that is not to stop you from getting a blank slate ready and doing some quick reading. In this ever-expanding world of AI, there is nothing stopping you from asking your robot overlords for help (yes, they even help with MonoGame code) in sculpting your dream. Above all, have fun, learn well and put your best thumb forward!
TL;DR -> It is as simple as Initialize, LoadContent, Update, Draw, UnloadContent and finish, with a little sauce in between.
Every game, program or process runs on a specific loop; it might run through just once or, as is the case with games, loop until the game is over, finished, kaput, crashed or simply closed (but who really wants a game to ever end :D). Put simply, this is known as the Game Loop, and whether it is Unity, Unreal or MonoGame, each has its own specific order of things.
This article will walk through MonoGame’s Execution event order and set out what you can and cannot override to make it your own.
Although personally, in every MonoGame project I’ve been involved with, I have only ever extended the Game Loop and never been brave enough to actually OVERRIDE it :D.
BIG thanks to Aristurtle for the original image, credits here! Check out his work as it is amazing in the MonoGame space!
The Full MonoGame execution order is a bit much to take in, but I put it first so you can see the full range of what is going on behind the scenes while your Game is Running.
In short, the full execution is like this:
Event | Description | Notes |
---|---|---|
Game Start | In Program.Main (or the relevant entry point for your platform) the Game is created and then uses the Run() method of the Game to start it. | |
Game Run | Initialize is called to invoke and setup all your code ready to accept any content to load. | |
LoadContent | At this point, any/all content starts to be loaded from disk into memory, ready to be rendered. | Care must be taken to load the correct assets at the right time, menu assets first, then game/level assets as the player progresses. |
Platform.Run | This is where the Game Loop is executed for a specific platform, each has its own specific requirements for how to achieve this, but ultimately all result in a loop that “Ticks” along according to the cycle of the device | Base ticks are inconsistent and can change based on how much work the device is doing, where timing is critical, FixedStep timing is used instead. |
Core Game Loop | Update and Draw are called consecutively for each “Tick” of the game. Until the game is requested to Stop. | Game Components also receive the same events as the main game |
Game End Run | All internal game processes are stopped and the Game Loop is terminated. | No game changes are permitted at this point and the process cannot be halted. |
Game Exit | Content is unloaded from memory and all classes are disposed (where supported), cleaning up memory and releasing it to finally close the game | Content can be Loaded or Unloaded at any point, these final calls unload any and all remaining content from the system. |
Game End | Game Over dude, Game over. We should just take off and nuke the site from orbit! |
There is a lot more going on behind the scenes to make the Game Loop happen on so many different platforms, from Graphics to Input, Audio and more, but the above is a simplification to show the full workings.
The following sections will break this down further for reference.
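The “Game Start” row above corresponds to the tiny entry point every MonoGame template ships with. A sketch of the typical desktop Program.cs (Game1 is the template’s default game class name):

```csharp
// Typical MonoGame desktop entry point: create the Game and hand control to Run(),
// which drives Initialize, LoadContent and then the Update/Draw loop until exit.
using var game = new Game1();
game.Run();
```

Everything in the tables above happens inside that single Run() call, which blocks until the game exits.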
Breaking down the core loop into a more digestible reference, the actual Game Loop can simply be described as:
Event | Description |
---|---|
Game Run | Prep the launch pad and ready the rocket for launch. |
Initialize | Load configuration, previous saves, Game state and ready variables for any content they require. A lot of projects use XML or JSON to allow configuration of the project outside of the code, so this is a good time to wind things up. |
LoadContent | Whether you are loading synchronously or asynchronously, when this event is received it is time to pull any content required from the disk and into memory. This does not pass it to the graphics card, only readies it for use. If you have different screens that use different content, you might load them separately (as shown in the Game State Management Sample). |
Update | Update your code and move those things, ready for the next frame on the screen. Runs repeatedly until the game ends. |
Draw | Kind of speaks for itself really, get assets from memory and push them to the screen in order. Maybe some shader stuff too. |
UnloadContent | Can be called at any time, but at this point it is before the game closes, to free up memory used by the game from content. |
Exit | Clear the caches, wipe down the counters and ready for the next guest. |
LoadContent and UnloadContent can be called at any time, and usually are when you are switching from screen to screen or level to level, trying to keep ONLY those assets you actually need in memory at any one time. If an asset is not being used, you should consider clearing it out to save those precious memory bytes and cycles for the main event.
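One common way to keep only the assets you need in memory is to give each screen its own ContentManager, so the whole set can be dropped in a single call. A minimal sketch of the pattern (the GameplayScreen name and asset path are illustrative, loosely following the Game State Management sample):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

public class GameplayScreen
{
    ContentManager content; // owns only this screen's assets
    Texture2D background;

    public void LoadContent(Game game)
    {
        // A dedicated ContentManager sharing the game's services but
        // tracking its own loaded assets.
        content ??= new ContentManager(game.Services, "Content");
        background = content.Load<Texture2D>("gameplay/background");
    }

    public void UnloadContent()
    {
        // Disposes everything this manager loaded, freeing the memory in one go.
        content.Unload();
    }
}
```

When the player leaves the screen, one Unload() call releases everything it loaded, rather than you tracking each asset by hand.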
Update and Draw are called constantly while your game is running, as fast as your machine can run them. By default, MonoGame uses a fixed time step based on 60 FPS (Frames Per Second), doing its best to keep the Game Loop running at a constant speed regardless of device. However, this is NOT GUARANTEED: if your game takes longer than expected in its Update cycle, the Draw cycle will be late, and the reverse is also true.
To change the speed, you will need to alter MonoGame’s timing setup, either by setting the TargetElapsedTime to your expected frame rate, e.g.
// Target 30 FPS
this.TargetElapsedTime = TimeSpan.FromSeconds(1d / 30d);
Or you can disable the fixed time step entirely by setting IsFixedTimeStep = false and managing all the timings yourself, although this too comes with its own specific considerations. There is no spoon.
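Putting the two options side by side, these timing tweaks usually live in your Game constructor. A minimal sketch (pick one approach, not both; Game1 is the template’s default class name):

```csharp
using System;
using Microsoft.Xna.Framework;

public class Game1 : Game
{
    public Game1()
    {
        // Option 1: stay fixed-step but target 30 FPS instead of the default 60.
        IsFixedTimeStep = true;
        TargetElapsedTime = TimeSpan.FromSeconds(1d / 30d);

        // Option 2 (instead of the above): run unlocked and scale all movement
        // by gameTime.ElapsedGameTime yourself in Update.
        // IsFixedTimeStep = false;
    }
}
```

With option 2, any movement not scaled by elapsed time will run at wildly different speeds on different machines, which is why the fixed-step default is the safer starting point.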
My recommendation is to just use the defaults until you run into an issue (or know specifically what you are doing), as the defaults will handle most situations all by themselves, in my honest opinion (for what it is worth).
In some cases (most, when you are dealing with MonoGame projects), you need to override the base implementation of MonoGame events to build your project. Some, like Initialize, LoadContent, Update and Draw, are well known, while others may not be. Here is a full list of the events you can adapt to your needs in your game, all of which are described above:
Event | Description |
---|---|
Initialize | Once the game has loaded, initialize the game. |
LoadContent | Init point to load content for the Game, Scene, Screen or Drawable Component. |
UnloadContent | Exit point to clean up content for the Game, Scene, Screen or Drawable Component. |
Update | Called once per frame to perform updates. |
BeginDraw | Called once per draw call before any drawing has taken place. |
Draw | Called for any game or Drawable components. |
EndDraw | Called once all drawing has taken place. |
OnActivated | Fired when the game receives focus, dependent on the Platform’s determination of focus. |
OnDeactivated | Fired when the game loses focus, dependent on the Platform’s determination of focus. |
OnExiting | Fired as the last call before the game terminates. |
Dispose | .NET call to clean up a class as it is destroyed, including the Game class. |
BeginRun | First point of call (prior to Initialize) on starting the game. |
EndRun | Dying breath for MonoGame before the dotnet process is terminated. |
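As a sketch, overriding the most common of these in your Game class looks like the following (exact signatures may vary slightly between MonoGame versions):

```csharp
public class EventDemoGame : Game
{
    protected override void Initialize()
    {
        // Once the game has loaded; called before LoadContent.
        base.Initialize();
    }

    protected override void LoadContent()
    {
        // Load textures, fonts and sounds via Content.Load<T>(...).
    }

    protected override void Update(GameTime gameTime)
    {
        // Input, logic and physics, once per frame.
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        // Render the current frame.
        base.Draw(gameTime);
    }

    protected override void OnExiting(object sender, EventArgs args)
    {
        // Last call before the game terminates; save state here.
        base.OnExiting(sender, args);
    }
}
```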
The sample project linked to this article is available to demonstrate these calls in action.
An underrated feature of the original XNA, and now the MonoGame Framework, is components. Essentially, these are small implementations of Game code that you simply hand over to the Framework to handle and run, and they just run (so long as they are registered). Game Components come in two flavours, one building on top of the other:
Let us delve into these a little more, into how their events are handled.
Games require a lot of features and components in order to run, some are responsible for things you see, and others are not, such as Audio Managers, Networking and more.
This is where Game Components shine, allowing you to simply register a “Component” with the framework and then MonoGame takes care of it running in the background, all you need is a reference to it in case you need to disable or unregister it, or if the will takes you, to make changes to it.
As shown in the MarbleMaze sample (currently still just XNA but easily upgradable), the AudioManager demonstrates a simple-to-use system, where a single component controls all audio for the game. No other game code updates or initializes it as it runs in the background of the game as a Component.
Very useful for such scenarios.
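A minimal sketch of such a background component (the class name here is illustrative, not the MarbleMaze implementation):

```csharp
public class AudioManagerComponent : GameComponent
{
    public AudioManagerComponent(Game game) : base(game) { }

    public override void Update(GameTime gameTime)
    {
        // Tick fades, queued cues, looping music, etc.
        base.Update(gameTime);
    }
}

// Registered once in your Game's constructor or Initialize:
// Components.Add(new AudioManagerComponent(this));
// MonoGame then calls its Update every frame automatically.
```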
Drawable Game Components extend the base Game Component architecture by also subscribing automatically to the Content and Draw calls from the main Game library. The biggest benefit here is that they are totally automated and draw in the order they were added to the Game's Components list.
A great example of these is the Particles2DPipeline sample, which builds on top of the excellent 2D Particles sample and demonstrates some complex particle systems that are dynamic thanks to their use of Components, as shown in the Particles 2D Game class where the effects are loaded based on a set configuration, which can be changed.
Ultimately, these are then activated and disabled to change how the effects are demonstrated with minimal changes to the actual game code.
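A sketch of a Drawable Game Component (names illustrative); note that LoadContent and Draw are called by the framework without any extra wiring:

```csharp
public class ParticleSystemComponent : DrawableGameComponent
{
    public ParticleSystemComponent(Game game) : base(game) { }

    protected override void LoadContent()
    {
        // Load particle textures here; invoked automatically.
        base.LoadContent();
    }

    public override void Draw(GameTime gameTime)
    {
        // Drawn in the order the component was added to Components,
        // or by its DrawOrder property if you set one.
        base.Draw(gameTime);
    }
}

// Toggle it at runtime without touching game code:
// particles.Enabled = false;  // stop updating
// particles.Visible = false;  // stop drawing
```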
The XNAGameStudio Archive contains all the previous XNA examples and more. Most are still in their XNA format but are easily upgraded to MonoGame by copying out their Game code and assets; a few have already been upgraded to MonoGame, here.
Although at the time of writing, only the GSM Sample has been updated to MG 3.8.1, it is a work in progress and a lot of effort to maintain.
But in all cases, you can walk the code and see the various patterns that demonstrate the MonoGame events in action.
The sample included with this article demonstrates the events described and shows how often events are fired. It also outputs the events to a CSV file in the Game’s run folder (by default /bin/Debug/net6.0).
A very theoretical article to give you more of an understanding of what goes on under the hood of any MonoGame project, plus some hints if you specifically need to know when a piece of code will be executed and run.
Hopefully, this will aid you on your MonoGame journey.
]]>TL;DR -> Today, cross-platform projects are easy to setup (except UWP), with .NET7, it gets easier. Read the article for the How-to!
Most game projects, when built, target a single platform, usually the same as the development machine they are built upon. Only later does the idea surface to maybe ship to another console, handheld or platform, which then highlights problems in the game's implementation.
You CAN just build and ship to one platform; there is no issue with that. But if you are possibly considering more than one, plan ahead!
If you create a MonoGame project for a single platform, you get a single project with a single content project, life is simple and your concerns are light.
Once you consider adding another platform, there are a few considerations to take into account, namely:
Granted the last is a bit of a fringe case but very dramatic when it happens, and for reference, this never happens with dogs as they just want to cuddle your feet or sit in your lap. “Just say’in”.
It can seem a lot to take in, but let us walk through the major points step by step and then walk through generating our project (Click here to skip ahead if you like)
No one likes to write the same code twice, let alone keep rewriting it or copying it across multiple projects. So when you are planning a multi-platform game, you need to identify any and all code that will be the same across all the projects (usually about 90% of your code) and then find a pattern that works for you to ensure you write it once, no matter the platform it runs on.
When it comes to sharing code, there are three patterns to consider, each with their pros and cons:
Each of the options is easy to implement and run with, although Linked Files can become harder to manage as your project grows, essentially because when you add a new class file, you have to remember to manually add it to all projects (or write a script to do so).
My recommendation is to use a .NET class library unless you need UWP (Windows 10/11/Xbox XAML), in which case use a Shared Project until the .NET 8 upgrade.
The recommended approach is to create a .NET class library and add a reference to that project to all your MonoGame projects, which is a quick and easy task. MonoGame even provides you with a project template specifically for MonoGame shared code.
*Note: if you look at the references in the MonoGame Class Library project, you will see the “DesktopGL” MonoGame reference. You can ignore this as it is just a shim; it is ignored when the project is built and the proper MonoGame DLL from your Game project is used instead. This is some magical wizardry performed by .NET during the build, much like the older MonoGame.Portable projects used to do.
To add a class library to an existing MonoGame project (assuming you have not created a Solution file already) is as follows:
Assuming you have a folder containing your MonoGame Project, e.g. MyGame -> MyGame.DesktopGL, while in the MyGame folder.
dotnet new sln -n MyGame
dotnet new mglib -o MyGame.SharedCode
dotnet sln MyGame.sln add MyGame.DesktopGL\MyGame.DesktopGL.csproj
dotnet sln MyGame.sln add MyGame.SharedCode\MyGame.SharedCode.csproj
dotnet add MyGame.DesktopGL\MyGame.DesktopGL.csproj reference MyGame.SharedCode\MyGame.SharedCode.csproj
Which creates a new solution file, adds your existing MonoGame project to it, generates a new MonoGame class library and adds that to both the solution and as a reference to your existing MonoGame project.
Alternatively, in Visual studio, simply:
Repeat the last step for any additional MonoGame platform projects (just adding the reference) and all your projects now share the same codebase. Any code you add to the Class Library project is instantly accessible to all platforms and will be compiled with all platforms (and if any issues show up, it will instantly tell you).
P.S. Did you know you are NOT limited to just one class library? Like content projects, you can have as many as you like if you prefer to break up your code.
First, make sure the folder containing the linked code is located relative to the MonoGame projects you are sharing it with (usually all at the same level, e.g. MG Project 1, MG Project 2, Shared Code). This helps maintain the consistency of the code and links for your project.
Due to the lack of support for automatically adding linked files using the DotNet CLI tool, it is recommended to use Visual Studio (any edition, including Community) to create and manage the links. To do this in Visual Studio, right-click in the Solution Explorer to add an existing file (Add existing item), then instead of just clicking the “Add” button (which will copy the file), click the drop-down on the button and select “Add as Link” as shown below:
You will need to repeat this process for every class file you add to the shared folder and make sure to add it to each project. Usually, if you choose this path, I recommend building a batch script to do it for you. You will also have to perform this task when removing files.
To link files without Visual Studio, we need a PowerShell script to edit the csproj project definition and add the required XML for the linked file, the script I use is as follows:
# usage: .\Add-LinkedFile.ps1 -SourceProjectPath MyProject.csproj -LinkedFilePath ../Shared/mysharedclass.cs -LinkedFileName Shared/mysharedclass.cs -BuildAction Compile
# output: <ItemGroup><Compile Include="../Shared/mysharedclass.cs"><Link>Shared/mysharedclass.cs</Link></Compile></ItemGroup>
param(
[Parameter(Mandatory=$true)][string] $SourceProjectPath,
[Parameter(Mandatory=$true)][string] $LinkedFilePath,
[Parameter(Mandatory=$true)][string] $LinkedFileName,
[Parameter(Mandatory=$true)][string] $BuildAction
)
# Load the project file as XML
$sourceProject = [xml](Get-Content $SourceProjectPath)
# Resolve the linked file path relative to the current folder
$linkedFileRelativePath = (Resolve-Path -Path $LinkedFilePath -Relative)
# Create a new ItemGroup to hold the linked file entry
$itemGroup = $sourceProject.CreateElement('ItemGroup')
$sourceProject.Project.AppendChild($itemGroup) | Out-Null
# Add the file with the requested build action (e.g. Compile)
$linkedFile = $sourceProject.CreateElement($BuildAction)
$linkedFile.SetAttribute('Include', $linkedFileRelativePath)
$itemGroup.AppendChild($linkedFile) | Out-Null
# The <Link> element controls how the file appears in Solution Explorer
$link = $sourceProject.CreateElement('Link')
$link.InnerText = $LinkedFileName
$linkedFile.AppendChild($link) | Out-Null
# Save the updated project file in place
$sourceProject.Save((Resolve-Path "$SourceProjectPath").Path)
You then simply run the script from within the folder of the project you want to update, to make the platform project reference the linked file. For example:
From the folder “MyAwesomeGame/Platforms/MyAwesomeGame.Android” (assuming you save the above script in the MyAwesomeGame folder), and your shared code is in a folder called “Shared”, also within the MyAwesomeGame folder:
../../Add-LinkedFile.ps1 -SourceProjectPath MyAwesomeGame.Android.csproj -LinkedFilePath ../../Shared/mysharedclass.cs -LinkedFileName Shared/mysharedclass.cs -BuildAction Compile
Which, if you then edit the “MyAwesomeGame.Android.csproj” file, you will see a new addition to the project, as follows:
<ItemGroup>
<Compile Include="../../Shared/mysharedclass.cs">
<Link>Shared/mysharedclass.cs</Link>
</Compile>
</ItemGroup>
Alternatively, you can simply add the XML yourself, so long as you conform to the XML standards for the csproj specification, for example, this is also acceptable:
<ItemGroup>
<Compile Include="../../Shared/mysharedclass.cs" Link="Shared/mysharedclass.cs" />
</ItemGroup>
In the past, when I have linked files this way, I have built up the majority of the code within a single platform and then moved that folder out and linked the files, which saves repeatedly doing this at the beginning.
The shared library approach is much like the Class Library approach, except it uses the MonoGame Shared Project template instead. However, you cannot add a Shared Library to your project as a reference through the DotNet command-line (as it is a Xamarin solution); it is only supported through Visual Studio.
The only advantage today of using a Shared Library instead of a class library is that it also supports UWP (Windows 10/11) projects. This limitation should be removed with the upgrade to .NET 8, but at the time of writing, this support is missing from Class Libraries.
If you are not aware of Interfaces in the C# language, they define a contract for what a class “should” do for your project, letting you decide later which class implementing that interface is actually used. It is a handy way of swapping out which code does what, so long as each version implements the same interface (the same public properties and methods).
In short, if I define an interface as follows:
public interface IAchievementService
{
bool IsInitialized { get; }
void Initialize();
void UnlockAchievement(string achievementName);
}
And then define two implementations of the interface, one for Steam and one for Xbox:
// Steam
public class SteamAchievementService : IAchievementService
{
private bool isInitialized = false;
public bool IsInitialized => isInitialized;
public void Initialize()
{
isInitialized = true;
}
public void UnlockAchievement(string achievementName)
{
// Do Steam Unlock Achievement Stuff
}
}
//Xbox
public class XboxAchievementService : IAchievementService
{
private bool isInitialized = false;
public bool IsInitialized => isInitialized;
public void Initialize()
{
isInitialized = true;
}
public void UnlockAchievement(string achievementName)
{
// Do Xbox Unlock Achievement Stuff
}
}
Finally, in your shared game project, all your code simply uses a variable for the “AchievementService”, knowing it has a property for “IsInitialized” and two methods, one to initialize the achievement service and one to grant an achievement:
public IAchievementService TheAchievementService { get; set; }
Then, in each Platform project, they simply declare which service to use, and if you want to, you can even swap them out AT RUNTIME; the shared code is none the wiser and, in fact, DOES NOT CARE which service it is using for achievements, it just uses what it is told.
Pretty neat, eh!
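Putting it together, a sketch of the platform-side wiring (the game class and property names here are illustrative):

```csharp
// In the Steam/desktop platform project's entry point:
var game = new MySharedGame();
game.TheAchievementService = new SteamAchievementService();

// In the Xbox platform project it would instead be:
// game.TheAchievementService = new XboxAchievementService();

// The shared code only ever talks to the interface:
game.TheAchievementService.Initialize();
game.TheAchievementService.UnlockAchievement("FirstSteps");
```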
Interfaces allow you to put MORE into your shared code library, letting you keep platform-specific implementations in a single platform project, or have multiple variations of an implementation available to swap out, e.g. a heavy attack or a light attack. There are no limitations (other than all concrete implementations MUST implement all the definitions in the interface).
Having shared code makes it far easier to define a specific platform's own implementation, as it will only be contained within the platform project (as if you were building a single-platform game). This includes the references and dependencies for that platform, which do not interfere with other platforms. It also means you do not need pre-compiler definitions scattered through your code to #if this or that (which can become a real nightmare).
The short of the long in this case, is this:
It is possible to have a shared content project that is used by all projects. This is not to say the compiled output is the same; rather, at compile time, MonoGame builds the content separately for each platform.
Originally, in XNA and earlier versions of MonoGame, this was possible through a separate Content Project, but with the move to .NET 6 it is a little different, as we need to create it manually.
The recommendation from the MonoGame team is to use separate Content Projects for each platform, to ensure compatibility and that logos and icons specific to a platform DO NOT get mixed up. But personally, I believe there is a halfway house, where “some” content can be shared and platform specific content can still live only in the specific projects.
To create a “content project” in the new DotNet land, you can either:
Most developers do not realise that you are NOT limited to a single content project; you can in fact have as many as you like, or alternatively maintain separate project definitions (csproj) which are identical except that each uses a different content project.
Why would you do this? Because if your project ships on multiple devices that support different resolutions, like Xbox and Mobile, the source content (albeit similar) will have different requirements, e.g. high-quality 4K textures on Xbox and 1K or low-res textures on mobile (put a 4K texture on some mobiles and they will just die).
There is no hard and fast rule about which approach to take, and in fact, some developers ship BOTH types of content in a single project if it is small enough, but remember, the size of the content will affect the size of the final output, and it makes sense to keep downloads as small as possible for any platform.
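For example, a platform csproj can reference more than one .mgcb file via the MonoGameContentReference item that the MonoGame templates use (the paths here are illustrative):

```xml
<ItemGroup>
  <!-- Content shared by every platform -->
  <MonoGameContentReference Include="..\..\Shared\Content\SharedContent.mgcb" />
  <!-- Platform-specific content: icons, logos, platform-sized textures -->
  <MonoGameContentReference Include="Content\Content.mgcb" />
</ItemGroup>
```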
The choice is yours:
Following on from the above, here is the set of commands that will generate a cross-platform project with the following setup:
Script as follows:
dotnet new sln -n MyAwesomeGame
md Shared
md Platforms
dotnet new mglib -o Shared\MyAwesomeGame.Shared
dotnet new mgandroid -o Platforms\MyAwesomeGame.Android
dotnet new mgdesktopgl -o Platforms\MyAwesomeGame.DesktopGL
dotnet new mgios -o Platforms\MyAwesomeGame.iOS
dotnet new mgwindowsdx -o Platforms\MyAwesomeGame.WindowsDX
dotnet sln MyAwesomeGame.sln add Shared\MyAwesomeGame.Shared\MyAwesomeGame.Shared.csproj
dotnet sln MyAwesomeGame.sln add Platforms\MyAwesomeGame.Android --solution-folder Platforms
dotnet sln MyAwesomeGame.sln add Platforms\MyAwesomeGame.DesktopGL --solution-folder Platforms
dotnet sln MyAwesomeGame.sln add Platforms\MyAwesomeGame.iOS --solution-folder Platforms
dotnet sln MyAwesomeGame.sln add Platforms\MyAwesomeGame.WindowsDX --solution-folder Platforms
dotnet add Platforms\MyAwesomeGame.Android\MyAwesomeGame.Android.csproj reference Shared\MyAwesomeGame.Shared
dotnet add Platforms\MyAwesomeGame.DesktopGL\MyAwesomeGame.DesktopGL.csproj reference Shared\MyAwesomeGame.Shared
dotnet add Platforms\MyAwesomeGame.iOS\MyAwesomeGame.iOS.csproj reference Shared\MyAwesomeGame.Shared
dotnet add Platforms\MyAwesomeGame.WindowsDX\MyAwesomeGame.WindowsDX.csproj reference Shared\MyAwesomeGame.Shared
Once complete, you should have a solution which looks like this:
The only final changes I would make would be to either:
Depends if you also want a common Game class or not to begin your game, it is up to you and there is no “wrong” answer.
Additionally, you can delete the “Content” folder from the class library as it is extremely unlikely you will use it.
Whether you build your project for a single platform, or maximise your reach by delivering to more platforms, is a very important decision. It does not come without a small amount of additional risk, but the rewards mean MORE users playing your game on more platforms.
In most cases, everything will “just work”; that is the beauty of MonoGame. On occasion, you will hit an issue with a dependency or service that is particular to a specific platform, which you then need to address in isolation, ideally without affecting other platforms. It is a fine balancing act.
But with shared code, your life is made that much simpler as you get to update all platforms at once with central changes, taking feedback from multiple players with different needs to wholly make your game better (or drive you to insanity with their endless demands).
Whichever path you take, I wish you well on delivering on your dreams.
]]>TL;DR -> MonoGame rocks and continues to be one of the best open-source frameworks for building games including with lightweight editors like VSCode, on ANY platform.
Things are busy and heating up in the Open-Source development space and the MonoGame Team announced their plans to step up to the recent demand.
And as was pointed out to me on the new MonoGame Content Request board that I helped to set up, a fair few developers are asking how to get started. The videos I did a short while back are still good but need modernizing for the latest release and beyond.
A Guide for Setting up VSCode for MonoGame on Windows/Linux/Mac
Kicking things off, right at the beginning, here is a guide to getting started with VSCode for MonoGame. And thanks to VSCode, FINALLY, the instructions are the same for ALL PLATFORMS.
To keep things clean, all the instructions here are run from a clean machine, however, if you already have some parts installed, they will be automatically updated, thankfully.
Simply visit code.visualstudio.com and download Visual Studio Code. The drop-down button on the left should auto-detect your operating system, or you can click the down-arrow to show the list of platforms to choose from.
One Editor for multiple platforms, including the Web (as in VSCode for the Web on GitHub), however, as far as I know, the web version does not support .NET building at this time.
Now, MonoGame, as of 3.8.1.303, is a .NET framework. This vastly simplifies its installation and use because all aspects are now unified, and everything you need to build your games is built into the .NET SDK.
If you open up VSCode after its installation, you should be presented with the above screen, welcoming you to a new world of light, development and if the mood takes you, fun.
But as it stands right now, you only have a fancy text editor (granted, a very powerful text editor). Where VSCode really comes into its own is with “Extensions”, of which there are some for just about any programming language going (within reason; there is no assembly editor, yet). Extensions add things like:
For MonoGame we need two things, the C# Dev Kit (which includes the C# language and some other tools) and the .NET SDK.
Starting off, click the {} in the left-hand toolbar and then search for “dev kit” which should then result in a whole list of extensions and right at the top (hopefully) you should find the “C# Dev Kit” published by Microsoft, as shown below:
Click on the blue “Install” icon (as indicated in the image) and off it goes.
Some extensions require you to “reload” VSCode after installing/uninstalling, if it does, the Install button will finish with a “Reload” button, click it and VSCode will restart right back where you were as if nothing changed.
The .NET SDK (if you are not familiar with it) is simply the latest generation of the .NET Framework SDK. For MonoGame we need at least the .NET 6 SDK (until MonoGame upgrades to .NET 8). Thankfully, the .NET SDK is backwards compatible with previous versions, so installing the current version, at the time of writing the .NET 7 SDK, will still enable you to develop with MonoGame.
With the “C# Dev Kit” installed, we get a bunch of new commands to use in Visual Studio Code, which can be accessed by pressing:
Platform | Key Binding |
---|---|
Windows | F1 |
Windows and Linux | Control + Shift + P |
Mac | Command + Shift + P |
This will open up a bar at the top of the screen with a bunch of commands. It also includes a handy search feature (because there is a command for almost anything), so if you type “.NET”, you should see the following:
If you do not already have a .NET SDK installed, you will be greeted with a nice prompt / warning Window, as shown below:
This is simply informing you that you have something else to install before you begin. It is not a VSCode extension; like the .NET Framework installers before it, it is something you install into your system, not into VSCode. So, click on the “Get the SDK” link (which will open a browser) and you can then download and run the latest .NET SDK installer from there.
Click Install and follow the instructions (if any) to complete the setup to proceed.
Ok, so you have all the prerequisites installed, what now? Well, we start using MonoGame!
Let us begin by starting up VSCode again. We get the familiar screen, but not much has changed; we are simply prepared and ready to actually begin using MonoGame. We have the tools, so let us begin!
Welcome to the Terminal screen; you will be spending quite a bit of time here. I have often heard that some developers miss a GUI at this point, and it is a fair quip. However, the Terminal is great because it is the same process on ANY platform, and you may find it comforting to use the same tools, the same commands and likely the same coffee!
Start up the Terminal Window using the above menu and you will be presented with a Terminal/command-line window ready to process your commands, as shown below:
From here, we are simply following the steps laid out in my previous article and in the MonoGame “Getting Started” guides, first type the following to install the MonoGame DotNet project templates:
dotnet new --install MonoGame.Templates.CSharp
And you should see something like the results below:
I did note in testing that the .NET 7 SDK no longer needs the -- (double dashes) before the “install” argument, but in good and true backwards-compatible fashion, it still works. I leave it in all instructions as it is still needed if you are using the .NET 6 SDK.
With MonoGame installed, we can get on with starting our new project!
From now on, when you need a new MonoGame project, this is where your journey begins, your machine is setup (unless you paved it recently or are borrowing someone else’s) and everything is ready. The steps for creating any new DotNet project are always the same:
These days I personally find it easier to stay within VSCode and do it all from there, but the choice is yours.
To perform this in VSCode, using the “Terminal” window you used to install MonoGame, check the directory you are in (shown to the left of the cursor, which always tells you where you are, handy eh?), then navigate to where you want your game created and make a new folder, e.g.:
cd C:\Development
mkdir MyGame
cd MyGame
or you could swap out to your explorer/finder and do it, the result is the same.
Now, using the Terminal window IN the folder (if you used a GUI, you still need to navigate there in the Terminal, hence why I say it is easier to just use the Terminal), you then simply use the following command:
dotnet new mgdesktopgl -o MyGame
Which comprises:
Tool | Description |
---|---|
dotnet | the DotNet command tool |
new | I want to make a new project, please sir |
mgdesktopgl | The MonoGame project template to use, from the list of templates installed, see the previous image above, shown in the red box |
-o MyGame | Create a project called MyGame in a folder called MyGame |
There, you now have a new MonoGame Project built and ready to use, including the MGCB content tool, it is no longer a separate install, it is built into the project itself. (I can imagine how many hours that will save me managing it separately).
All that is left is to “Open the Folder” where you created the project (the folder WITH the .csproj file in it, not its parent folder) in VSCode and off you go, editing in VSCode on the fly!
Like Visual Studio, all the building of a MonoGame project, no matter which platform it is, is performed by the MS Build tools, but again, thanks to the latest .NET frameworks, this is all managed by a simple command:
dotnet build
This compiles your code and checks it can build; it also restores any tools and downloads any dependencies for the project since it was last run, all in one. Assuming it is successful, you will get a folder structure like this:
MyGame\bin\Debug\net6.0
Comprising:
Path Segment | Description |
---|---|
MyGame | your game code folder |
bin | the Binary output folder |
debug | the build mode your project was compiled under (usually debug or release depending on how you built it) |
net6.0 | the framework the project was compiled with (MonoGame is currently using .NET 6) |
In the final folder, you will find an executable of the same name as your project (MyGame.exe in my case on Windows; it will be different for other platforms) which you can run.
To create a FINAL build (because you do not want to ship debug code to customers, do you?) you simply follow the steps in the MonoGame documentation and produce a “published” build, as follows:
dotnet publish -c Release -r win-x64 /p:PublishReadyToRun=false /p:TieredCompilation=false --self-contained
Which basically commands MSBuild to generate a Release build, ready to publish and makes the output the best it can be.
If you want to read more about the “dotnet publish” command or other dotnet commands, check the official Microsoft documentation.
Done, dusted, all there is to say, is it not?
Now, if you have read any of the documentation regarding MonoGame, you will know there is a tool for managing and building content separately from your project, to help with compiling your content (as well as a whole bunch of other platform optimization and wonder). You do not have to use it if you do not want to, but I recommend it because I have always liked it.
Previously, you had to install the tool separately and find a way to launch it beyond the command-line, which “mostly” worked for Visual Studio. But thanks to VSCode Extensions, there are already two extensions adding a “right-click” launch ability for the MGCB tool directly in VSCode (sadly only one works, but here is hoping the dev fixes the other).
To install the extension, simply go to the Extensions view (as before) and search for MonoGame and you should see a list like the one shown below:
From my testing, only the second item (the one with the STARS) actually works, so feel free to install it if you wish. Once complete you will be able to open the MGCB tool by right clicking the “.mgcb” file located in the “Content” folder.
Alternatively, you can always run the following in the Terminal window for your project:
mgcb-editor Content/Content.mgcb
VSCode also has the ability to debug .NET projects in much the same way as Visual Studio, just done slightly differently. However, it is not on by default; you have to enable it.
To setup debugging, open the Command Palette using
Platform | Key Binding |
---|---|
Windows | F1 |
Windows and Linux | Control + Shift + P |
Mac | Command + Shift + P |
Once open, type the search criteria “Generate” and locate the .NET Generate Assets for Build and Debug command and click it (or hit return/enter) as shown in the image below:
This will generate the necessary “Launch Configurations” in VSCode and store them in a new “.vscode” folder in your project (do not ignore this folder if you are customizing debug/launch options in VSCode)
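For reference, a generated launch configuration typically looks something like this (paths and names will differ for your project; this is a sketch, not the exact generated file):

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "C# MyGame",
            "type": "coreclr",
            "request": "launch",
            "preLaunchTask": "build",
            "program": "${workspaceFolder}/bin/Debug/net6.0/MyGame.dll",
            "cwd": "${workspaceFolder}"
        }
    ]
}
```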
With that setup, you can begin to run your project in Debug mode as follows:
Select the launch configuration “C# <your game name>” using the drop-down button (next to the cog icon), then start debugging. Now your game will run as normal with VSCode attached, and you can Debug/BreakPoint/Catch Exceptions like the best of them.
Check the official VSCode documentation for more information for debugging with VSCode.
We have come down a long road, one that is far shorter to actually do than it was to write or even read this post. Hopefully it will help the newcomers and prepare everyone for what is to come!
Now I’m going back to the drawing board to work on more stuff, laters!!
]]>P.S.
If you have an idea or a question that you believe would require a tutorial, visit the MonoGame tutorial series suggestions project board, and if you do not see your idea listed, then Create your own request here. All the creator community is watching and looking for your requests.
TL;DR -> MonoGame rocks and continues to be one of the best open-source frameworks for building games.
Making games is hard enough (or the most fun you will ever have) without worrying that the Framework or Engine you use will be cancelled or will change its terms of use in the future, casting uncertainty over the years of effort you put into making your dream a reality that everyone can play.
Whether it is an open-source project or a paid development solution, it is hard to choose the right option that will ensure your investments are secure.
Thankfully there are quite a few engines and frameworks available today that do fit that bill:
The Framework of discussion today is MonoGame, which despite all talk to the contrary is very much alive and well.
MonoGame is a game development framework born from an implementation pioneered by Microsoft around 2004 called XNA (no acronym). It brought the dream of building games for Windows and Xbox (and later phones) using C#; it was a breath of fresh air and made programming games a breeze. Sadly, XNA was discontinued by Microsoft, but it was reborn as MonoGame.
XNA had made such an impact that the community was just not willing to let it go, and today it flourishes.
Now, MonoGame is a framework (not an engine) and its power lies in the original XNA implementation, which abstracts (hides) the underlying complexity of the code, graphics, sound and the myriad of other technologies required to run a game on a multitude of platforms. This means that you code your game once and then you can run it on any of the platforms that MonoGame has an implementation for (much like how Unity, Unreal and Godot work today, but arguably XNA did it first for C#).
In basic terms, you write your game using MonoGame’s implementation and it then generates a project to deploy on your target platform.
Contrary to some writers’ beliefs, like the swan on the lake, there may not seem to be much movement on the water, but underneath there is furious paddling going on to keep it moving.
P.S. I tried to find an appropriate GIF to demonstrate this and failed, for that I am sorry. The best I could find was a worried cat learning to swim :S.
One thing that keeps coming up from time to time, and is sometimes confused with “MonoGame is dead”, is that the XNA implementation used in MonoGame (the language and structure) has not changed for years. This is very true (much like a car has 4 wheels, doors and an engine): the “front end” of MonoGame has, for the most part, remained the same since XNA 4.0 (the last iteration of XNA). This is not to say nothing has changed, but rather that it has remained stable and secure; what you wrote 10 years ago will still run today (with the obvious caveats for minor critical changes).
Where MonoGame evolves is all the hard work and underpinnings to MAKE your game run on all the other platforms, handling the complexities for how Audio works, how graphics are drawn to the screen, and the vast technologies involved for different devices on a multitude of platforms. This is no mean feat as vendors are constantly updating, changing and maintaining their platforms and “just expect” developers to keep up or be left out in the cold.
So as MonoGame evolves YOU DO NOT NEED TO CHANGE YOUR GAME, what you have written and the MonoGame features you use will continue to work, it will just work differently on the target platform when it is eventually compiled (built).
In fact, MANY of the original XNA Game Studio 4.0 samples and code STILL WORK today, that is how strong MonoGame is! Check it out for yourself from the XNA Game Studio Archive
Just be aware, that XNA 1, 2 and 3 samples WILL need updating to XNA4/MonoGame to actually work (it is not magic after all).
This is not to say you never need any vendor platform code at all in your project (MonoGame cannot build for EVERY feature), for instance, if you want to use Apple’s notification system and Google’s notification system in your project, you will still need to handle that, but thankfully C# makes that easier with platform dependent code (again, much like how other engines/frameworks do).
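As a rough illustration of what that platform-dependent C# can look like, here is a minimal sketch using conditional compilation. Note the assumptions: `ANDROID` and `IOS` are compilation symbols conventionally defined per platform project, and the helper classes called below are hypothetical placeholders you would implement yourself, not MonoGame APIs:

```csharp
// Sketch only: routing a notification request to platform code.
// The *NotificationHelper classes are invented placeholders.
public static class Notifications
{
    public static void Schedule(string title, string message)
    {
#if ANDROID
        // Call into Android's notification APIs here.
        AndroidNotificationHelper.Schedule(title, message);
#elif IOS
        // Call into Apple's UserNotifications framework here.
        IosNotificationHelper.Schedule(title, message);
#else
        // Desktop platforms: no-op or log instead.
        System.Console.WriteLine($"[Notification] {title}: {message}");
#endif
    }
}
```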
The MonoGame community is VAST, and as previously stated, most XNA/MonoGame books that have been written since 2010 will still continue to be valid and worthwhile (now find me an engine/framework that can say that). About the only thing that has changed (due to the underlying features mentioned above) has been how you install MonoGame and how you build with it, but more on that later.
Amongst the MonoGame community’s weaponry are:
And so much more, since any XNA content is also valid (from the coding side) and about the only real difference from the XNA days is how the content pipeline is used (which is totally optional).
The MonoGame toolchain has gone through many iterations since its first release, always moving with the times and updating to modern technology: from separate plug-ins, to Visual Studio extensions and, in the past year, an update to use the .NET runtime and toolset. With each iteration the API remains the same, so existing projects are not broken, but under the hood things keep advancing.
And MonoGame does not stop there, still planning ahead and looking further into the future as technology evolves, talk is already well underway for a .NET 8 upgrade which brings even more cross-platform capabilities and (if reports are true) faster compilation and execution (your game goes faster). There is even talk that with the next .NET upgrade, tools such as BRUTE (which provides C++ compilation to some AOT native platforms, such as Switch) could even be retired. (depending on the capabilities published by Microsoft)
For today, MonoGame 3.8.1 (and beyond) is all based on the .NET runtime, and getting started is as simple as running the following from the command line.

Install the project templates:

```
dotnet new --install MonoGame.Templates.CSharp
```

Create a new project:

```
dotnet new mgdesktopgl -o MyGame
```

Then use:

```
dotnet build
```

to create a build for your game and then run it. For more details on getting started, check out the documentation.
One of the biggest confusions, and usually the reason commentators state that MonoGame does not change, is that it does not change from the outside. The MonoGame team perseveres to ensure the API for MonoGame retains its XNA origins. It does not change for a reason, and thanks to that, games written yesterday will continue to work today (and beyond).
MonoGame does, however, encourage and support community members building ON TOP OF MonoGame to further extend it, and then helps to promote those extensions for other MonoGame users. These can range from simple add-ons, to fully-fledged extension frameworks, even to engines built on MonoGame. These engines make full use of MonoGame’s platform capabilities to ship a game and focus on the parts they need to deliver an engine.
Some of the biggest out there include:
It is funny that most engines feel the need to build up an asset store for content, but with MonoGame, GitHub is its asset store for all the things you need to extend your project. As for content, that is accessible from almost anywhere and most of it is either natively supported by MonoGame, or “there is an Extension for that”.
One of the interesting capabilities that came from XNA originally, was the idea that you could have a “Content” (asset library) project, build your assets separately to your code and live happily. In fact, many competitors to XNA scoffed at this and said it shouldn’t have bothered, yet today a lot of developers complain about how long builds take in engines because of content and look for ways to separate it.
MonoGame took the Content Project base and built its own MonoGame Build Tool which focuses on content and adds extensibility (much like XNA) so you can write custom importers and processors to handle different types of content. You could even write your own custom level design configuration and have the entire pipeline process and generate your levels for you. I am a big content pipeline fan and have written and shared many extensions for the Build Pipeline in the past.
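To give a flavour of that extensibility, here is a minimal sketch of a custom processor. The class name, display name and behaviour are invented for illustration; the base types come from MonoGame's `Microsoft.Xna.Framework.Content.Pipeline` namespace:

```csharp
using Microsoft.Xna.Framework.Content.Pipeline;

// Sketch only: a trivial custom processor that upper-cases text content
// at build time. Real processors follow the same shape for any asset type:
// take an input type, transform it, and return the output type.
[ContentProcessor(DisplayName = "Shouty Text Processor - Example")]
public class ShoutyTextProcessor : ContentProcessor<string, string>
{
    public override string Process(string input, ContentProcessorContext context)
    {
        // Log to the content build output so you can see the processor run.
        context.Logger.LogMessage("Shouting: {0}", input);
        return input.ToUpperInvariant();
    }
}
```

Once the assembly containing a processor like this is referenced from the Content project, it can be selected as the processor for matching assets in the MGCB editor.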
However, one of the biggest myths (or debates depending on where your sticks land), is that this was the ONLY way to have content in MonoGame, which is simply NOT TRUE. If you want, you can ignore the whole MonoGame Content Builder and just load content/assets and such manually in code.
MonoGame will never seek to limit you in any way.
So whether you are a content pipeline user:

```csharp
var myTexture = Content.Load<Texture2D>("ball");
```

Or a non-content pipeline user:

```csharp
using (var fileStream = new FileStream("test.png", FileMode.Open))
{
    var myTexture = Texture2D.FromStream(GraphicsDevice, fileStream);
}
```

The path is yours and MonoGame will never dictate or tell you how to build your game.
This post has rambled on for a while, trying to set a few truths against the doubters or nay-sayers who call something dead because they don’t see change happening.
Full disclosure, I have wondered the same because it sometimes takes a long time between releases. But I work with the team, who are equally frustrated at times and things are ALWAYS in motion.
Given no one gets paid to build MonoGame and all the supporters work in their spare time, rather generously I might add, it is still amazing how much love MonoGame gets. To those who still use and continue to contribute to MonoGame, I salute you!
In times of upheaval, you might want to give it a whirl, load up a tutorial and try building something and it is indeed surprising how easy it is. Even without an editor/GUI to lean on, it is amazing how visceral and close to the metal MonoGame feels. Every pixel that moves, every change that happens is directly under your control and it is surprisingly freeing.
Laters.
(Comments welcome!)
TL;DR -> Jekyll builds can have page collisions in different places if the pages happen to have the same title. Skip ahead for the fix!
WOW, it has been a while since I last posted, which is genuinely very bad for me, not to say I have been lazy or up to no good, quite the opposite, I have a myriad of things on my mind:
Personally, I do not see this as a valid excuse, but I have been slowing down a bit of late due to the passage of time and promises made and broken to myself, but c’est la vie.
So why the sudden post? Well, it all started when… no, there is too much, let me sum up.
In short, when posting about my new book, one favoured visitor informed me that all the book links on my blog site were no longer working. I was curious, bemused, confused and, above all, frustrated.
I had migrated my blog some time back and, despite my best efforts and mad “search and replace” skills, I still find the odd broken link. Not put off, I checked out what I thought was the page that had the information, except the links were all fine…
Confused, yes I was.
So in an effort to resolve this I tried a few things:
In the end I went back to basics and tried building the site locally, only to find all my test setup no longer worked, I fixed that and built the site.
What I found was something I did not expect:
Only when building locally do these kinds of conflicts show up; the GitHub Pages deploy actions all report SUCCESS!!!
I did dig deeper much later: the older “pages build and deployment” publishing task does NOT show any errors; however, the newer GitHub Actions based deployment DID actually show the same error (if you know where to look).
What is essentially happening is this:
- A page called `Books.html`, which by default deploys to `<root>/Books.html` (the file I was editing).
- A blog post called `2014-09-02-books.markdown`, which when built is written to `<root>/books.html` (Jekyll strips the dates from posts when generating the HTML).
So two files ended up writing to the same output file, and basically, the last one to process won (the blog post version) as a Jekyll build will keep writing and just let you know what it did.
Laughably, the only way to prevent the collision was to delete one page or the other, or rename them; that is all there is to it.
INCLUDING THE TITLE IN THE FRONT MATTER AT THE TOP OF THE FILE!!
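As an aside, and not the fix I actually used, Jekyll also lets you pin a page to a unique output path with an explicit `permalink` in its front matter, which sidesteps the collision without deleting or renaming anything (the title and path below are illustrative):

```yaml
# Front matter of Books.html (illustrative sketch)
---
title: Books
permalink: /my-books/
---
```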
So once again my blog is a happy place, links work and pages are showing exactly how I expect them to be, phew. Only took an hour of pulling hair out.
I almost find it ironic that one of the things that would have helped the solution was to read my new book, which is due to go on sale very soon (or now if you are reading this in the future).
If you want to learn and understand automation and all the joys that come from letting someone else do the repetitive parts of your job, then I would recommend a read.
And if you attended my talks on Automation recently, you could try all the fun things you can do with automation too.
I hope this helps out any other Jekyll blog users and saves you a certain amount of hair pulling when diagnosing issues on your blog.
Managing automation is hard enough without having to pollute your git history with changes to your actual workflows; in reality, the workflow itself is unlikely to change, but there can be subtle updates in what each workflow action needs to do.
As an example, in the Reality Collective we needed to change the versioning strategy for our packages due to upcoming changes in Unity (another nightmare, to be sure); the actual flow was not changing, we simply needed some minor updates. All this required was updating our automation to change the “Preview” tag we previously used to “Pre”. Now, if we had all the code and scripts inside our actual projects, this would have meant changing ALL the packages in all the projects we deploy for this one minor change. However, we had employed “Reusable Workflows” in all our automations previously, so we could make the changes in a separate repository rather than pollute our core code projects (well, at least in theory).
Reusable workflows come in several forms; the most common are the published “Actions” on the GitHub Actions Marketplace, such as the popular “Checkout” action to clone code at the beginning of an automation. Actions are reusable scripts that can be consumed in your workflow.
But what if you do not want to publish your workflows to the public marketplace? Well, that is where Reusable Workflow repositories come in, like Actions, they are workflows published to a GitHub repo and can be accessed from any other implemented workflow with a few “security” limitations:
See the GitHub Blog article on the launch of Reusable Workflows
As an example, here are all the reusable workflows the RealityCollective publishes for all its packages to consume:
Reality Collective reusable workflows
By default these can ONLY be consumed by other Reality Collective repositories; to use them in other organisations they must be manually copied and maintained.
To actually share them, they would each have to be converted into Marketplace Actions, but we have no need to do that at this time.
Consuming these workflows is easy: simply include Job definitions like the following within the actual workflows in each of our packages:

```yaml
  release-Package-only:
    name: Release package only, no upversion
    uses: realitycollective/reusableworkflows/.github/workflows/tagrelease.yml@v2
    with:
      build-host: ubuntu-latest
```
The workflow is called using `realitycollective/reusableworkflows/.github/workflows/tagrelease.yml`, specifying the organisation, repository and location of the reusable workflow file, followed by a tag or branch (here `@v2`) to source the pipeline from.
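For context, the reusable side of such a workflow is just a normal workflow file triggered by `workflow_call`. A minimal sketch might look like the following; the file contents are invented for illustration and are not the actual Reality Collective workflow:

```yaml
# .github/workflows/tagrelease.yml (illustrative sketch)
name: Tag Release
on:
  workflow_call:
    inputs:
      build-host:
        description: 'The runner image to execute the job on'
        required: true
        type: string
jobs:
  release:
    runs-on: ${{ inputs.build-host }}
    steps:
      - uses: actions/checkout@v4
      - name: Tag the release
        run: echo "Releasing from ${{ inputs.build-host }}"
```

The `inputs` declared under `workflow_call` are what the consuming workflow supplies via its `with:` block.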
I would strongly advise AGAINST using branches as they can be more trouble than they are worth, see below.
Now, obviously over time these workflows need to be maintained and new versions added/updated/removed but you do not want to be changing all the repositories consuming them constantly, hence why reusable workflows require a branch or tag suffix against the reusable workflow file.
And this brings us to why this article exists today, as the public actions GitHub publish have some subtleties that are not immediately obvious and what follows is a guide to managing your own reusable workflows.
If you wish you can create automation for your workflows to automate the tagging for the reusable pipeline repository.
This ensures that all workflows using the V2 tag will get the latest update automatically. Only if there is a breaking change in parameters or flow should you then do a major version update to V3.
This subtlety with rebasing the Major Tag to the new commit is what eluded me in maintaining our workflows recently, so hopefully this will shed some light for you too, if you are using reusable workflows.
As a simple guide for making your own reusable workflows for your own repositories, here are some tips to avoid some of the pitfalls I faced when building our first batch:
Note: changing any workflows in the repository now will NOT BREAK any use of the workflows, making life more stable.
When you need to make changes and create a new reusable workflow release, plan it, test it and update as follows.
No changes are needed to existing workflows as they will use the new updated V1 commit as the source of their workflows.
If you do not recreate/update the Major release Tag commit, your workflows will not see changes unless you update them to the new minor release.
To use the new version, all workflows needing it will need to be updated.
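The re-tagging itself is plain git. Here is a runnable sketch in a throwaway repository; the tag names are illustrative, and the push commands are shown commented out since there is no remote in this sketch:

```shell
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "workflow update"

git tag v2.1.0   # the new minor release tag
git tag -f v2    # re-point the floating major tag at the same commit

# On a real repository you would then push both:
# git push origin v2.1.0 && git push --force origin v2

git tag --points-at HEAD   # both tags now reference the new commit
```

The key step is `git tag -f v2`: the major tag is forcibly moved to the new commit, so any consumer pinned to `@v2` picks up the change without editing their own workflow file.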
It has been a “not fun” time working through updating our pipelines recently, mainly because we followed the GitHub examples and just used the main branch (@main), which created headaches when we needed to make updates. After a few hours examining how GitHub manages its own workflows, we built up the above flow (the major release re-tagging was the latest addition).
But now we have a flow for maintaining our workflows better and we can get back to fighting Unity instead.
Hope this helps.
With Azure DevOps and GitHub Actions you have the option to use their hosted agents on the Azure backend, which is fine as you do get a decent amount of free resource time, but eventually you are going to end up paying for the service. Or, you can host your own agents on your own PC, or even a dedicated box just for building.
If you have spare kit (does not need to be uber fast) then it makes sense to leave it powered on in the corner and have all your builds/tests run on it while you continue to work. And this isn’t an either/or moment either, if you want you can run some of the smaller automations you use to do code validation or check-in “checks” using hosted agents and then run your intensive work on your own box, it is completely up to you.
As mentioned previously, for both Azure and GitHub automation, Microsoft provides several images to run your jobs on; which you choose depends on what the job is doing and the requirements needed to run it. For most script / .NET / NPM tasks Ubuntu (Linux) is fine, and it is also the cheapest from an automation standpoint. As soon as you start needing software, such as Unity, then your options become a bit more limited to either Windows or Mac:
Make sure to check the previous articles for more details on using the hosted agents for either Azure Devops or GitHub Actions.
As the title of the article suggests, self-hosting simply means you are using your own hardware, be it physical or virtual (yes, if you have virtual servers hosted elsewhere, you can self-host on them too); you are in control of the setup, operation and runtime for the host. This can greatly simplify the runtime of your host, and even your automation pipelines, as you can skip the steps needed to set up and install required software because you already know it is installed (because you installed it), rather than having potentially complicated scripts to download and install each requirement on each run.
Fair warning: Unity is still a pain when it comes to automation; ever since they updated to V3 of the Unity Hub, installing / checking installs of the Unity Editor has been very temperamental. Personally, I manage the version of Unity that is installed manually now because automating it became too unreliable. Thanks, Unity!
The requirements needed to run a self-hosted agent come in two parts, what you need to run your pipelines and what you need in order to run the agent itself.
What you need for building / automating and testing will largely depend on what you are building, for example:
For the automation, you will need scripting runtimes and also the agent software of choice, depending on whether you are using Azure, GitHub or even one of the other popular automation services, like GitLab (however, we are only covering Azure and GitHub in this article):
You may find on your initial forays into automation that there may be other requirements needed depending on your workflows, just be sure to document them as required so that when you install another agent, you have everything recorded.
One thing that some people miss, is that agents are virtual setups in themselves on your build machine and there is NO LIMIT to how many agents you can have installed on a single box. In my experience, I generally install approximately 3 agents on a PC, depending on how much I expect the agents to do and how intensive the operations are:
Ultimately though, you can mix and match again: have several small agents set up with a Tag/Name to identify them, and then have one single agent identified as the BUILD agent solely for doing the big builds; then you can mix and match which are used in your jobs.
Personally, I set up about 10 agents for small script jobs and 1 (or 2) dedicated agents for just builds; this throttles the demand based on how intensive the jobs are likely to be and does not slow the build PC (usually my main development PC) down too much.
A critical thing for your agents is the identity the agent uses in Azure or GitHub, ideally this should be a dedicated and separate user, this is to ensure there are no mixups for the account running builds. You can use a personal account, but if that person leaves or resets their access tokens, you will have to recreate all your agents!
So create a new user in GitHub (it is free) and Azure (if also using Azure) and give it sufficient rights to access the parts of the project you want automated.
One often missed fact with agents is where to put them on the build host; whether it is on Windows or Mac/Linux, you need to take into account that the working path that is built can get quite long! On Windows this can create issues; less so on Linux/Mac, but it is still something that has to be considered.
The “Rule of Thumb” I always use is to place the folder for the agent as high as possible (Root ideally) with short folder names, e.g.
All in all, you want to keep the working path the agent uses as short as possible, to give enough path length for your builds, which as stated, can get VERY long in some cases.
Your final path should resemble something like:
Azure DevOps agents are quite robust but do take some practice to set up; the steps are as follows:
Depending on the OS you are installing the agent on, install and start the agent service with:

```
./svc.sh install
./svc.sh start
```
Personally, I have always installed agents as services so I do not need to think about them; otherwise you will need to run a separate command each and every time you need to start them (which some people do). Just check the Azure docs for more details for your operating system.
You now have a single agent setup, repeat steps 6 - 10 for additional agents with a different agent folder name.
Setting up GitHub Actions runners is far simpler than Azure agents, simply because there are fewer steps and they are a lot more streamlined; however, they can still cause confusion.
Depending on the OS you are installing the runner on, install and start the runner service with:

```
./svc.sh install
./svc.sh start
```
Personally, I have always installed runners as services so I do not need to think about them; otherwise you will need to run a separate command each and every time you need to start them (which some people do). Just check the GitHub docs for more details for your operating system.
You now have a single runner set up; repeat steps 5 - 10 for additional runners with a different runner folder name.
We are now at the end of our series, we have covered all the necessary bits to get your automation pipelines setup using the service(s) you require and you should be well armed for what lies ahead.
I have left personal notes through the articles to guide you in your journey, but here is a shortlist:
For GitHub, if you want to use GitHub Actions on PUBLIC repositories, then you need to edit the “Default runner group” and enable it (that caused me a week of head scratching as it is not well documented).
Hope this all helps and if you have questions, comment below!