Space Dust Studios

Developer Blog


Unity vs Unreal: Choosing an engine for Space Dust Racing – Part 3: Source, Community and the Future


In case you missed them, you can check out the first two parts of Unity vs Unreal: Choosing an engine for Space Dust Racing at Part 1: Adventures in Unity and Part 2: The Unreal Deal.

Source release

Before I go any further, I have to talk about the release of the source code. On top of all the features Unreal Engine 4 offers, I believe having the source available in such an accessible way is one of the most important and exciting breakthroughs – it benefits pretty much everyone involved, even if you don’t use it directly.

Epic’s code gets reviewed and analysed for bugs and functionality. Desired core features are discussed and/or implemented by contributors. It becomes an even more exciting engine from an academic and teaching point of view, especially for game programming courses.

As for us, we massively reduce our dependency risk knowing that we can streamline our workflow and fix time sinks, as well as debug and fix any issues far more efficiently, be they game, engine or editor side.

It also means we are in a position to implement any feature we require, which removes the “the engine can’t do this, so we can’t do that” factor.

And then there is the development community…

Development community

Even though Unity has a source code license, the community in general does not have access, and those who do are bound by non-disclosure agreements (NDAs). While this remains the case, I think they are missing out on the advantages of a more open source release.

Epic has stated in their legal FAQ that the EULA does not include an NDA and that everyone is able to freely discuss the Unreal Engine. This in itself is huge! For us it is great because it means we can talk and blog about the engine and its source code.

“You are permitted to post snippets of Engine Code, up to 30 lines of code in length, online in public forums for the limited purpose of discussing the content of the snippet and not for any other purpose.”
– From the Unreal Engine EULA

The source release means that the development community as a whole is empowered to discuss, find and fix issues amongst themselves, and therefore to solve almost any problem. Not only that, but they can create free or paid middleware with comprehensive insight into exactly how the engine works and interacts with their software. There is huge potential here.

We expect to see plugins or even modifications to the engine source go up on the Marketplace, or at least on GitHub somewhere. We will hopefully see the latest advanced techniques from SIGGRAPH and the like, as well as community improvements to key systems (networking, physics, audio, etc).

The Blueprint system makes it even easier for non-programmers to get involved and create more advanced content, with the experimental Blutility feature even allowing you to add functionality to the editor itself.

Modding

Since there is a low entry point to the editor ($19, assuming they don’t already have a copy), it might be feasible to use the editor to mod or build content for released games. A game could provide a plugin for the editor which would allow other licensees to build new content for that particular game using Unreal Engine. The modded content, once released, should then be usable by any end user. If the game EULA allows it, the content could even be sold – paying the usual 5% royalty to Epic. I would guess the Epic team will support the idea, as it lets them sell more licences and broaden their development community.

From the Blackjack example. Halfway there, just missing one thing…

Future proofing

Although many key UE4 systems are, in my opinion, already ahead of the competition, Epic is actively improving the engine as well as adding new and exciting features. Because the source is on GitHub, and they continuously integrate their internal Perforce changes almost directly into the public repository, we all get a sneak peek at what is coming for the engine. Thankfully, they also like to talk about most of it on their Twitch stream, YouTube channel and forums. Again, this helps us keep track, so we can better schedule work on our end to coincide with progress and releases on their side.

When UE4 was first released, Linux client support wasn’t there yet, but you could see it was not far off (it later arrived in 4.1, along with SteamOS support). Linux editor support was another matter, but soon enough the community jumped on board and started standing up the editor on Linux.

Better still, Epic has now released a roadmap, in the form of a Trello board, for engine development and direction in general. You can go there and vote on the items most pressing to you and your team.

Super exciting screenshot of the Trello board.

Todo

One concern I have with Unreal Engine is how often we seem to need to jump in and modify engine code in situations where we wouldn’t expect to have to. While that is sometimes tolerable for changes kept within the team, it becomes a lot less ideal when you want to distribute those changes. I imagine this will become more of an issue when the Marketplace really comes to life.

An example: if you have a third-party library you need to include in your project or in a plugin, you currently need to manage and copy the DLLs around manually after packaging builds of your game. The engine avoids this (for Ogg, PhysX, etc.) by hard-coding its requirements into .Automation.cs files such as WinPlatform.Automation.cs, which contains a comment I totally agree with:

//todo this should all be partially based on UBT manifests and not hard coded

Similarly, if you have additional non-asset files such as help or HTML project files, you need to manually copy them during the deployment stages.
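
As a stop-gap on our side, a small post-packaging step can stage those extra files automatically. Here’s a minimal sketch of the idea in Python (the paths and file list are made-up placeholders, not our actual project layout):

import shutil
from pathlib import Path

# Hypothetical locations – adjust for your own project and packaging output.
PACKAGED_DIR = Path("Packaged/WindowsNoEditor/MyGame")
EXTRA_FILES = [
    Path("ThirdParty/SomeLib/Win64/SomeLib.dll"),  # third-party runtime DLL
    Path("Docs/Help.html"),                        # non-asset help file
]

def stage_extra_files(dest_subdir="Binaries/Win64"):
    # Copy files the packaging step doesn't know about into the packaged build.
    dest = PACKAGED_DIR / dest_subdir
    dest.mkdir(parents=True, exist_ok=True)
    for src in EXTRA_FILES:
        shutil.copy2(src, dest / src.name)
        print("Staged", src, "->", dest / src.name)

if __name__ == "__main__":
    stage_extra_files()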

I hope some of these issues are addressed before their store/Marketplace really kicks off, as integrating hundreds of engine hacks or changes during updates will get quite messy.

The good news is we can solve these issues!

Miscellaneous todo comments from the source code.

Knowledge base

Another plus for us in using Unreal Engine, which I should mention, is the number of experienced developers out there. Plenty of engine systems haven’t changed a great deal and have been used by many people over the years in UDK/UE3 and prior iterations – systems such as networking/replication, the basic material nodes, Cascade, Matinee, etc. All of this means it will be easier to hire or consult with people who have knowledge and experience with Unreal technologies.

Here in Australia, I know the team up at 2K Australia are using Unreal Technology for Borderlands: The Pre-Sequel! and have done so with many of their other games over the years, with a group of former staff breaking off and starting Uppercut Games – also using Unreal. I’ve seen a bunch of great tutorials from Pub Games for both UDK and now UE4. Recently I came across a couple of studios with roots in the AIE Incubator Program, such as Dancing Dinosaur Games, Daybreak Interactive and Wild Grass Games. There are a lot more game studios here doing Unreal work, such as Canvas Interactive, Commotion Games, Epiphany Games, Reach Game Studios, Mere Mortal Games and Organic Humans, and I’m sure there is a ton I have missed.

In addition to all of those, there are also non-gaming groups using Unreal Engine 4, such as this group here in Melbourne working on sensory therapy for dementia patients (Opaque Multimedia).

That royalty

Assuming a game does well, Epic’s 5% cut will in most cases mean that, long term, Unity with a bunch of plugins would still generally be the cheaper option. But really, under the same assumption, you will be laughing either way.

When starting a project, it is worth doing the research and forecasting to work out where you expect your project to end up – a rough sketch of that sort of back-of-envelope maths follows the quote below. From the start, Epic has advertised and welcomed custom licensing terms to reduce or remove the royalty, if that better suits your project.

“If you require terms that reduce or eliminate royalty for an upfront fee, or if you need custom legal terms or dedicated Epic support to help your team reduce risk or achieve specific goals, we’re here to help.”
– From the Unreal Engine FAQ
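
As promised above, here’s a rough back-of-envelope sketch of that kind of forecast – every figure below is an assumption for illustration, not official pricing from either vendor:

# Illustrative forecast only – plug in your own numbers.
team_size = 5
months = 18
gross_revenue = 400_000  # assumed lifetime gross for the game

unreal_subs = team_size * months * 19    # $19 per seat per month
unreal_royalty = gross_revenue * 0.05    # 5% of gross (ignoring any royalty-free threshold)
unity_seats = team_size * 1_500          # assumed Pro licence price per seat
unity_plugins = 2_000                    # assumed Asset Store budget

print("Unreal estimate:", unreal_subs + unreal_royalty)  # 1,710 + 20,000
print("Unity estimate: ", unity_seats + unity_plugins)   # 7,500 + 2,000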

In the end

Recently on Develop I read that Unreal Engine 4 took spot #2 and Unity 5 spot #3 in their Top Tech list. This goes to show that, in the end, they are both great engines, each offering its own advantages and trade-offs.

As for us, we found that for Space Dust Racing, Unreal Engine 4 aligned just that bit better with the direction we wanted to go, so we decided to give it a run for its money.

I hope you enjoyed this short series on Unity and Unreal Engine! Are you using Unity or Unreal Engine? If so, how are you finding the tech so far for your needs?

Unity vs Unreal: Choosing an engine for Space Dust Racing – Part 2: The Unreal Deal


In case you missed it, you can check out the first part of Unity vs Unreal: Choosing an engine for Space Dust Racing at Part 1: Adventures in Unity.


Unreal GDC

Three of us made it over to GDC this year and after having spent the previous few days participating in the speed dating of game development (also known as Game Connection America), we were leisurely wandering around the GDC expo halls.

We had a chat to the guys at the Unity booth, showed them one of our playable prototypes, then went and said hello to a friend at the Qualcomm booth. Next we stared for a while at Allegorithmic’s demonstration of their Substance workflow, followed by a quick laugh at Goat Simulator. Soon after, we spotted an odd-looking sign on the back of an unknown booth which basically said:

Get Unreal. Full engine and source. $19/MO +5%

After a few exchanges of WTH, we walked around to the front. Sure enough, it was Epic Games’ Unreal Engine 4 booth, with a ton of people gathered around and as many staff members eager to answer the onslaught of questions. After a few more exchanges of WTH with the staff, we joined the queue and had a look at the presentation running at the back. Yep, looked legit. Fast forward a couple of hours to the hotel room, where we had signed up and downloaded a copy to start checking out.

A collection of snapshots taken while in San Francisco attending Game Connection and GDC 2014.

Rendering

For us, compared to Unity, probably the single biggest asset Unreal brings to the table is its rendering technology (or networking, or source access – it’s close). It features modern physically-based shading, full-scene reflections, TXAA, integrated GPU particles, and an efficient and easy-to-use terrain system (including foliage and spline tools), as well as a host of other desired features. Most features are easy to use, efficient and, importantly, well integrated.

A good example is the environment reflection captures, which ‘just’ involve placing primitives (sphere or box) into the scene. They are then captured, and at runtime they are selected, (approximately) re-projected and blended together, in the same screen-coverage-determines-cost way as the rest of the deferred rendering pipeline. In addition, they are blended with dynamic screen space reflections and use the Lightmass data to reduce leaking. All in all, a pleasurable system to work with.

Another example is the particle system. GPU particles are embedded within the standard particle system and support most of the standard particle features, while adding screen space collisions, vector fields and far larger particle counts. They have also made a good effort at complexity-decoupled particle lighting.

Materials

Underlying most of the graphical content is the material system: a visual scripting/node-graph interface that covers most of your needs, from static and skeletal mesh materials through to particles and post-processing effects. To keep things clean, you can create material functions to group node logic and expose it in a simpler form. You can also write custom material expression code snippets if you need advanced shader features or just want to simplify your node graph.

The material and material function editors.

Completed material graphs are first compiled down to HLSL or GLSL, then included by template shaders which essentially pick and choose which parts of the material code should be included for each type of shader required. The result is fed to the device to be compiled further, producing fairly efficient shaders. There is also an option within the material editor to view the generated HLSL code.

If you need to do something more advanced, such as hand-optimizing, using compute shaders or supplying constant arrays, it gets a little less ideal, as you’ll need to jump in and start modifying the engine code – but at least that option is there (first world problems, eh?).

Renderer WIP

Other parts of the renderer can be a bit lacking, missing or just generally incomplete. For example, a single reflection capture is used for each translucent object, picked by a simple distance-to-center heuristic that doesn’t take the influence radius into account, which can easily end up selecting a non-ideal cubemap.
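
To make that concrete, here’s a rough Python sketch (purely illustrative pseudocode, not engine code) of the difference between a pure distance-to-center pick and one that also respects the capture’s influence radius:

import math

# Illustrative data: (centre, influence_radius) for two reflection captures.
captures = [
    ((0.0, 0.0, 0.0), 100.0),     # tight capture around a nearby prop
    ((800.0, 0.0, 0.0), 2000.0),  # room-sized capture that actually covers the object
]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pick_by_distance(pos):
    # Distance-to-center only: ignores whether the object is even inside the capture.
    return min(captures, key=lambda c: dist(pos, c[0]))

def pick_by_influence(pos):
    # Prefer captures whose influence radius actually contains the object,
    # then break ties by distance.
    covering = [c for c in captures if dist(pos, c[0]) <= c[1]]
    return min(covering or captures, key=lambda c: dist(pos, c[0]))

translucent_obj = (300.0, 0.0, 0.0)
print(pick_by_distance(translucent_obj))   # picks the tight capture, which doesn't cover the object
print(pick_by_influence(translucent_obj))  # picks the room-sized capture that does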

In another case, we set out to test foliage instancing with a few boulder meshes. We followed the lightmap requirements but couldn’t get the lighting to match – only to find that the UInstancedStaticMeshComponent::GetStaticLightingInfo function was empty.

Unreal Engine 4 visualizing a bunch of the rendering buffers.

Networking

Another huge plus for Unreal over Unity, for us, is its networking and replication solution. Its overall architecture has been proven over many iterations of the engine, and with Epic Games’ Fortnite as well as many other upcoming games using the current iteration, it should be just as solid.

Right out of the box it supports dedicated authoritative servers, which most commercial online games require. Even better, they can be compiled to run under Linux. If you need to host a bunch of game instances on a server farm back-end, you will find that Windows boxes are almost always much more expensive than Linux boxes on the same hardware: Microsoft’s Azure instances are about 50% more expensive running Windows than Linux, and Amazon EC2 Windows instances are almost double the price of their Linux equivalents.
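
As a back-of-envelope illustration of why that matters at scale (the hourly rates below are invented for the example – check current cloud pricing):

# Hypothetical hourly rates for the same instance size – not real price lists.
linux_rate, windows_rate = 0.10, 0.19   # $/hour
servers = 20
hours_per_month = 24 * 30

linux_cost = servers * hours_per_month * linux_rate
windows_cost = servers * hours_per_month * windows_rate
print("Linux:  ", linux_cost, "per month")     # 1440.0
print("Windows:", windows_cost, "per month")   # 2736.0 – nearly double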

Another nicety is that they have made it fairly easy to run dedicated servers and test multiple users on the one machine, all from within the editor.

Who needs friends? Four-way multiplayer on the one desktop.

Networking under Unity

Under Unity, from what we could tell, most people ignore the built-in solution (for a bunch of reasons) and instead go for a third-party option such as Photon or uLink, though there is a whole spreadsheet of alternatives the community has put together for comparison. If you want a C++ server, you will be looking at writing your own plugin for Unity and building the back-end server yourself.

The solution we were leaning towards was not really ideal. It did support dedicated authoritative servers, where we could run our fundamental physics and gameplay simulation on the server, but as it was written in C# we would have needed to run it under Mono on Linux, and we had been warned to expect noticeably reduced performance compared to Windows. At the time, judging by the forums, this approach was also mostly untested on Linux. For these reasons, along with the upfront and ongoing costs involved and a bit of misalignment with what we really needed, we budgeted for writing our own solution.

At that stage we also re-evaluated UDK/UE3 because of its networking framework, but due to cost and engine limitations we decided against it. Now that UE4 is out, it solves a lot of those issues for us.

Networking resources

If you want to learn more about the basics of networking and replication under Unreal, there is a great long-standing UE3 documentation page, as well as a smaller UE4 page and a Networking Tutorials playlist that uses the Blueprints system (though watching those videos, I was a little concerned that the server side of the blueprint might ship with the client, which would be far from ideal – I’m not sure that is actually the case, however).

Blueprints

Blueprints would have to be one of the better integrated visual scripting languages I have used. It is fairly clean and easy to use, but what sets it apart most is the runtime visual debugger – debugging a visual language without one can be painful. It is also great to see that they are continuing to add features to make the workflow that much better. For anyone struggling to learn C# under Unity, Blueprints could potentially offer the answer.

I have heard that Blueprints carry roughly the same cost as UnrealScript, at about ~10x slower than writing the equivalent in C++, but used correctly that can be mostly mitigated. It does make me wonder, though, why they aren’t compiled down to C++ and then run through an optimizing compiler, in a similar way to the material networks.

If only the links were depicted as strands of spaghetti, we could redefine spaghetti code.

Sample projects

Worth a mention is the number of high-quality, varied, meticulously maintained example projects available through the Marketplace. While learning the engine, I would often load up and refer to the Shooter Game, Strategy Game or Content Examples, and I also spent a good amount of time looking over the Elemental demo and Effects Cave, as well as most of the other projects. Although we aren’t using the vehicle physics released in UE4.2, the Vehicle Game is another top example.

They are all definitely worth a look and you can find supporting documentation on their site.

The rest?

Most of the other core built-in systems in Unreal – such as NavMesh, audio, streaming and the animation tools – look solid (if a little FPS/TPS orientated). It also includes CSG tools, which are handy for prototyping and blocking out levels.

The Windows XInput implementation is currently missing vibration support, but otherwise it works well. They have also included a nice plugin framework for adding additional input modules.

The experimental behaviour trees look good. The cinematics tool, Matinee, is cool, but looks likely to be superseded eventually by a newer system called Sequencer. They are also in the process of improving their UI systems by adding a WYSIWYG system called UMG (Unreal Motion Graphics). It will be built on top of Slate, which is often used for game UI and is also what the editor itself uses.

I had better leave it here for this post, but next time I’ll cover the rest, including some of the key considerations.

Interview: “Veteran Developers Gone Rogue” – Team Space Dust


Recently our friends at Xsolla interviewed us about our studio background, company vision, and where we’re headed for the future. Full interview with Michael below.


“Our generation grew up with couch multiplayer. Crash Team Racing, Mario Kart, Micro Machines and Mashed were some of our favourites,” says Michael Davies, Lead Gameplay Developer at Space Dust Studios.

Conveying a very relatable message, Michael shows that inspiration can come in many different forms, even sprouting from a nostalgic part of our lives.

Lead Gameplay Developer, Michael Davies is 2nd from the left

We at Xsolla recently had the chance to interview Space Dust Studios’ Lead Gameplay Developer, Michael Davies, and see what he and his team have been excitedly producing, including their newest unreleased game, Space Dust Racing. Below we’ll move right into the question-and-answer session with Michael.

– What is your gaming background and how did you come to release a game?

We’re a team of five senior developers based in Melbourne, Australia, who have worked together on and off over the last decade at a mix of AAA studios, most recently Visceral Games (Electronic Arts). The formation of Space Dust Studios was a mix of luck, timing and crisis management. Visceral Games Australia closed in late 2011, leaving many of us looking for work at other local studios or heading overseas.

After a year or two of hopping around at other studios, we realised a few things. For starters, there was a distinct lack of PC and console development happening in Melbourne, yet this was where we all wanted to work and live. Secondly, working with different teams made us realise that our experience and rapport with each other was an incredibly valuable asset. Finally, we were all in a good financial position to take a risk, plus there were some amazing game engines for PC/console coming out that looked very promising. So we bit the bullet and started our own studio.

– How did the idea of Space Dust Racing come about?

Our generation grew up with couch multiplayer. Crash Team Racing, Mario Kart, Micro Machines and Mashed were some of our favourites. Yet everywhere we looked, we saw games pushing online multiplayer, perhaps because it’s still a relatively new possibility in the grand timeline of gaming, and it’s certainly more convenient in many ways.

But Nathan found himself regularly digging out his PSX to play kart racing classics with his kids, and I was still hosting regular co-op Mashed nights with my friends, despite being well into my 30s. We did some research to see if any recent titles filled a similar gap, but most of them focused on split-screen or online multiplayer.

We decided to go all out and make a top-down party racer for a huge number of local players. We love science fiction, so we decided to go with a “brutal cute” art style that would appeal to both family gamers and core gamers.


– Did you consider how you were going to monetize when you conceived the game?

We did, absolutely. Space Dust Racing is a couch party game which involves fast-paced rounds and (probably) lots of shouting. Because of this, we wanted to keep the transaction model simple for individual players, who are most likely going to be mashing buttons to make the game start as quickly as possible, so we decided to go with an upfront premium business model. The host simply pays for the game, and the controller application is free for other players.

We’re also planning to add some downloadable content based on community feedback to give gamers some extra bang for their buck, but this will follow the same host-only purchase model. Couch co-op is great for monetizing, as guests are essentially getting a free demo of the game, and if they like it they can buy it for themselves, then host their own gaming nights, and the process repeats with their friends.

– How do payments affect how you add content to your game?

If we were making a freemium game, we’d actually be in a better position for generating content. Microtransactions are great not only because they provide a more steady stream of income, but because they also tell you what players are interested in, and what’s not working in your game. We won’t have that luxury, which is why we’re pushing hard to grow a Space Dust Racing community early, for example through our developer blog at http://blog.spaceduststudios.com. We want to get feedback while we can still do something about it.

– What sorts of payment systems do you use and what really works best for you?

We’re currently building the vertical slice for PC but we haven’t locked down our final target platforms, so I can’t say for sure. What I do know is that as developers we’re looking for an all-in-one payment solution that handles everything from analytics, reporting, easy game integration, and acceptance of all major regional currencies through to 24/7 customer support.

Beautiful in-game artwork developed by the Space Dust team. Get hyped!

– Do you have any other projects coming up?

We do indeed! Bubbling away in the background is a concept for a more hardcore-oriented sci-fi airborne arena shooter, and we have some free HD concept art available for this at our website http://www.spaceduststudios.com. I’d love to provide more details but we’re not quite ready to push this baby out of the nest yet.

– What have you learned from Space Dust Racing?

Our first project has taught us a lot already. You need to be organised and highly efficient as a small independent game studio, because time is scarce when you’re the content creators and the business managers!

– What advice do you have for any other developers who are interested in putting out games?

We’re spoiled for choice when it comes to game engines these days, so just pick one – it doesn’t matter which one – and learn it inside out. Get involved in as many online game dev communities as you can find, and don’t be afraid to share your ideas and ask questions, as this is the best way to learn. Don’t fall into the trap of trying to make an MMORPG for your first title. Instead, start with very simple projects like Guess The Number, Tic Tac Toe or Hangman, then work your way up to more ambitious concepts like 2D shooters or puzzle games. Leave 3D and online till last, as they’re the hardest to get right.

Conceptual 3D Mesh images of various in-game weapons giving us a small taste of what’s to come

The hardcore sci-fi airborne arena shooter in pre-production will have an official announcement soon, with more details to come. Keep up to date with their production and news at http://www.spaceduststudios.com/ and make sure to stick around on our blog to read more insightful articles into the minds of this generation’s game developers!


To leave a comment, head on over to the Xsolla article:
http://blog.xsolla.com/2014/06/23/interview-veteran-developers-gone-rogue-team-space-dust/

Unity vs Unreal: Choosing an engine for Space Dust Racing – Part 1: Adventures in Unity


Ahh… finally, a break from the beautiful artwork our awesome art team has produced – it’s time to break out the programmer art. I promise this short series of posts will be nice and dry, as I cover our take on Unity vs Unreal Engine, including some of the reasons we switched from Unity to Unreal for Space Dust Racing, and some of our experiences with both engines so far. This post may get a little technical in places, so feel free to skim!

Back in the day…

Prior to starting Space Dust Studios, the majority of our experience was working with cross-platform in-house engines. Depending on who you ask, this was both a blessing and a curse. On one hand we often wished the technology was more mature, but on the other hand we were lucky to be close enough to the underlying technology to make changes or add functionality, and were always able to solve any issues we faced.

Moving to a startup environment meant we were faced with a very different situation. We weren’t ready to licence a million dollar engine, and building our own engine did not align with our resources or goals. UDK and the CryEngine FreeSDK – while great in parts – seemed to be severely limited in the kind of rendering technology you could implement on top of them. We needed something more flexible for the type of environments and effects we were looking to create.


Unity

I had started looking at Unity 3.5, and before too long Unity 4 was out, sporting their new deferred DirectX 11 renderer, which was architected in a way not too dissimilar to what we were used to.

The Butterfly Effect video was released around the same time and looked great. It showcased their DirectX 11 rendering, along with additional features such as physically-based shading, HDR image-based lighting, and improvements to character animation, hair, skin, cloth and other rendering features (though unfortunately many of these haven’t been released during the 4.x development cycle).

Apart from an underlying inflexible rendering core (which, as far as we could tell, doesn’t allow for techniques such as proper stencil-optimized deferred decals, reduced-resolution lighting, custom shadow attenuation or TXAA, and doesn’t really let you create your own custom light types or uber lighting shaders – to name a few), there is still a fair bit of leeway for customized rendering: you get access to additional render targets and the ability to write shaders, you can modify a few key internal shaders and generate procedural geometry, and there is a native plugin system with which you can start calling into the DirectX device directly.

Unfortunately, parts of the feature set seemed only surface deep. Many of the DirectX 11 features were not fully exposed, and were difficult or error-prone to get working via a native plugin. We quickly ran into the lack of support for floating point textures and 3D texture importing, limited texture addressing and filtering support (UVW), support for Dispatch but not DispatchIndirect, and limited support for multiple render targets (we needed 6–8 MRTs for certain effects but Unity artificially capped it at 4), as well as plenty of bugs, some of which would have made the game unshippable for us without support or source access to fix them.

“We license Unity source code on a per-case and per-title basis via special arrangements made by our business development team. As this can be quite expensive, we do not generally license source code to smaller operations, educational institutions, nor to companies in countries which do not have adequate legal intellectual property protection.”

– From Unity’s FAQ on licensing source

An example: alt-tabbing away from a fullscreen DirectX 11 deferred game would crash when returning to the game. I lodged a report with a test-case project via the in-editor crash tool back in Unity 4.0.1f, then checked the issue tracker, and sure enough it had been reported back in February 2013 but is still yet to be fixed. (I did recently see an acknowledgement of the issue in the Unity 4.5 release notes, so hopefully it will be fixed in the next release.)

Another example was a sporadic issue on certain PCs where, before even reaching the input/graphics configuration dialog (which appears before we are given any control), the game would become unresponsive… but not always.

Community

“630 thousand monthly active developers and nearly 2.9 million registered developers”

– From Unity’s public relations page

Thankfully the Unity community is fairly active on the forums (and generally just awesome), so for many of the other bugs we came across we were not the first, and often someone else had posted a workaround where one was possible (crashing on exit, the add force oddity). For other issues, we knew we would probably need to reimplement a feature in a plugin to avoid them (XInput, I’m looking at you).

There are tons of community-created plugins available on the Asset Store. Some are free, some are licensed per seat and others per project. To get set up comfortably, you really need to set aside a good chunk of cash for a bunch of these.

DIY

There are plenty of basic features which you end up building yourself and then sharing between your projects: better tools for working with prefab hierarchies, a suitable object pooling framework, and in-game debug menus and helpers, to name just a few. Hopefully the next big release will start filling in these gaps.
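
For anyone unfamiliar with the pooling pattern mentioned above, here’s a language-agnostic sketch in Python of the core idea (in Unity itself this would be C# around prefab instances; the Projectile class and reset hook are just placeholders):

class ObjectPool:
    # Reuse objects instead of constantly instantiating and destroying them.
    def __init__(self, factory, size):
        self._factory = factory
        self._free = [factory() for _ in range(size)]  # pre-warm the pool

    def acquire(self):
        # Hand out a free object if one exists, otherwise grow the pool.
        return self._free.pop() if self._free else self._factory()

    def release(self, obj):
        obj.reset()              # return the object to a clean state for reuse
        self._free.append(obj)

class Projectile:
    def __init__(self):
        self.reset()

    def reset(self):
        self.position = (0.0, 0.0, 0.0)
        self.alive = False

pool = ObjectPool(Projectile, size=32)
shot = pool.acquire()
shot.alive = True
# ...when the projectile expires, hand it back instead of destroying it:
pool.release(shot)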

In-editor footage from an early volumetric rendering prototype built in Unity with a custom rendering extension plugin.

Due to the rendering limitations we started crossing off features, but we also wrote a renderer extension plugin DLL to work around what issues we could that way. In order to set up the sampler states as required and provide support for more MRTs and the formats we wanted, we had to jump through a ton of hoops, including calling through to the plugin in a fairly delicate way. It was an ongoing concern of ours that future Unity versions could cause implementation issues and require a redesign or stripping out of features. In practice, thankfully, it made it through with only minor modifications between upgrades, and we were able to strip the plugin back a bit as engine bugs were fixed. Alas, with access to the source, it would have been trivial from the start.

Overall

There were plenty of positive experiences while using Unity. It was great having diff-able text formats for assets. It was easy to get almost anything started, and overall our prototypes came together nicely. Before all the big GDC announcements I had implemented physically-based shading with planar and captured reflections in Unity, all fairly painlessly, and I was looking forward to working on screen space reflections when I got back from GDC.

Footage from our physically-based shading prototype built in Unity. Flicks between two preset times of day.

I’ll leave this post here for now, but before I do I quickly want to mention that, as part of the GDC announcements, Unity revealed they are busy working away on Unity 5 (with physically-based shading and a bunch of new features). We expect it to be a fantastic release when it’s done.

Next time I’ll touch on our adventures at GDC and Unreal Engine 4.

Tools and processes for remote game development – Part 3: Security and backups

Our AutoBackup Python script.

Ahh, the corporate video game lifestyle: quad 24″ monitors, $1,000 office chairs, free snacks and beer o’clock. But when you and four colleagues throw that lifestyle away to pursue your indie game dev dreams, more often than not you can’t (and shouldn’t) rent a fancy office, because – in case you weren’t paying attention – you’re now peasants and every cent counts. With the right tools and processes, though, you can work together remotely on a tight budget.

In this three-part article, I’ll run through the tools and processes we use for remote game development at Space Dust Studios. Part 1 focuses on communication, Part 2 focuses on collaboration, and Part 3 focuses on security and backups. We’ve evolved this setup over the last 12 months and it’s working well for us, though we’re a team of five living in the same city, so your mileage may vary. If you’re working with a bigger team or are spread across different time zones, you may need to make some changes.

We’re always on the lookout for improvements, so please leave a comment if you’ve got suggestions!


Part 3. Security and Backups

By working remotely you’re pushing a lot of sensitive information into the cloud. It’s worth thinking carefully about security for every service you’re using, particularly if your company is going to be entering the public eye, which will also attract the attention of hackers (even if they are just 15 year olds).

Private vs public

Make sure that any private service you’re using doesn’t expose the information publicly. It’s good practice to try to ‘hack’ into your own stuff from a fresh browser with nothing logged in, and to try following internal links from outbound emails. If you’re posting internal videos on YouTube, make sure they’re unlisted or privately shared – or better yet, upload the videos to Google Drive instead. You’ll get the same YouTube-style player without the risk of accidentally making them public on your YouTube channel.
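
One way to automate that fresh-browser check is a small script that fetches your internal URLs without any credentials and flags anything that comes back readable. A minimal sketch of the idea (the URL list is hypothetical, it assumes the third-party requests library, and a 200 response just means “go check this one manually”):

import requests  # third-party library: pip install requests

# Hypothetical internal links harvested from outbound emails and docs.
INTERNAL_URLS = [
    "https://example.com/builds/latest.zip",
    "https://docs.google.com/document/d/SOME-INTERNAL-DOC-ID/edit",
]

def check_public_exposure(urls):
    for url in urls:
        # No cookies or auth headers – simulates a logged-out visitor.
        resp = requests.get(url, allow_redirects=True, timeout=10)
        exposed = resp.status_code == 200
        print(resp.status_code, "PUBLIC?" if exposed else "looks protected", url)

if __name__ == "__main__":
    check_public_exposure(INTERNAL_URLS)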

2-step verification

Make sure everyone on the team is using 2-step verification where possible. This includes all Google and Apple services, as well as Dropbox. It adds an extra layer of security to your accounts, requiring a password and a code sent to your phone over SMS. You really don’t want someone getting into your company email archive in the cloud!

One potential gotcha with 2-step verification is travelling. If you’re attending an overseas conference or trade show, double-check with your phone company that international roaming is turned on before you head over there.

Virtual private networks (VPN) and SSH tunnels

If you’re too cheap for dedicated hosting (which we are), never expose a service on your local network (such as Perforce or VNC) directly to the web. Instead you can use a VPN to let team members log in securely to your home network, although personally I prefer using SSH as I can directly control which ports and services team members can access.

There are many free SSH servers and clients out there to choose from, although be careful of the licensing terms which may stipulate they’re for personal-use only.

Use an unusual (and high) port number for your SSH connection, instead of the usual 22 or 443, and opt for public key authentication over passwords to guard against brute-force and password-guessing attacks.
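
To illustrate the team-member side of this, here’s a small Python wrapper around the standard OpenSSH client that opens a tunnel forwarding Perforce’s default port over a non-standard SSH port with key authentication (host, port, user and key path are all placeholders):

import subprocess

SSH_HOST = "home.example.com"     # placeholder – your home/studio gateway
SSH_PORT = "50022"                # unusual high port instead of 22 or 443
SSH_USER = "spacedust"
IDENTITY = "~/.ssh/id_spacedust"  # public key auth, no passwords

def open_perforce_tunnel():
    # Forward local port 1666 to the Perforce server on the studio network.
    cmd = [
        "ssh", "-N",                    # tunnel only, no remote shell
        "-p", SSH_PORT,
        "-i", IDENTITY,
        "-L", "1666:localhost:1666",
        SSH_USER + "@" + SSH_HOST,
    ]
    return subprocess.Popen(cmd)

if __name__ == "__main__":
    tunnel = open_perforce_tunnel()
    tunnel.wait()  # keep the tunnel open until interrupted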

Password salting

You’re going to be creating a lot of company logins for various online services, so be sure to use a different password for each one, and prefer long passwords over short ones where possible. A simple way to get an easy-to-recall yet hard-to-guess password is to “salt” a master password: take the original password and add something different for each service, based on an easy-to-remember rule like “the last letter of the service name”. (Make up your own rule, though.) Proper password salting is beyond the scope of this post, but you can read more about it here if you’re a big cryptography nerd: Secure Salted Password Hashing.
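
For what it’s worth, here’s a toy Python illustration of that per-service rule alongside the proper salted hashing the linked article describes (the rule and values are examples only – invent your own):

import hashlib
import os

def per_service_password(master, service):
    # Toy "salting" rule from the post: append something derived from the service name.
    return master + service[-1] + str(len(service))

def store_password(password):
    # Proper salted hashing (what the linked article covers): a random salt
    # per password plus a slow key-derivation function.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

print(per_service_password("correct-horse", "dropbox"))  # correct-horsex7
salt, digest = store_password("correct-horse")
print(salt.hex(), digest.hex())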

Automated backups

Maybe one day you’ll get hacked. Maybe a hard drive will fail. Maybe Google will go belly up and take all your email with them. Maybe an employee will accidentally delete the entire contents of your master server. Whatever the cause, you really need your own on-site company backup solution for disaster recovery.

The easiest backups are the ones that happen automatically. We wrote a cheap-and-nasty Python script (why am I so disparaging towards my Python scripts?) that backs up the contents of our Dropbox, Google Drive, Perforce, Trello, mailboxes, and our websites plus their MySQL databases into DVD-sized, password-protected RAR files. Make sure you add a RAR recovery record so the archive can survive some data corruption, and copy the files onto multiple physical media (ideally of different types) as part of the automated backup process.
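
Our actual script is longer and considerably nastier, but the core of an AutoBackup script like ours might look something like this sketch (paths, password handling and destinations are placeholders, and it assumes the rar command-line tool is on the PATH):

import datetime
import shutil
import subprocess
from pathlib import Path

# Placeholder settings – point these at your own folders and backup drives.
SOURCES = [Path("D:/Dropbox"), Path("D:/GoogleDrive"), Path("D:/PerforceCheckpoints")]
DESTINATIONS = [Path("E:/Backups"), Path("F:/Backups")]  # multiple physical media
PASSWORD = "use-a-real-secret-here"

def create_backup():
    stamp = datetime.date.today().isoformat()
    archive = "backup_" + stamp + ".rar"
    cmd = ["rar", "a",
           "-hp" + PASSWORD,   # encrypt file data and archive headers
           "-rr",              # add a recovery record to survive some corruption
           "-v4480m",          # split into DVD-sized volumes
           archive] + [str(s) for s in SOURCES]
    subprocess.run(cmd, check=True)

    # Copy every generated volume onto each destination drive.
    for volume in Path(".").glob("backup_" + stamp + "*"):
        for dest in DESTINATIONS:
            dest.mkdir(parents=True, exist_ok=True)
            shutil.copy2(volume, dest / volume.name)

if __name__ == "__main__":
    create_backup()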

They’re not cheap, but you can get external RAID hard drive enclosures to protect against drive failure. We use my personal WD MyBook Studio II, which has 2TB of storage mirrored in RAID 1 across two 2TB drives.

Off-site backups

We also create manual off-site backups to protect against fire and theft, by copying the RAR files onto a USB key and stashing it in a waterproof bag in my garden shed. It all sounds very cloak-and-dagger, and it is, so just roll with it and pretend to be James Bond while your partner watches on, shaking her head pitifully. Another option would be auto-uploading the RAR files to an FTP server, but our total backup size is already at 20GB and my broadband upload speeds aren’t that great, so that’s out for us.

Restoring

It’s great to have backups, but have you actually tried restoring your data from them? It’s worth taking the extra time to do this, even if you just restore to a different location for testing purposes. Otherwise the backups are useless, and you may as well have done nothing. Iron out the glitches at the very beginning, not when you’re up the proverbial creek with the next milestone due in 24 hours.


Everything outlined above is working well for us now, but as we ramp up our team and projects we’ll most certainly need to get office space. But for that awkward period between starting your company and bringing home the bacon, hopefully the tools and processes I’ve covered here will help get your team and project moving along for very little financial investment.

 

Have we missed something? Please let us know in the comments and we’ll add it to the post!
