A demo for Artillery Royale is planned for the end of September.
At first, this demo was going to be two-player only, but I quickly realized that wouldn’t make much sense for most players: right now there is no network support, and not enough players.
So if you wanted to play the demo, you’d need two people in the same room, playing turn by turn (something that is intended for the finished game, but probably not ideal for a demo where I want a quick iteration and feedback loop).
On the other hand, I always thought basic AI would be a pain to code and not fun to play against (because it’s based on a set of rules, players can understand and predict it quickly).
So what was the solution?
Fortunately, we live in a time when you can use “real” AI in your games. Real like the AI in self-driving cars, or the one that won at Go. That kind of real. The kind you can train yourself by giving rewards, to get a neural network in return. The kind that is mostly unpredictable, creative, and fun to interact with.
I mean, that’s the theory.
That being said, it’s still a hard topic. AI development is fun to play with but hard to get right. And to be honest, I’m a total noob in this area. I understand the basics and how it works as a whole, but implementing it is something else.
Fortunately, Unity has some good pieces in place to help you get started: they include a solid toolkit (API) and some tutorials too. I thought it would be easy to apply their example to my specific problem, but OMG, it was way harder than I thought.
My first approach was very naive: using what I had just learned in a good tutorial, I thought it would be quite easy to apply to my own problem. Not true. First I had to refactor all the code to make it work with multiple environments (but that’s a detail). I also had to think hard about a good reward system and implement it, and figure out how to express the AI’s objectives and translate them into code.
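To give an idea of what “expressing objectives as rewards” means in practice, here is a minimal sketch. The actual game code lives in Unity, and these names and weights are invented for illustration, not Artillery Royale’s real reward system:

```python
# Hypothetical reward shaping for an artillery agent.
# The weights below are illustrative, not tuned values from the game.

def step_reward(damage_dealt, damage_taken, miss_distance):
    """Reward for a single turn.

    damage_dealt / damage_taken: hit points exchanged this turn.
    miss_distance: how far the projectile landed from the nearest
    enemy, used as a dense signal so the agent still gets feedback
    when it misses.
    """
    reward = 0.0
    reward += 1.0 * damage_dealt      # encourage hitting the enemy
    reward -= 0.5 * damage_taken      # discourage taking hits
    reward -= 0.01 * miss_distance    # small shaping term for near misses
    return reward

# A direct hit should be worth more than a close miss.
print(step_reward(damage_dealt=10, damage_taken=0, miss_distance=0))   # 10.0
print(step_reward(damage_dealt=0, damage_taken=0, miss_distance=3))    # -0.03
```

The hard part is exactly this balancing act: if the shaping terms are too strong relative to the real objective, the agent optimizes the proxy instead of winning.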
Once that was done, I had some AI training going on, and to be honest it looked like it could yield some results. But then I hit another problem: computational power. My old MacBook Pro is, well, old, and doesn’t have the CPU power needed to train an AI model in a reasonable time. After a lot of internet searching, I also found out that my objective is too complex for my AI model anyway (no matter how powerful my computer is).
Note on the hardware: at some point, I was looking into pimping my MacBook with an external GPU, thinking that it would help. I discovered that it wouldn’t. TensorFlow, the software used behind the scenes, only supports Nvidia GPUs (and falls back to the CPU otherwise). Unfortunately, Nvidia and Apple are at war (and thus incompatible), so an external GPU wouldn’t help.
Getting ready for part two.
Today I designed a new way of getting the AI to work. From my research, I found that people split their objective into small chunks and train multiple brains, then add some code to switch between those brains during gameplay. I’m hoping that this way it will work.
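The brain-switching idea can be sketched like this. Each small objective gets its own trained brain, and simple game logic picks which one acts. The brain names, the dummy decision functions, and the observation fields are all made up for illustration:

```python
# Minimal sketch of switching between per-objective "brains".
# In the real game each brain would be a trained neural network;
# here they are dummy functions standing in for trained models.

class BrainSwitcher:
    def __init__(self, brains):
        # brains: dict mapping a situation name to a decision function
        self.brains = brains

    def act(self, situation, observation):
        # Route the observation to the brain trained for this situation.
        return self.brains[situation](observation)

def move_brain(obs):
    # Walk toward the enemy (hypothetical policy).
    return "walk_left" if obs["enemy_x"] < obs["x"] else "walk_right"

def aim_brain(obs):
    # Pick a shot (hypothetical policy).
    return {"angle": 45, "power": min(100, obs["distance"] * 2)}

switcher = BrainSwitcher({"move": move_brain, "aim": aim_brain})
print(switcher.act("move", {"x": 10, "enemy_x": 4}))  # walk_left
print(switcher.act("aim", {"distance": 30}))          # {'angle': 45, 'power': 60}
```

The appeal of this split is that each brain only has to learn a small, well-rewarded task, which should train in reasonable time even on modest hardware.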
Also, Unity contacted me because they have an AI Training Cloud on their roadmap, and they may provide that service in the future. Hopefully, they will let me try it soon.