DeepJeb Lives!

Sort of… but let’s start all the way back, a whole week ago. I challenged myself to learn how to code an “A.I.” this year. That’s - next to 1440p - one of my new year’s resolutions! (haha)

So where do I begin… Let’s start with a very basic understanding of how modern A.I. or neural nets work. They are actually not that modern and have been used for decades to filter spam out of your emails. It’s actually very hard to code a function or algorithm that can detect such malicious messages accurately. How would you even do this? Imagine what kind of responsibility you, as a spam filter developer, carry, filtering out millions and billions of emails no one will ever read! How often do you check the spam folder? What if someone misses something important? It all seems to work fine, as if someone were actually filtering your messages manually. That was probably the case at some point in the past, but not in this day and age anymore!

Computational neural nets are old!

Today, we use trained neural nets on the computer, or computational neural nets (CNN 😜). It sounds fancy and sometimes even scary, but all it is, is a huge field of numbers. These numbers are so-called weights, and their purpose is to weigh the importance of a certain connection into a node. This node is also called a neuron, and there can be an arbitrary number of these in a neural net. The typical ant has about 250,000 of these, and similar to the computer version, each one receives some inputs and can give off a massive neuronal burst once a certain threshold is reached. DeepJeb’s neuron simply sums up all its inputs, puts the total into a function, and the result is of course the input for the next neurons in line. An actual neuron in a brain is probably a little more advanced than this, as the signals it receives are more complex than just numbers. An electrical impulse can be high or low, but it can also have an entirely different shape: be longer, thicker, faster and so on. The simplified version on the computer works nonetheless.
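If you like code more than words, here is a tiny sketch of such a neuron in Python. It’s my illustration, not DeepJeb’s actual code:

```python
import math

def sigmoid(x):
    # Squash any number into the range 0..1.
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights):
    # Weigh each incoming connection, sum everything up,
    # and pass the total through the activation function.
    total = sum(i * w for i, w in zip(inputs, weights))
    return sigmoid(total)

print(neuron([0.5, 0.2, 0.9], [0.1, -0.4, 0.7]))  # ~0.65
```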

The typical neural net on the computer is structured layer by layer, from the inputs to the outputs of the whole system. Each input has a desired output, and the challenge in designing a neural net is to keep it delivering the right answers for questions it was not trained with. In other words, a neural net is only of value if it can produce the right outputs for unknown inputs. But what does that training mean?
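Before we get to training, here is how such a layered forward pass could look in code. Again just a sketch under my own assumptions: each layer is a list of neurons, each neuron a list of weights, one per value coming out of the previous layer.

```python
import math

def neuron(inputs, weights):
    # Weighted sum, squashed through a sigmoid into 0..1.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total))

def forward(inputs, layers):
    # Push the values through the net, layer by layer.
    values = inputs
    for layer in layers:
        values = [neuron(values, weights) for weights in layer]
    return values  # the outputs of the last layer

# Example: 3 inputs -> hidden layer of 2 neurons -> 1 output neuron.
net = [
    [[0.2, -0.5, 0.1], [0.4, 0.3, -0.2]],  # hidden layer
    [[0.6, -0.1]],                          # output layer
]
print(forward([0.1, 0.9, 0.5], net))
```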

Computational Neural Net Example

The act of training a neural net is simply tweaking all the randomly assigned weights from before. In the beginning it is just randomness: you use the computer to fill all the weights with a randomizer. Then you develop an algorithm that goes from node to node, tweaks each weight a little and checks how it impacts the output. A good tweak is kept, a bad tweak discarded! This goes on until a known input produces its known output. It’s that simple! Once at this stage, the neural net can do exactly this one task. Nothing else - sadly.
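In code, that tweak-and-check idea could look something like the following hill-climbing sketch. The names, step sizes and the toy example are mine, not DeepJeb’s actual training loop:

```python
import random

def train(weights, predict, pairs, steps=10000, tweak=0.01):
    """Randomly nudge one weight at a time; keep the nudge if the
    total error over all known input/output pairs goes down."""
    def total_error(ws):
        return sum(abs(predict(ws, inp) - out) for inp, out in pairs)

    best = total_error(weights)
    for _ in range(steps):
        i = random.randrange(len(weights))           # pick a weight
        old = weights[i]
        weights[i] += random.uniform(-tweak, tweak)  # tweak it a little
        new = total_error(weights)
        if new < best:
            best = new        # good tweak: keep it
        else:
            weights[i] = old  # bad tweak: discard it
    return weights

# Toy usage: learn y = 1 - x with one weight and one bias.
pairs = [(x / 10, 1 - x / 10) for x in range(11)]
weights = [random.random(), random.random()]  # start with randomness
predict = lambda ws, x: ws[0] * x + ws[1]
train(weights, predict, pairs)
print(weights)  # should approach [-1.0, 1.0]
```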

This is where the hard part begins. You feed the neural net more and more input-output pairs and go on until it can solve all of them without - and this is the key - changing any weights in between. A trained neural net is static, just as mentioned a huge field of numbers that can do all the jobs you ask it to. Once a neural net is trained, no big calculation is needed anymore. The intensive computation is tweaking all the weights thousands of times per second until it works. The more neurons there are, the more computations are needed.

So another key element of any neural net is to generalize. Instead of developing a neural net that can detect a sentence, you develop a net that can detect individual letters. Then another one that can only detect combinations of letters, i.e. words, but not the letters themselves. Then yet another one that can recognize a sentence but not words and letters, and the last one maybe a paragraph. I hope you get the idea! The point of this is: the more general something is, the better it can be applied to other tasks. Just think of the human face. There is a part of your brain which can only detect faces. Not eyes, not a nose, just whatever makes up the whole face. Neuroscientists like Nancy from Nancy’s Brain Talks have actually studied tumour patients and found strong evidence for this being the case. Stimulating the relevant part of the brain led to patients losing the ability to recognize people by their faces. They had to look at specific things like their hairdo or their clothes to tell them apart. For science!

I bet every one of you has experienced that the face recognition in our brain is sort of a generalized part that can be used for other things. Cars, for example! To me, every single car there is has a unique face to it. Some look evil, some cute and others extremely clumsy. I often find myself associating car faces with their drivers, but that’s maybe not such a good idea…

A bug or a feature?

Anyways, my first task for DeepJeb is a rather simple one compared to the task that lies before him - steering a rocket. I chose colors, complementary colors to be precise. A color on a computer is usually made up of three numbers ranging from 0…255. 0 stands for absolute darkness and 255 for the opposite, full brightness. The three numbers relate to the three colors Red, Green and Blue, or RGB. To simplify things I chose to normalize these colors from 0…255 to 0…1. I found that calculating with high numbers can quickly lead to some weird behavior and overflow errors. A number smaller than 1 multiplied by another number smaller than 1 will never explode in size, as such numbers tend to shrink rather than grow. The opposite can quickly happen in a neural net with thousands of computations going on. Then, to obtain a proper color back from the range 0…1, you simply multiply it by 255. As an example: 0.35 × 255 = 89.25, the decimals are dropped, and you get 89.
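In Python, that normalization could look like this (a minimal sketch):

```python
def normalize(rgb):
    # 0..255 -> 0..1, keeping the multiplications small
    return tuple(c / 255 for c in rgb)

def denormalize(rgb):
    # 0..1 -> 0..255, dropping the decimals like in the example
    return tuple(int(c * 255) for c in rgb)

print(normalize((255, 130, 50)))      # (1.0, ~0.51, ~0.196)
print(denormalize((0.35, 0.5, 1.0)))  # (89, 127, 255)
```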

I use Python for my coding because it is relatively simple to use. But other languages are also simple if you are super fluent in them. In C++ for example you have to deal with a lot more fundamental stuff, like assigning memory to variables and making sure unused memory is freed again. Dealing with potentially millions of values, you really want to be sure you have no memory leaks, which would crash your software. Python takes care of all the fundamental things, so I can focus on my algorithms. Since I know C++, I try not to rely too much on the Python candy, so that I can more easily port my code to C++ once it’s done and achieve better performance.

Back to my colors: a complementary color has multiple definitions, and I use the simplest one, which is just “1 - color”. Again an example, as this is much simpler to explain that way: the complement of the RGB color (255,130,50) would be (0,125,205). It’s basically in reverse: 255-255=0; 255-130=125; 255-50=205. This simple task allows me to generate as many inputs and outputs as I like. I generate one, tweak the neural net until the output matches, and take another one. I repeat the whole process until it can match a few of these at once. The more input-output pairs there are, the longer it of course takes.
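As code, the pair generation might look like this (a sketch, assuming normalized colors):

```python
import random

def complement(rgb):
    # "1 - color" on normalized values, i.e. 255 - c on raw RGB.
    return tuple(1 - c for c in rgb)

def random_pair():
    # One training example: a random color and its complement.
    color = tuple(random.random() for _ in range(3))
    return color, complement(color)

color, target = random_pair()
print(color, target)
# Raw RGB check from the example above: (255,130,50) -> (0,125,205)
print(tuple(255 - c for c in (255, 130, 50)))
```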

Better DeepJeb Neural Net Results

As you can see in the video, the best accuracy I have reached so far is 66%. I test DeepJeb’s neural net 10,000 times with random colors and calculate the difference between its answers and the outputs it should have given. 66% sounds not that great, but that’s 2 out of 3 colors on average. I don’t think I launched 2 out of 3 rockets correctly in KSP across all the rockets I have launched in the past. This example might let you question the usefulness of such a neural net, and you are right. Answers that can be directly calculated like these are not where neural nets shine. Your calculator will always be faster than your own brain! The part where the calculator fails are complex problems where one cannot find an answer that easily. To control a rocket, for example, dozens of control theory engineers develop guidance computer software that will not fail under any circumstance. It is this painstaking engineering path that can be circumvented using neural nets. It quite frankly means you replace manual work with a lot of computation. A neural net can find the answer by trial and error in a virtual environment. If you choose the inputs and outputs correctly, you can afterwards throw it at reality and it will perform as well as, if not better than, manual solutions. What this in my opinion will lead to is a lot more and a lot better solutions in the long run, especially if you think about almost-impossible-to-program tasks like autonomous driving.
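One way such a test could be scored is the following sketch. Note that I’m reading “accuracy” here as 1 minus the average per-channel error; this is my illustration, not necessarily the exact metric from the video:

```python
import random

def accuracy(predict, trials=10000):
    # Feed the net random colors and compare its answers
    # against the true complements.
    total_error = 0.0
    for _ in range(trials):
        color = [random.random() for _ in range(3)]
        target = [1 - c for c in color]
        answer = predict(color)
        total_error += sum(abs(a - t) for a, t in zip(answer, target)) / 3
    return 1 - total_error / trials

# Sanity check: a perfect "net" scores 1.0.
print(accuracy(lambda c: [1 - x for x in c]))
```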

But of course, from here on out it will only get more intense, as I have to find a good interface between KSP and DeepJeb using the mod kRPC. The questions I ask myself in particular are: What values should be inputs and what values outputs? How can I let DeepJeb play KSP over and over without it taking decades to accomplish anything? At this point I have absolutely no clue if it is even possible on a PC, but I know that DeepJeb will at least manage to let rockets crash in all sorts of ways, ways I have never crashed before. Some of you may ask: Why have you chosen kRPC over kOS? The answer is quite simple: kOS takes control of the rocket ingame, whereas kRPC takes control of the game itself. DeepJeb must be able to quickload and try again and again without restarting itself and losing progress. I could let it write files and load files over and over, but doing this would only wear out my computer’s drives. So it’s best if it’s independent from the game. Without third-party mods I would have had to write a plugin myself to access ingame variables, similar to what kRPC does. So a big shoutout to djungelorm for creating this awesome mod and its excellent documentation. If you want to know more about the advancements in the creation of “A.I.” I can also highly recommend reading the papers published by OpenAI. It’s always nice to see that the problems hobbyists have are often also topics for professionals in one way or another.
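For the curious, the episode loop I have in mind could look roughly like this with kRPC’s Python client. This is untested and assumes a running kRPC server in KSP; the loop count and control values are placeholders:

```python
import krpc

conn = krpc.connect(name='DeepJeb')   # talk to the kRPC server in KSP
conn.space_center.quicksave()         # remember the launchpad state

for attempt in range(100):
    vessel = conn.space_center.active_vessel
    vessel.control.throttle = 1.0
    vessel.control.activate_next_stage()        # launch!
    # ... let DeepJeb steer and record how it went ...
    altitude = vessel.flight().mean_altitude    # one possible input value
    conn.space_center.quickload()               # rewind and try again
```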

I hope you liked this little logbook entry. I will of course keep you updated about my developments and I will also make a real video once there is more to share in terms of KSP. Numbers are great and all, but I’d really like to show some nice explosions in KSP as well!

*Lukas