I am still trying to write an AI.

The first part is here: Making an AI: Part 1.

Today, I actually wrote some code. I did it all on my own (which probably wasn’t the best idea).

Here, I’ll try to explain the idea behind the part that I wrote. If you are interested, I’ll explain my code bit by bit after.

Idea explanation:

An AI tries to approximate a function. You give it a lot of inputs, and it tries to produce the output of the function as accurately as it can. Usually, this is done with neural networks. In a neural network, there are a lot of layers. The first is the input layer, where you give it some inputs. You can send anything in as long as it can be represented by numbers. Then, each input node is connected to other, non-input nodes. When the inputs change, those nodes also change. The inputs' effect on the other nodes is scaled (multiplied) by a weight, and then a bias is added to it.

If you make the nodes depend only on their input nodes (scaled and biased), you will only be able to approximate a linear function. Because of this, you have to pass each node's value through an activation function, to make the network able to approximate curvy (non-linear) functions.
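
Here's a tiny sketch of what one node does, with made-up numbers (this isn't from my actual code, just to show the idea):

import numpy as np

x = np.array([0.5, 0.2, 0.9])   # made-up values coming from the previous layer
w = np.array([0.1, -0.4, 0.7])  # one weight per incoming connection
b = 0.3                         # the bias

# scale (multiply) each input by its weight, add them up, then add the bias
raw = x.dot(w) + b

# squash the result with an activation function (sigmoid here)
node_value = 1 / (1 + np.exp(-raw))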

Neural networks don't only have to approximate one function. They can give lots of outputs (approximate lots of functions at once), and the output doesn't even need to be 1D! Neural networks can also decide things, if you just find which output is the biggest and do the action that corresponds with that output. They can also do something in real-time, like playing a video game.

I'm trying to make something called deep learning. I am also making it so that the input nodes affect only, and all of, the nodes in the next layer. That layer then affects only, and all of, the nodes in the layer after it. It keeps going like that until you reach the end of the neural network, where the output nodes are.

There are also many different types of machine learning. Supervised learning is when the AI tries to copy something (known correct answers) to the best of its ability. There are also evolutionary approaches, where the AI copies real-world natural selection and mutation. Reinforcement learning is where the AI gains or loses rewards for doing things.

Code explanation:

Initializing things

import numpy as np

layer_nodes = [3, 2, 5, 2]

colors = np.array([[1, 1, 0], [1, 0, 1], [1, 0.91, 0], [0.522, 0.388, 1]])

answers = [1, 0, 1, 0]

The import imports a library. numpy is a library that makes it easier and quicker to do advanced math, like working with really big nested lists (arrays) and linear algebra.

Here, layer_nodes represents the number of nodes (probably not formal language) in each layer of the neural network. The middle numbers are arbitrary for now; the first has to be 3 because each color has 3 input values (R, G, B), and the last is 2 because there are 2 possible answers (yellow or purple).

The colors array represents the inputs to test the AI. Once you run them through the AI, it will give the answers it thinks are right. Right now, I'm trying to get the AI to detect if the RGB color is more yellow or purple.

Then, there’s a list for the actual answers. Later, we can compare this to the AI’s current response and then modify the AI’s brain to make it give more accurate answers. The 1 signifies yellow, while the 0 signifies purple.
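
I haven't written the comparing part yet, but the rough idea is something like this (just a sketch with made-up guesses, pretending the AI gives one number per color):

guesses = np.array([0.7, 0.4, 0.6, 0.2])  # made-up outputs, one per color

# the further a guess is from the real answer, the bigger the error
errors = (np.array(answers) - guesses) ** 2
total_error = errors.sum()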

Creating weights and biases

def create_random_weights_biases(feature_dimensions):
  # List of weight matrices. Each matrix_i contains weights for layer_i,
  # and is of shape (dim_(i - 1), dim_i).
  # There are dim_i different nodes,
  # and each of them needs to have dim_(i - 1) weights
  # to connect to all of the previous layer's nodes.
  ws = []
  # List of bias vectors. Each bias vector is of shape (dim_i,).
  bs = []

  for layer in range(1, len(feature_dimensions)):
    w = np.random.random((feature_dimensions[layer - 1],
                          feature_dimensions[layer]))
    b = np.random.random(feature_dimensions[layer])
    ws.append(w * 2 - 1)
    bs.append(b * 2 - 1)
  return ws, bs

First, this code creates empty lists for the weights and biases. Then, it loops through however many layers there are, minus one (it doesn't loop through the first layer). In each of those iterations, it creates a matrix of dimension "last feature dimension by current feature dimension". This makes it so that each node (there are feature_dimensions[layer] of them) has feature_dimensions[layer - 1] weights associated with it, each of which changes the effect of one of the nodes from the last layer.

You might be wondering how that works, as there isn't a clear line that makes all the sublists and sub-sublists in the matrix. The line that actually does that is np.random.random((parameter1, parameter2)). It makes a numpy matrix with dimensions parameter1 by parameter2. Each value inside the sublists is a random value from 0 (inclusive) to 1 (exclusive).
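
For example (not part of my code, just to show what it produces):

example = np.random.random((2, 3))
# example is now something like:
# [[0.42, 0.91, 0.08],
#  [0.77, 0.33, 0.65]]
# 2 sublists, each with 3 random values between 0 and 1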

This part also makes random bias values.

Then, I multiply both arrays by 2 and subtract 1, so the values end up between -1 and 1.

The next part just actually moves the random weight and bias values into global lists, because the lists that we made in the function only exist inside the function.

weights, biases = create_random_weights_biases(layer_nodes)
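
If you print out the shapes, you can check that everything lines up with layer_nodes = [3, 2, 5, 2] (just a sanity check, not part of the main code):

for i in range(len(weights)):
  print(weights[i].shape, biases[i].shape)
# prints (3, 2) (2,), then (2, 5) (5,), then (5, 2) (2,)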

Evaluating the output

This puts the input through the function to see what it actually thinks.

def test_result(test_inputs, layers_list, test_weights, test_biases):
  prev = test_inputs
  # push the inputs through each layer: weights, biases, then activation
  for i in range(len(layers_list) - 1):
    prev = sigmoid_neg(prev.dot(test_weights[i]) + test_biases[i])
  return prev

The test_result function works like this: first, it takes the inputs and puts them in the variable (technically, an array), prev. Then, it takes the dot product of prev and the weights, which basically multiplies all of the inputs by their weights and adds them up for each node. Then, you add the biases.
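
To see just one of those steps on its own (again, only an example to check the shapes):

first_step = colors.dot(weights[0]) + biases[0]
# colors is 4 by 3 (4 colors, 3 input nodes) and weights[0] is 3 by 2,
# so the dot product is 4 by 2: one value per color, per node in the next layer.
# Adding biases[0] (2 values) adds the same bias to every color's row.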

Finally, you need an activation function, and for this one, I used the sigmoid function. Basically, it just smoothly remaps everything to between 0 and 1. Nowadays, people often use ReLU, which is just max(0, number), but I'm kind of scared, because it seems like ReLU just allows your outputs to be as big as they want to be.
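
The sigmoid_neg function isn't in the snippet above. A minimal version that just does the regular sigmoid (squashing everything to between 0 and 1) would look like this:

def sigmoid_neg(x):
  # squashes any number to between 0 and 1 (works on whole numpy arrays too)
  return 1 / (1 + np.exp(-x))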

Then, I make prev the list of outputs, which become the inputs for the next iteration. Once all of the iterations finish, I return prev, which is the final output.

Interpreting the outputs

result = test_result(colors, layer_nodes, weights, biases)
print(result)

for answer in range(len(result)):
  print("answer to #" + str(answer + 1) + ": " +
        str(np.where(result[answer] == max(result[answer]))[0][0] + 1))

This makes it easier for humans to understand what the output (a big, chunky 2-dimensional list) actually means.

First, I actually call the function to evaluate the neural network. Then, I print the result. This is just for me to check, in case I don't trust the last part. The last part is where I see what the neural network actually decides on: I find which output node has the biggest value and print its number (1 or 2).
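
(Side note: numpy has a shortcut for "find the position of the biggest value", so this line would do the same thing as the np.where one:)

print("answer to #" + str(answer + 1) + ": " + str(np.argmax(result[answer]) + 1))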

To do:

(literally the hardest parts)

  • Learn backpropagation (what makes the neural network learn)
  • Code backpropagation