Just wondered, are any developers working on a Neural Network extension?
Interesting idea, what would you use this object for?
Neural nets are for training computers to make decisions. They have been used to make cars that drive themselves, robots that avoid walls, enemies that learn your moves, stuff like that.
It can also be used to process 'fuzzy logic' (things where there's no clear yes or no answer, like speech or text recognition). They're based on the way brains work, rather than IF/THEN statements.
You feed them a bunch of inputs, and you get a bunch of outputs. Like a human has his senses as inputs, and his physical movements as outputs.
For me, I'd be interested to see if I could teach it how to learn its way around a map and then pathfind.
I'm not aware of any neural net extension in development. This might be the closest you are gonna get.
I heard it through the grapevine that one may actually be in development, but that's as much as I can say.
It's certainly too early to even talk about availability.
It certainly could be a cool new direction.
Considering computers are made up of physical gates, it would seem that anything like this would just be an emulation, using lots of if/then.
If you were doing it in a more specific context it would be possible, but it wouldn't require an extension.
I'll make you an example when I get home.
:cool:
It's definitely nothing like that.
Quote:
Considering computers are made up of physical gates, it would seem that anything like this would just be an emulation, using lots of if/then.
Basically, if you assign a 'brain' to a character and give it a bunch of inputs (position, sensor states, positions of nearby creatures), then let its outputs control its movement, it will in the beginning perform completely random nonsense...
Now clone this character.
From this point, every now and then you remove the instances which performed worst, and create a new generation of brains and characters based on those which performed best. The new generation will perform the task slightly better. Repeat this process over and over, and after a number of generations the characters will actually have found their own solutions and ways to perform the tasks.
The whole process can be automated. Set up the characters, give the brains inputs and outputs, and think of a bunch of simple tasks which you 'reward' the characters if they complete. Run the simulation and leave the computer on for a few days... When you get back, they have figured out by themselves how to perform the tasks.
The entire difference between this, and using lots of If/Else, is that you only program how it can move, what it can do, and what it can see. Where it actually moves, and what it actually does are things that it learns by itself...
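That generational loop can be sketched in a few lines of Python. This is a minimal illustration of the idea described above, not the extension in question; the names, the mutation rate, and the toy fitness function are all assumptions made up for the example:

```python
import random

random.seed(0)

# A 'brain' is just a list of weights; higher fitness = better performance.
def evolve(population, fitness, mutation_rate=0.1):
    """One generation: drop the worst half, refill with mutated copies
    of the survivors. A minimal sketch, not a full genetic algorithm."""
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:len(ranked) // 2]
    children = [[w + random.gauss(0, mutation_rate) for w in parent]
                for parent in survivors]
    return survivors + children

# Toy task: the 'best' brain is the one whose weights sum closest to 1.
score = lambda brain: -abs(sum(brain) - 1.0)
population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
for generation in range(50):
    population = evolve(population, score)
```

After enough generations the surviving brains cluster around good solutions, even though nothing in the code says *how* to solve the task.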
All I know is that one object was in development, but it isn't any longer. I watched some examples of it though, and it indeed worked.
Quote:
It can also be used to process 'fuzzy logic' (things where there's no clear yes or no answer, like speech or text recognition). They're based on the way brains work, rather than IF/THEN statements.
I was merely pointing out that in the end computers are nothing but if/then gates.
Quote:
considering computers are made up of physical gates, it would seem that anything like this would just be an emulation, using lots of if/then.
The closest thing would be ADCs or DACs, but even they can be emulated with if/then.
This example might take a little while... I'll keep you up to date.
The actual logic isn't gate based, it's value based. The values are recorded as binary numbers, obviously, but that's where the need for gates ends. It's based on the way the brain works, with neurons and patterns and stuff.
I don't really understand the maths of it, but the gist is thus:
The network receives a bunch of inputs, like this:
SPEED: 50px/s
OBSTACLE DETECTOR A: 1
OBSTACLE DETECTOR B: 0
ANGLE: 25
And it does some function on them. For argument's sake, we'll just add them together (so we get 76). I dunno what function they actually apply, but it basically combines all the inputs.
It then has a set of outputs, like:
ANGLE and
SPEED
So 76 gets sent to both outputs.
It bumps into an object, and your code senses this and says "Bad neural network!". So it tries to adjust the output by adding a weight to the neurons. So suddenly, the maths looks like this:
SPEED: 50px/s
DETECTOR A: 1
DETECTOR B: 0
ANGLE: 25
WEIGHT A: -27.5
Now the result is 48.5
If it now misses the obstacle, it thinks "Oo, I did that fairly well!" so its behaviour is reinforced (the weights become stronger). If it fails, it adjusts the weights again.
That's not how it ACTUALLY works (in other words, they don't just add up the inputs), but hopefully it shows how weights affect the output. Gradually the weights become more finely tuned.
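In a real network each input gets its own weight rather than one shared correction, but the weighted-sum idea above can be sketched in a few lines of Python. The function name and the particular weight values are made up for illustration; the numbers are chosen to reproduce the 76 and 48.5 from the example:

```python
def neuron(inputs, weights, bias=0.0):
    """One neuron: a weighted sum of its inputs plus a bias.
    Real networks also squash this through an activation function."""
    return sum(i * w for i, w in zip(inputs, weights)) + bias

# The inputs from the example: speed, detector A, detector B, angle.
inputs = [50, 1, 0, 25]

# With every weight at 1 this is the plain sum from the example: 76.
print(neuron(inputs, [1, 1, 1, 1]))

# Adjusting the weights changes the output without touching the inputs.
print(neuron(inputs, [0.5, 1, 1, 0.9]))  # 48.5
```

Training is then just the process of nudging those weights until the outputs stop bumping the car into things.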
The idea is that it receives input, modifies the input by the weights it has learned, and returns a set of output values that can be directly applied to its behaviour (e.g. speed of the movement, angle, rotation, or even a state like 'attack' or 'flee').
So if a car has the following values:
FRONT SENSOR: 1 (OBSTACLE)
LEFT SENSOR: 1 (OBSTACLE)
RIGHT SENSOR: 1 (OBSTACLE)
REAR SENSOR: 0 (CLEAR)
Then its weights need to adjust the value to '-1' for both wheel motors, or literally 'Move backwards in a straight line'
Computer values are, as you say, in binary, but binary values are emulated as well; they are simply sets of on or off states.
At the core of computers there is only if/then.
By the way, this example is not working very realistically... I'll keep at it.
I think you are confused; here is a good tutorial explaining NNs in more detail: Neural Network Tutorial
The main thing you are assuming is that they work just using simple if/else statements. They really work using interconnected nodes (called neurons) that trigger each other by modifying the weight of each input neuron and comparing it to a threshold value. By adjusting the weight of each node's input we can change the behavior of the system.
These would be hard, if not impossible, to simulate in MMF 2, and definitely not easy to do through events.
I would like to see such an extension, but it would take a great deal of competency to use correctly, and it is not for the average Joe user.
The hard part about using NNs is figuring out how to reduce your problem into inputs and outputs, and figuring out which outputs are better than others so that you can train your NN to actually solve the problem.
The main thing I dislike about NNs is that they are a black-box solution: once your program converges on a solution, you have no idea how the solution really works, you just know that it does... It is like having an unlabeled machine that turns lead into gold. You can feed it lead all day long and get gold, but you have no clue how it works, and if it breaks you're going to have an extremely hard time getting it working again.
To make matters worse, the machine may be horribly inefficient. It might be that the process it is using is very wasteful and does a lot of redundant things, like taking the lead, turning it into gold, back into lead, into copper, and then into gold. So if you want to simplify your machine, you are now also stuck :\.
I think what SEELE is saying is that this can be done directly in MMF. He is of course right here, and that is why he was talking about logic gates and such. (Actually you need a little bit more than If/Then statements. For instance, you need to be able to do something which represents adding 1. This is of course possible in MMF).
So what he is saying, I think, is that a NN can be made in MMF. I am not sure why anybody is arguing with that.
In the end that's all the human brain is actually doing: if/then, based on five senses of input. I think what he's saying is that how the input is processed emulates the grey area we call "fuzzy logic", where no specific if/then is the correct output, or listed in a priority list of outputs. Rather, it has a set of output processes based on successful and failed outputs from previous encounters with similar input data, expanding with each new encounter. That's exactly what our brain does. When we encounter a new experience, we attempt to make the best choice based on previous similar experience, and we also use core primal instincts that drive emotions such as fear. To be even more realistic, we all make bad choices as well, which we hopefully learn from; so should the AI.
Either way, this sounds like a complex endeavor.
J. Weierheiser
I never said that it is completely impossible to make an NN in MMF 2, just extremely hard. MMF 2 is not well suited to the task of making interconnected neurons at all. Your best bet would be to use Lua and even then it is going to be a complicated task.
My main disagreement is the statement that there is no need for an extension to do this.
While it "might" be technically feasible to make this in MMF without an extension, it is far from practical to do so. And if anyone thinks that I am wrong here, they are welcome to prove to me that this can be done natively, in a full-featured way, through pure events. That means I would like to see a typical network (feedforward, etc.) that has at least 3 layers and can be used to process a common NN problem such as the XOR problem: The XOR Problem. Sure it can be done, but doing so will take a good number of events and it won't be very easy to reuse at all ;) .
Quote:
If you were doing it in a more specific context it would be possible, but it wouldn't require an extension.
Good luck :) .
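For reference, the XOR problem mentioned above needs only a tiny feedforward network with one hidden layer. Here is a hand-weighted Python sketch (the weights are chosen by hand to make the point, not learned by training) showing why the single layer fails but two layers succeed: no single threshold can separate XOR's outputs, but a hidden layer computing OR and AND can:

```python
def step(x):
    """Threshold activation: fires (1) when the input reaches 0."""
    return 1 if x >= 0 else 0

def xor_net(a, b):
    """A 2-2-1 feedforward network computing XOR.
    Hidden neuron h1 fires for (a OR b), h2 fires for (a AND b);
    the output fires for h1 AND NOT h2, which is exactly XOR."""
    h1 = step(a + b - 0.5)      # a OR b
    h2 = step(a + b - 1.5)      # a AND b
    return step(h1 - h2 - 0.5)  # h1 AND NOT h2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))
```

A real NN extension would find weights like these by backpropagation instead of having them written in; this only shows the structure such a network would need.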
I was really rather hoping someone could use a freeware Neural Net library and build an extension to work as a frontend?
Sounds like it would be used a lot. So if this extension were actually made, could you give AI to a platform movement, and how many things could you teach it?
Yeah you would just make the output of the NN control the velocity of the character. The hard part would be determining the inputs, but you could use a Genetic Algorithm to determine the most fit "population". Of course this is really a HORRIBLE idea as a finite state machine is a much more apt mechanism to use for a platform AI :P .
I found the most effective way to make the AIs 'work' is to give one or more of their inputs their distance from something... they quickly learned to avoid each other and get the apple (in my example)... but it could simply be written to work like that.
I don't see the point in this huge workaround called 'NN'.
Neural Nets are only useful in cases where you can't describe the problem with simple rules.
They can be trained to be awesomely good at object recognition for example.
Dines, if you give me a scenario I'm sure I can make you an example.
Predicting the stock market!
(My first introduction to NN was a PhD piece on predicting the stock market using Neural Networks from Stanford university. Obviously it wasn't perfect but if you think you can do better I am happy to profit from it for you.. :P)
Ok Seele, I'm not certain this would work with NN, that's why I need an object to try it with.
A top-down maze is created, with one NPC character in it.
You choose a start point and an end point, and the NPC should ideally walk the path from one to the other.
The NPC has the following inputs and outputs:
I. X Start Position
I. Y Start Position
I. X End Position
I. Y End Position
I. Ticker
O. X Pos
O. Y Pos
To start with, you train it using back propagation. You set a start pos and end pos, then draw a line manually from start point to end point. As you draw, the coordinates are being sent to the AI as example output, along with a steadily increasing ticker value (so the ticker is just a counter going 1, 2, 3, 4, etc).
Ideally when it's fully trained, you should be able to set it a start and end point and then slowly increase the ticker value. As it increases, the object traces the path.
Once you have a few examples of the ideal output, you run a second training phase where many copies are generated and tested, using random coordinates. AIs are rated in fitness based on how many times they hit the wall, whether they reached their target in the time allotted, and how long they took.
Pathfinding using A* is a better way of finding a way around a maze, but that's not really the point here. It's about seeing whether a computer can build an appreciation for space.
I think somehow you'd need some inputs relating to the maze (e.g. DistanceToWall Left/Right/Forward/Backward), and maybe some memory loops (outputs that feed inputs).
The Outputs should also be deltas, not positions.
I have a scenario that I think would be very easy to set up, and although it could easily be programmed, using NN would probably result in more unpredictable and interesting results.
Basically, create a bunch of tanks.
Input 1: Angle to closest enemy
Input 2: Distance to closest enemy
(and I suppose you could give it some more inputs related to the angle and distance to the second and third closest enemies)
Input 3-6: The states of four detectors (should tell in a very simplified way what the nearby scenery is like).
Output 1: Tank Rotation Speed
Output 2: Tank Speed
Output 3: Turret Rotation Speed
Output 4: Shoot trigger
Of course, there will have to be a limitation, so the Shoot trigger only works like once every second.
Now, create a bunch of tanks and let them shoot at each other. Reward the tank which hits another tank, and every now and then remove all the ones which perform worst, replacing them with copies of the best-performing tank.
You should use a random tank from the older ones still alive, not the "best", otherwise you risk missing out on "better" tanks that were unlucky enough to be hit by another one.
My last neural net experiment (written in Jamagic no less, but I can't find it on the old forums, so I don't know if I posted it) made the mistake of only spawning new critters based on the "best ever", which meant it got stuck in a rut quite easily. There can quite easily be a hill of "not as good" between the best so far and the best possible.
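That selection scheme (random survivor as parent, rather than the single best ever) can be sketched like this in Python. The names and the toy fitness function are made up for illustration; in practice fitness would be whatever reward you actually measure, such as hits scored:

```python
import random

def next_generation(population, fitness, keep=0.5):
    """Drop the worst performers, then refill by mutating parents drawn
    at random from the survivors -- not always the single best ever --
    so the population keeps enough diversity to climb out of ruts."""
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:max(1, int(len(ranked) * keep))]
    next_pop = list(survivors)
    while len(next_pop) < len(population):
        parent = random.choice(survivors)  # a random survivor, not the champion
        next_pop.append([w + random.gauss(0, 0.1) for w in parent])
    return next_pop

# Toy usage: 10 tank 'brains' of 6 weights each; fitness is a stand-in
# for the reward you would actually measure in the simulation.
random.seed(1)
tanks = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(10)]
fitness = lambda brain: sum(brain)
tanks = next_generation(tanks, fitness)
```

Because every survivor has an equal chance of breeding, a "not quite best" brain that was simply unlucky still gets to pass its weights on, which is exactly the fix being suggested above.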
Dynasoft: Yeah, I guess you're right about that :)
I do think that using NN in my example is a good idea. I'd really love to try it actually!
I wonder if a NN extension could be built around one of these projects...?
Hmm... I just noticed this thread now. I'd just like to say that I am currently considering 2 neural network extension ideas. I have the networks running already, but I lack the time to make them into extensions. Maybe if one of the extension developers can spare some time to help me, I can get something out soon.