Just wondered, are any developers working on a Neural Network extension?
Interesting idea, what would you use this object for?
Neural nets are for training computers to make decisions. So NNets have been used to make cars that drive themselves, robots that avoid walls, enemies that learn your moves, stuff like that.
It can also be used to process 'fuzzy logic' (things where there's no clear yes or no answer, like speech or text recognition). They're based on the way brains work, rather than IF/THEN statements.
You feed them a bunch of inputs, and you get a bunch of outputs. Like a human has his senses as inputs, and his physical movements as outputs.
For me, I'd be interested to see if I could teach it how to learn its way around a map and then pathfind.
I'm not aware of any neural net extension in development. This might be the closest you are gonna get.
I heard it through the grapevine that one may actually be in development, but that's as much as I can say.
It's far too early to even talk about availability, though.
It certainly could be a cool new direction.
considering computers are made up of physical gates, it would seem that anything like this would just be an emulation, using lots of if/then.
if you were doing it in a more specific context it would be possible; but it wouldn't require an extension.
i'll make you an example when i get home.
:cool:
It's definitely nothing like that.
Quote:
considering computers are made up of physical gates, it would seem that anything like this would just be an emulation, using lots of if/then.
Basically, if you assign a 'brain' to a character, give it a bunch of inputs (position, sensor states, positions of nearby creatures), and let its outputs control its movement, it will at first perform completely random nonsense...
Now clone this character.
From this point, every now and then you remove the instances which performed worst, and create a new generation of brains and characters based upon those which performed best. The new generation will perform the task slightly better. Repeat this process over and over again, and after a bunch of generations the characters will actually have found their own solutions and ways to perform the tasks.
The whole process can be automated. Set up the characters, give the brains inputs and outputs, and think of a bunch of simple tasks which you 'reward' the characters if they complete. Run the simulation and leave the computer on for a few days... When you get back, they have figured out by themselves how to perform the tasks.
The entire difference between this, and using lots of If/Else, is that you only program how it can move, what it can do, and what it can see. Where it actually moves, and what it actually does are things that it learns by itself...
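To make that concrete, here's a minimal sketch in Python of the evolve-the-brains loop described above. The 'brain' here is just a flat weight matrix, and the function names, mutation rate, and survival fraction are all my own invention; a real extension would be fancier, but the select-and-mutate idea is the same:

```python
import random

def make_brain(n_inputs, n_outputs):
    # A 'brain' is just a weight matrix: one weight per input-output pair.
    return [[random.uniform(-1, 1) for _ in range(n_inputs)]
            for _ in range(n_outputs)]

def think(brain, inputs):
    # Each output is a weighted sum of the inputs.
    return [sum(w * x for w, x in zip(row, inputs)) for row in brain]

def mutate(brain, rate=0.1):
    # Copy a brain with small random tweaks to each weight.
    return [[w + random.uniform(-rate, rate) for w in row] for row in brain]

def evolve(population, scores, keep=0.5):
    # Drop the worst performers, refill with mutated copies of the best.
    ranked = [b for _, b in sorted(zip(scores, population),
                                   key=lambda p: p[0], reverse=True)]
    survivors = ranked[:max(1, int(len(ranked) * keep))]
    children = [mutate(random.choice(survivors))
                for _ in range(len(population) - len(survivors))]
    return survivors + children
```

You'd score each character on the task (distance travelled, obstacles avoided, whatever you choose to reward), call `evolve` with those scores, and repeat for as many generations as you have patience for.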
All I know is that one object was in development, but it isn't any longer. I watched some examples of it though, and it indeed worked.
Quote:
it can also be used to process 'fuzzy logic' (things where there's no clear yes or no answer, like speech or text recognition). They're based on the way brains work, rather than IF/THEN statements.
i was merely pointing out that in the end computers are nothing but if/then gates.
Quote:
considering computers are made up of physical gates, it would seem that anything like this would just be an emulation, using lots of if/then.
the closest thing would be ADCs or DACs, but even they can be emulated with if/then.
this example might take a lil while... i'll keep you up to date.
The actual logic isn't gate based, it's value based. The values are recorded as binary numbers, obviously, but that's where the need for gates ends. It's based on the way the brain works, with neurons and patterns and stuff.
I don't really understand the maths of it, but the gist is thus:
The network receives a bunch of inputs, like this:
SPEED: 50px/s
OBSTACLE DETECTOR A: 1
OBSTACLE DETECTOR B: 0
ANGLE: 25
And it does some function on them. For argument's sake, we'll just add them together (so we get 76). I dunno what function they actually apply, but it basically combines all the inputs.
It then has a set of outputs, like:
ANGLE and
SPEED
So 76 gets sent to both outputs.
It bumps into an object, and your code senses this and says "Bad neural network!". So it tries to adjust the output by adding a weight to the neurons. So suddenly, the maths looks like this:
SPEED: 50px/s
DETECTOR A: 1
DETECTOR B: 0
ANGLE: 25
WEIGHT A: -27.5
Now the result is 48.5
If it now misses the obstacle, it thinks "Oo, I did that fairly well!" so its behaviour is reinforced (the weights become stronger). If it fails, it adjusts the weights again.
That's not how it ACTUALLY works (in other words, they don't just add up the inputs), but hopefully it shows how weights affect the output. Gradually the weights become more finely tuned.
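Here's that toy maths as a sketch (to repeat: real networks don't just add their inputs up, they use weighted sums and activation functions; this only shows how a weight shifts the output):

```python
def combine(inputs, weights):
    # Toy 'neuron' from the example above: just add everything together.
    # (Real networks apply weighted sums and activation functions instead.)
    return sum(inputs) + sum(weights)

# SPEED 50, DETECTOR A 1, DETECTOR B 0, ANGLE 25
inputs = [50, 1, 0, 25]

print(combine(inputs, []))       # 76 - sent to both outputs
# "Bad neural network!" - punish it with a corrective weight.
print(combine(inputs, [-27.5]))  # 48.5 - the adjusted output
```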
The idea is that it receives input, modifies the input by the weights it has learned, and returns a set of output values that can be directly applied to its behaviour (e.g. speed of the movement, angle, rotation, or even a state like 'attack' or 'flee').
So if a car has the following values:
FRONT SENSOR: 1 (OBSTACLE)
LEFT SENSOR: 1 (OBSTACLE)
RIGHT SENSOR: 1 (OBSTACLE)
REAR SENSOR: 0 (CLEAR)
Then its weights need to adjust the value to '-1' for both wheel motors, or literally 'Move backwards in a straight line'
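As a sketch, that sensor-to-motor mapping could look like this. The weight values here are made up for illustration; in a real net they'd be whatever the training process converged on:

```python
def motor_outputs(sensors, weights):
    # Each motor's value is a weighted sum of the sensor readings.
    return [sum(w * s for w, s in zip(row, sensors)) for row in weights]

# Sensors: [front, left, right, rear], 1 = obstacle, 0 = clear.
sensors = [1, 1, 1, 0]

# Hypothetical learned weights: a blocked front sensor pushes each wheel
# half-way into reverse, and each blocked side sensor adds another quarter.
weights = [
    [-0.5, -0.25, -0.25, 0.0],  # left wheel motor
    [-0.5, -0.25, -0.25, 0.0],  # right wheel motor
]

print(motor_outputs(sensors, weights))  # [-1.0, -1.0] -> straight back
```

With only the front sensor triggered, the same weights give [-0.5, -0.5], i.e. reverse at half speed, which is roughly the graded behaviour you'd want.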
computer values are, as you say, in binary, but binary values are emulated as well; they're simply sets of on or off states.
at the core of computers there is only if/then.
btw, this example is not working very realistically... I'll keep on it.