This week my partner designed a first sample of the mobile robot, so I had to write a program for the circuit to demonstrate the robot's basic movements: left, right, forward, reverse, and stop.
The figure below shows the circuit design:
And this is the program for the circuit:
#include <16f84a.h>
#fuses HS,NOWDT,PUT
#use delay(clock=20000000)

void main()
{
   while(TRUE)
   {
      if(!input(pin_a0))          // switch on A0: turn left
      {
         output_b(0x68);
         delay_ms(3000);          // turn for 3 seconds
         output_b(0x6A);          // then resume moving forward
      }
      else if(!input(pin_a1))     // switch on A1: turn right
      {
         output_b(0x62);
         delay_ms(3000);
         output_b(0x6A);
      }
      else if(!input(pin_a2))     // switch on A2: forward
      {
         output_b(0x6A);
      }
      else if(!input(pin_a3))     // switch on A3: reverse
      {
         output_b(0x74);
      }
      else if(!input(pin_a4))     // switch on A4: stop
      {
         output_b(0x00);
      }
   }
}
In this program, I used an if-else chain to select among the input switches; the output is controlled by writing a data byte to the whole of port B.
Thursday, October 14, 2010
Saturday, October 9, 2010
Week 11 Multilayer Perceptron
This week I learned about the multilayer perceptron. Where a single-layer perceptron has only an input layer and an output layer, the multilayer perceptron has another layer in between the input and output layers: the hidden layer.
Figure 1 : Multilayer perceptron
The Multilayer Perceptron
The multilayer perceptron (MLP) is a hierarchical structure of several perceptrons, and overcomes the shortcomings of these single-layer networks.
The multilayer perceptron is an artificial neural network that learns nonlinear function mappings and is capable of learning a rich variety of nonlinear decision surfaces.
Nonlinear functions can be represented by multilayer perceptrons with units that use nonlinear activation functions. Multiple layers of cascaded linear units still produce only linear mappings.
Differentiable Activation Functions
The training algorithm for multilayer networks requires differentiable, continuous nonlinear activation functions. Such a function is the sigmoid, or logistic function:
o = σ( s ) = 1 / ( 1 + e^(−s) )
where s is the weighted sum s = Σ_{i=0..d} w_i x_i of the inputs x_i and the weights w_i.
The sigmoid is sometimes called a squashing function, as it maps a very large input domain to a small range of outputs.
Another nonlinear function often used in practice is the hyperbolic tangent:
o = tanh( s ) = ( e^s − e^(−s) ) / ( e^s + e^(−s) )
Sometimes the hyperbolic tangent is preferred, as it makes training a little easier.
Tuesday, October 5, 2010
Week 10 Neural Network
This week I learned about neural networks, another method of applying artificial intelligence to mobile robot programming.
Introduction
An artificial neural network (ANN), usually called "neural network" (NN), is a mathematical model or computational model that is inspired by the structure and/or functional aspects of biological neural networks. It consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase. Modern neural networks are non-linear statistical data modeling tools. They are usually used to model complex relationships between inputs and outputs or to find patterns in data.
How Does the Human Brain Learn?
Much is still unknown about how the brain trains itself to process information, so theories abound. In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites. The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches. At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons. When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon. Learning occurs by changing the effectiveness of the synapses so that the influence of one neuron on another changes.
A simple neuron
An artificial neuron is a device with many inputs and one output. The neuron has two modes of operation: the training mode and the using mode. In the training mode, the neuron can be trained to fire (or not) for particular input patterns. In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output. If the input pattern does not belong in the taught list of input patterns, the firing rule is used to determine whether to fire or not.
Figure 1. A simple neuron
A more complicated neuron
The previous neuron doesn't do anything that conventional computers don't do already. A more sophisticated neuron (figure 2) is the McCulloch and Pitts model (MCP). The difference from the previous model is that the inputs are 'weighted': the effect that each input has on the decision depends on the weight of that particular input. The weight of an input is a number which, when multiplied with the input, gives the weighted input. These weighted inputs are then added together, and if they exceed a pre-set threshold value, the neuron fires. In any other case the neuron does not fire.
Figure 2. An MCP neuron
In mathematical terms, the neuron fires if and only if:
X1W1 + X2W2 + X3W3 + ... > T
The addition of input weights and of the threshold makes this neuron a very flexible and powerful one. The MCP neuron has the ability to adapt to a particular situation by changing its weights and/or threshold. Various algorithms exist that cause the neuron to 'adapt'; the most used ones are the Delta rule and back-error propagation. The former is used in feed-forward networks and the latter in feedback networks.