
Artificial Intelligence is used for more than just games, and there is a lot that can be learned by understanding its different applications.

Motivation

We covered the broad lines of Artificial Intelligence in the first part; we shall now look at some applications of AI in technology, but also in everyday objects.

Neural Network

To begin with, our knowledge is organized like a gigantic, well-structured cognitive net. Each node is connected, directly or indirectly, to others, getting information from its closest neighbours in order to derive new information of its own.

The structure is based on three levels: abstraction, semantics and time.

Abstraction is the way we perceive everything. For the same object, my wife may tell me to move the car, the mechanic calls it a Ford Focus 1.6l 2005, the police officer asks me to move my vehicle, and the statistician calls it a mode of transportation. For one object, there are four ways to name it, depending on the level of abstraction. The choice is made so as to optimize the communication: say the least to convey the most. A concept is more abstract if it encompasses another; here, a mode of transportation is more abstract than a Ford Focus 1.6l 2005.

Semantics describes the relations between objects: what an object is, which group it belongs to, and what actions it can be used to perform. The same object name can also mean something entirely different in another context.

Components are found inside your computer. Your computer can be used for communication, work, or entertainment. Your computer can be a desktop or a laptop. Your laptop might be considered for work exclusively. Already we have started to define a network of relations that will create a neural network. For another person, the network might not link the computer to work, but only to communication and entertainment.

Time is considered here as experience. It is our past experiences that model our knowledge of the environment. We learn from our mistakes or successes and we try to avoid or reproduce them when a similar situation comes up again. Putting your hand in the fire creates a reaction that our brain records and keeps in memory. Next time we see fire, this recording plays back in our mind and tells us that it is hot and dangerous. We can then evaluate the situation and draw the appropriate conclusion by avoiding the fire.

Biological neuron:

By the end of the 19th century, Alexander Bain and William James had come up with the theory that thoughts and body activities result from interactions among neurons within the brain.

Neuron (source of the diagram: NIBS Pte Ltd.)

The human brain is made up of billions of cells called neurons, each composed of a nucleus, dendrites and an axon that form synapses with other cells. Neurons vary in size and shape and can be connected to many other neurons. Together, they perform the computation required for intelligence. This network is what researchers aim to reproduce and simulate artificially.

At an individual level, a neuron has very little intelligence, in the sense that it operates by a simple set of rules, conducting electric signals through its network. However, the combined network of all these neurons creates intelligent behavior that is unrivaled and unsurpassed. So how does the neuron work? It is an input/output unit that can be excited or not excited.

Other neurons send signals that are summed up and compared against a threshold in order to determine if the neuron shall be excited (“fire”). After firing, the neuron needs to rest (usually a few milliseconds) and get prepared for a new firing. This period is called its refractory period.


Some researchers think that certain cells are dedicated to specific functions; for example, some cells respond when shown a picture of a grandmother, which led to the idea that there could be "grandmother" cells responding only to that picture.

Others think that each cell is predisposed to be used, to a certain degree, for several functions in a more cooperative way: one cell can be used for particular memories and also, at a lower level, for calculation, while another that was once set aside for memories is now used fully for maths, and so on. This is not entirely proven yet.

Artificial neuron:

The artificial neuron basically aims at mimicking the structure and functioning of the biological neuron in order to create intelligent behavior. Researchers are attempting to build a silicon-based electronic network that is modeled on the working and form of the human brain. Our brain is a network of billions of neurons, each connected to others.

Researchers created networks of electronic analogues of the neuron, based on Boolean logic. Memory was recognized to be a pattern of electrical signals in a closed neural network.

Incoming messages are received, weighted (x1*w1, x2*w2, ...) and summed, and the sum is compared against a threshold: if the result >= threshold, the output is 1; otherwise the output is -1.

Perceptron (http://www.cs.trincoll.edu)
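As a concrete illustration of this weighted-sum-and-threshold behaviour, here is a minimal sketch of a single artificial neuron (a perceptron) in Python. The inputs, weights and threshold values are made up for illustration; they are not taken from the diagram or from the text.

    # A minimal perceptron: weight the inputs, sum them, compare with a threshold.
    # The weights and threshold below are illustrative values only.

    def perceptron(inputs, weights, threshold):
        total = sum(x * w for x, w in zip(inputs, weights))   # x1*w1 + x2*w2 + ...
        return 1 if total >= threshold else -1                # fire (1) or not (-1)

    print(perceptron([1, 0], weights=[0.6, 0.4], threshold=0.5))   # -> 1 (fires)
    print(perceptron([0, 1], weights=[0.6, 0.4], threshold=0.5))   # -> -1 (does not fire)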

Artificial Neural Network (ANN):

Now we have a working neuron that can receive a signal, weight it, compare it with a threshold and fire an output or not. But, just like a biological neuron, it cannot do much on its own; it needs to be placed in a network. When each neuron performs a calculation that contributes to a final conclusion, artificial thinking can be created. It is the sum of all the binary 1's and -1's that gives the result.

The human brain learns to realize patterns and remembers them. Similarly, ANN has the ability to learn patterns and remember. This approach has its limitations due to the scale and complexity of developing an exact replica of a human brain, as the neurons number in billions! Currently, through simulation techniques, people create virtual neural networks. This approach has not been able to achieve the ultimate goal but there is a very positive progress in the field. The progress in the development of parallel computing will aid it in the future.

When creating a neural network, the neurons are usually arranged in a feed-forward fashion: signals move in one direction, from the inputs through the network to the outputs, and the system does not adjust the data but adjusts itself (its weights) to the data. In a feedback arrangement, by contrast, the output is adjusted and sent back into the loop as an input, according to the system settings. After a few passes, the neurons are able to recognize patterns of information.

Two phases are required:

  • The training phase.
  • The recognition phase.

The first phase is used to adjust the weights so that a particular pattern produces a particular output, starting from arbitrary settings and changing the values along the way.

The second phase comes once the needed outputs are reached: the weights are fixed, the pattern is sent to the network again, and it recalls the output.

Principle of I/O (a), Feed-Forward (b) and Feedback (c) (http://en.wikipedia.org/wiki/Feed-forward)
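As a rough illustration of the two phases, the sketch below trains the perceptron from the earlier example with the classic perceptron learning rule on the logical AND pattern (training phase), then replays the patterns with the weights frozen (recognition phase). The learning rate, starting weights and number of passes are arbitrary choices, not values from the text.

    # Training phase: adjust weights so each input pattern produces the desired output.
    # Recognition phase: keep the weights fixed and recall the outputs.
    import random

    patterns = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]   # logical AND
    weights = [random.uniform(-1, 1), random.uniform(-1, 1)]             # arbitrary start
    bias, rate = 0.0, 0.1

    def fire(x):
        s = sum(xi * wi for xi, wi in zip(x, weights)) + bias
        return 1 if s >= 0 else -1

    for _ in range(100):                      # training phase: adjust weights on errors
        for x, target in patterns:
            error = target - fire(x)          # 0 when correct, +/-2 when wrong
            weights = [w + rate * error * xi for w, xi in zip(weights, x)]
            bias += rate * error

    for x, target in patterns:                # recognition phase: weights are now fixed
        print(x, "->", fire(x), "expected", target)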

Use of ANN:

A company plans to suggest a product to potential customers. A database of 1 million customer records exists; 20,000 purchases (a 2% response) is the goal. Instead of contacting all 1,000,000 customers, only 100,000 are contacted. The response to the sales offers is then known, so this subset is used to train a neural network to tell which customers decide to buy the product. Then the remaining 900,000 customers are presented to the network, which classifies 32,000 of them as buyers. These 32,000 customers are contacted, and the 2% response is achieved. Total savings: $2,500,000.

(This example is taken from http://ulcar.uml.edu/ )
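The source does not state the cost of contacting one customer, but we can do the back-of-envelope arithmetic behind the quoted saving, under the assumption that the saving comes purely from the contacts that were avoided, at a fixed cost per contact.

    # Back-of-envelope check of the direct-marketing example (assumed fixed cost per contact).
    total_customers   = 1_000_000
    training_contacts = 100_000          # customers contacted to train the network
    predicted_buyers  = 32_000           # customers classified as buyers and contacted
    contacted         = training_contacts + predicted_buyers   # 132,000
    avoided           = total_customers - contacted            # 868,000 contacts saved
    stated_savings    = 2_500_000        # dollars, as quoted in the source
    print("implied cost per contact: $%.2f" % (stated_savings / avoided))  # about $2.88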

ANN can also be used for memory allocation. When storing data on a hard drive or in RAM, the computer allocates it more or less arbitrarily, using the first free location found. Depending on the type of information, an ANN will store, retrieve and modify it based on the data itself, and can even reconstruct damaged data. This is called content-addressable (associative) memory and error-correcting (partial-contents) memory.
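One classical way to realize content-addressable, error-correcting memory with a neural network is a Hopfield network. The sketch below stores one binary pattern, corrupts two bits, and recovers the original from the damaged copy; it is a generic illustration, not a description of how any particular operating system manages memory.

    # Tiny Hopfield-style associative memory: store a +1/-1 pattern via Hebbian
    # weights, then recover it from a corrupted copy (content-addressable recall).

    def train(pattern):
        n = len(pattern)
        # Hebbian outer product, no self-connections.
        return [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
                for i in range(n)]

    def recall(weights, state, steps=5):
        n = len(state)
        state = list(state)
        for _ in range(steps):
            for i in range(n):
                s = sum(weights[i][j] * state[j] for j in range(n))
                state[i] = 1 if s >= 0 else -1
        return state

    stored = [1, -1, 1, 1, -1, -1, 1, -1]
    W = train(stored)
    damaged = list(stored)
    damaged[0] = -damaged[0]              # flip two bits to simulate damage
    damaged[3] = -damaged[3]
    print(recall(W, damaged) == stored)   # -> True: the original pattern is restored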

Expert System

Expert system technology is the first truly commercial application of the research and development work carried out in the AI field.

The goal of an expert system is to solve problems by applying knowledge that has been garnered from one or more experts in a field. But these experts are generally not programmers and they probably cannot express their results in a programming language. Expert systems provide a representation that makes the link between the expert and the computer program (hence the name expert system) so that the two can work together. Expert systems can be applied to many fields such as finance, heavy industry, space, aviation, weather forecasting and many more. Building one typically involves:

  1. the creation of a knowledge base, which uses some knowledge representation structure to capture the knowledge of the Subject Matter Expert (SME);
  2. a process of gathering that knowledge from the SME and codifying it according to the structure, which is called knowledge engineering; and
  3. once the system is developed, placing it in the same real-world problem-solving situation as the human SME, typically as an aid to human workers or as a supplement to some information system. Expert systems may or may not have learning components.

We will now get a closer look at two examples, one in medicine and one in banking.

MYCIN:

In the 1960s, Dendral (written in Lisp) was developed at Stanford University by Edward Feigenbaum. Its primary aim was to study hypothesis formation and discovery in science. For that, a specific task in science was chosen: helping organic chemists to identify unknown organic molecules by analyzing their mass spectra and using knowledge of chemistry. The project began in 1965 and spans approximately half the history of AI research.

In 1974, Dr Edward Shortliffe wrote MYCIN, a derivation of Dendral. It was designed to prescribe antibiotic therapy for bacterial blood infections, and when completed it was judged to perform this task as well as experts in the field. Still, MYCIN was never used in practice as it raised an ethical and legal issue (who is responsible in case of a mistake, the software or the doctor?).

The following is an example of one of MYCIN'S rules:

Rule 52:

If

  1. THE SITE OF THE CULTURE IS BLOOD
  2. THE GRAM OF THE ORGANISM IS NEG
  3. THE MORPHOLOGY OF THE ORGANISM IS ROD
  4. THE BURN OF THE PATIENT IS SERIOUS

Then there is weakly suggestive evidence (0.4) that THE IDENTITY OF THE ORGANISM IS PSEUDOMONAS
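Rule 52 could be represented in a program as plain data, for instance as a list of premises plus a conclusion and a certainty factor. The Python structure below is only a hypothetical encoding for illustration; MYCIN itself was written in Lisp and used its own representation.

    # Hypothetical encoding of MYCIN's Rule 52 as data: (attribute, value) premises.
    rule_52 = {
        "premises": [
            ("site-of-culture", "blood"),
            ("gram-stain", "negative"),
            ("morphology", "rod"),
            ("burn", "serious"),
        ],
        "conclusion": ("identity-of-organism", "pseudomonas"),
        "cf": 0.4,   # "weakly suggestive evidence"
    }

    def applies(rule, facts):
        # The rule fires only if every premise is among the known facts.
        return all(p in facts for p in rule["premises"])

    facts = {("site-of-culture", "blood"), ("gram-stain", "negative"),
             ("morphology", "rod"), ("burn", "serious")}
    if applies(rule_52, facts):
        print(rule_52["conclusion"], "with certainty", rule_52["cf"])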

Expert systems evolve as the knowledge in the field grows; MYCIN led to the development of the EMYCIN expert system shell. EMYCIN is a backward-chaining rule interpreter.

Here is how backward chaining works:

IF X is green THEN X is a frog. (Confidence Factor: +1%)

IF X is NOT green THEN X is NOT a frog. (Confidence Factor: +99%)

IF X is a frog THEN X hops. (Confidence Factor: +50%)

IF X is NOT a frog THEN X does NOT hop. (Confidence Factor +50%)

If we had a second clause in the condition (IF X is green AND small THEN ...), we would have what is called an inference rule with a compound premise. One advantage is that we get a way of reasoning closer to the human one, and it increases the confidence factor.
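A minimal backward chainer works from the goal back to the evidence: to establish a goal, it looks for rules whose conclusion matches the goal and then tries to establish each of their premises in turn. The Python sketch below uses the frog rules above, leaving out the confidence factors, purely as an illustration of the control flow.

    # Minimal backward chaining: prove a goal by recursively proving rule premises.
    rules = [
        {"if": ["X is green"], "then": "X is a frog"},
        {"if": ["X is a frog"], "then": "X hops"},
    ]
    facts = {"X is green"}

    def prove(goal):
        if goal in facts:
            return True
        # Try every rule that concludes the goal and prove all of its premises.
        return any(all(prove(p) for p in r["if"])
                   for r in rules if r["then"] == goal)

    print(prove("X hops"))   # -> True: green -> frog -> hops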

Dealing with uncertainty:

EMYCIN deals with uncertainty by replacing the two boolean values, true and false, with a range of values called certainty factors. These are numbers from -1 (false) to +1 (true), with 0 representing a complete unknown.

To define the logic of certainty factors, we need to define the logical operations, such as AND, OR, and NOT. The first operation to consider is the combination of two distinct pieces of evidence expressed as certainty factors. We want to determine the chances of a patient having disease X.

The final complication is that rules themselves may be uncertain. Certainty factors can be seen as a generalization of truth values.

If in EMYCIN we only cut off the search when one of the premises was absolutely false, then we might have to search through a lot of rules, only to yield answers with very low certainty factors. Instead, EMYCIN arbitrarily cuts off the search and considers a premise false when its certainty factor is below 0.2.
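The usual EMYCIN-style arithmetic for certainty factors can be written in a few lines. The sketch below follows the standard textbook formulas: a conjunction of premises takes the minimum, a rule passes on its conclusion weakened by its own certainty factor, two independent pieces of evidence for the same conclusion are merged so the result stays between -1 and +1, and a premise only counts when it is above the 0.2 cut-off mentioned above.

    CF_CUTOFF = 0.2   # below this, a premise is considered not established

    def cf_and(a, b):
        return min(a, b)           # certainty of a conjunction of premises

    def cf_rule(premise_cf, rule_cf):
        # A rule passes on its conclusion weakened by its own certainty factor.
        return premise_cf * rule_cf if premise_cf > CF_CUTOFF else 0.0

    def cf_combine(a, b):
        # Merge two independent pieces of evidence for the same conclusion.
        if a > 0 and b > 0:
            return a + b - a * b
        if a < 0 and b < 0:
            return a + b + a * b
        return (a + b) / (1 - min(abs(a), abs(b)))

    print(cf_combine(0.4, 0.6))             # 0.76: two positive findings reinforce each other
    print(cf_rule(cf_and(0.9, 0.7), 0.4))   # 0.28: premises 0.9 and 0.7, rule CF 0.4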

Mortgage expert system:

The expert system neither refuses nor grants loans, but it:

  • establishes whether all the conditions for granting a particular type of loan to a given client have been satisfied, and
  • calculates the required term of repayment, according to the borrower's means and the security to be obtained from him.

The goal is to produce applications which are correct in 80 per cent to 90 per cent of all cases, and transfer responsibility for granting or refusing loans to the branch offices.

The expert system provides the branch with a significant amount of assistance simply by producing correct applications for a loan. In many cases the client had to choose between different types of loans, and it was planned that the expert system should enable bank employees to advise clients on the type of loan which best matched their needs. This, too, has been done, and as such it contributes to the bank employees' training.

The main tasks of the expert system for mortgages focused on:

  • the speed of moving a loan through red tape, which management considered to be a very important factor;
  • the reduction of the errors made in filling in the forms;
  • the shortening of the turnaround time, which was too long with the classical, manual procedure.

Simple expert systems constitute the first phase of a loan application for mortgage purposes. After a prototype is made, the construct should be presented to expert loan officers who, working together with the knowledge engineer(s), will refine the first model. But if there is no first try which is simple and understandable, there will be no complex real-life solutions afterwards.

Whether simple or sophisticated, an expert system for mortgages should be provided with explanation facilities that show how it reaches its decisions and hence its advice. The confidence of the loan officer in the AI construct will be increased when this is done in a convincing manner.

[http://en.wikipedia.org/wiki/Expert_systems_for_mortgages]

Genetic Algorithms

A genetic algorithm (GA) is a heuristic (experience-based) search that mimics the process of natural evolution. This heuristic is routinely used to generate useful solutions to optimization and search problems. Genetic algorithms are a part of evolutionary algorithms (EA), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover. GA is inspired by Darwin’s theory about evolution.

For a detailed tutorial see: http://www.obitko.com/tutorials/genetic-algorithms/

Encoding:

Each chromosome has one binary string. Each bit in this string can represent some characteristic of the solution. Or the whole string can represent a number.

Of course, there are many other ways of encoding, and the choice depends mainly on the problem being solved. For example, one can directly encode integers or real numbers; sometimes it is useful to encode permutations, and so on.

Chromosome 1 1101100100110110
Chromosome 2 1101111000011110

Crossover:

Crossover selects genes from the parent chromosomes and creates a new offspring. The simplest way to do this is to choose a crossover point at random, copy everything before this point from the first parent, and copy everything after the crossover point from the second parent.

Crossover can then look like this ( | is the crossover point):

Chromosome 1 11011 | 00100110110
Chromosome 2 11011 | 11000011110
Offspring 1 11011 | 11000011110
Offspring 2 11011 | 00100110110

There are other ways to perform crossover; for example, we can choose several crossover points. Crossover can be rather complicated and depends very much on the encoding of the chromosome. A specific crossover designed for a specific problem can improve the performance of the genetic algorithm.

Mutation:

After a crossover is performed, mutation takes place. This is to prevent all solutions in the population from falling into a local optimum of the problem being solved. Mutation randomly changes the new offspring. For binary encoding we can switch a few randomly chosen bits from 1 to 0 or from 0 to 1. Mutation can then look like this:

Original offspring 1 1101111000011110
Original offspring 2 1101100100110110
Mutated offspring 1 1100111000011110
Mutated offspring 2 1101101100110110

The mutation, like the crossover, depends on the encoding. For example, when we are encoding permutations, mutation could consist of exchanging two genes.
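Single-point crossover and bit-flip mutation can be written directly on the binary strings used above. The sketch below picks a random crossover point and a small mutation probability; both numbers are arbitrary choices for illustration.

    import random

    def crossover(parent1, parent2):
        # Single-point crossover: head of one parent, tail of the other.
        point = random.randint(1, len(parent1) - 1)
        return (parent1[:point] + parent2[point:],
                parent2[:point] + parent1[point:])

    def mutate(chromosome, probability=0.05):
        # Bit-flip mutation: each bit has a small chance of being inverted.
        return "".join(b if random.random() > probability else ("1" if b == "0" else "0")
                       for b in chromosome)

    c1, c2 = "1101100100110110", "1101111000011110"
    o1, o2 = crossover(c1, c2)
    print(mutate(o1), mutate(o2))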

Now that we know a little about what a genetic algorithm does, let us look at the process and how results are obtained.

Process:

  • First step: Initialization

Many individual solutions are randomly generated to form an initial population. The population size may vary from several hundred to several thousand depending on the nature of the problem. Traditionally, the population is generated randomly, covering the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found.

The algorithm is started with this population (whose members are called chromosomes, or the genotype of the genome).

  • Second step: Selection

During each generation, solutions from one population are taken and used to form a new population, hopefully evolving toward better solutions. The solutions selected to form new solutions (offspring) are chosen according to their fitness, or in accordance with the population goal.

  • Third step: Reproduction

The evolution usually starts from a population of randomly generated individuals and proceeds in generations. The fitness function is defined over the genetic representation and measures the quality of the represented solution. In each generation, the fitness of every individual in the population is evaluated; multiple individuals, the "parent pairs", are selected from the current population using probabilistic methods (based on their fitness) and modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm, being improved through repeated application of the mutation, crossover, inversion and selection operators.

  • Fourth step: Termination

Commonly, the algorithm terminates when a maximum number of generations has been produced (this limit may be predefined or reached naturally), when no better results are occurring, or when a satisfactory fitness level has been reached for the population. If the algorithm has terminated because the maximum number of generations was reached, a satisfactory solution may or may not have been found.
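Putting the four steps together, the sketch below runs a complete (if toy-sized) genetic algorithm on the "OneMax" problem, where fitness is simply the number of 1 bits and the optimum is the all-ones string. OneMax, the population size, the mutation rate and the generation limit are my own illustrative choices, not taken from the tutorial cited above; the run usually, but not always, reaches the optimum before the generation limit.

    import random

    LENGTH, POP_SIZE, GENERATIONS = 16, 30, 60

    def fitness(chrom):                       # OneMax: count the 1 bits
        return chrom.count("1")

    def select(population):                   # tournament selection of one parent
        return max(random.sample(population, 3), key=fitness)

    def crossover(a, b):                      # single-point crossover
        point = random.randint(1, LENGTH - 1)
        return a[:point] + b[point:]

    def mutate(chrom, rate=0.02):             # each bit may be replaced with small probability
        return "".join(b if random.random() > rate else random.choice("01") for b in chrom)

    # Initialization: a random population covering the search space.
    population = ["".join(random.choice("01") for _ in range(LENGTH)) for _ in range(POP_SIZE)]

    for generation in range(GENERATIONS):     # selection, reproduction, termination
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(POP_SIZE)]
        best = max(population, key=fitness)
        if fitness(best) == LENGTH:           # terminate early if the optimum is found
            break

    print(generation, best, fitness(best))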

The advantage of a GA lies in the fact that you can see the possible evolution of a species over a long period of time purely by computation.

For example, suppose we want to see how a bacterium evolves in a particular environment, but the changes would happen over years; a GA can give an approximation in a short time.

Genetic algorithms find application in bioinformatics, phylogenetics, computational science, engineering, economics, chemistry, manufacturing, mathematics, physics and other fields.

To get a better understanding of GA, you can refer to:

http://www.rennard.org/alife/english/gavgb.html This one holds a nice Java applet with a tutorial.

Functional Programming

Most functional programming languages today are based on lambda calculus, which was originally developed to achieve a clearer approach to the foundations of mathematics. In the late 1950s, LISP was created, introducing many features now found in functional languages. Scheme and Dylan were later developed to simplify and improve LISP.

John Backus defines functional programs as being built up in a hierarchical way by means of "combining forms" that allow an "algebra of programs". Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style which has come to be associated with functional programming.

Most programming languages use a right-to-left or left-to-right approach to reading code. For example:

while( x <100)

the compiler starts with while, expects parameters to the right of it, and reads them from left to right.

c = a + b;

In this case the compiler reads from right to left: the expression a + b is evaluated first, and the result is then assigned to c.

LISP works from the top to the bottom, going back and forth through the main function, creating an inverted tree shape.

Here is an example of a function call in LISP:

(setq a (read))   ; read a value from the user and store it in the variable a


LISP is often used for scripting in video games, and particularly for dialogue scripting: the situations in which the player can choose between different answers to a question.

The figure below shows how LISP deals with a quiz. As you may imagine, the answer branch can have many nodes representing different answers, each of them leading to a new set of questions and answers.

LISP quiz

Functional programming is considered to be a programming paradigm, which means it is comparable to a methodology. It is all about evaluating expressions (a collection of operations and variables that results in a single value). There is no state and no mutable data; the emphasis is on functions. This is opposed to imperative programming, which emphasizes changes in state resulting in different values at different times. In functional code, the output value of a function depends only on the arguments that are passed to it, so a function f called with the same arguments, even many times and at different times, returns the same result; the output is then easier to predict. Functions are first-class, meaning that it is possible to define and manipulate functions from within other functions, use them as objects and pass them around the program just like variables, which is the key to functional programming.
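These ideas (expressions, no mutable state, first-class functions) can be illustrated even in a language that is not purely functional. The short Python sketch below shows a pure function, whose output depends only on its arguments, and functions being passed around and combined as values; it is a generic illustration, not an example from any of the languages named below.

    from functools import reduce

    def square(x):             # pure: same argument, same result, no state changed
        return x * x

    def compose(f, g):         # functions are first-class: new ones can be built from others
        return lambda x: f(g(x))

    add_one = lambda x: x + 1
    square_then_inc = compose(add_one, square)

    numbers = [1, 2, 3, 4]
    print(list(map(square, numbers)))                 # [1, 4, 9, 16]
    print(square_then_inc(3))                         # 10
    print(reduce(lambda a, b: a + b, numbers, 0))     # 10, expression-style summation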

Functional programming languages include mainly Scheme, Erlang, Objective Caml and Haskell for industrial and commercial applications, but functional programming also finds use in domain-specific languages such as R, Mathematica, J, K, F# and XQuery. Other widely used languages, such as JavaScript or Perl, incorporate functional programming features.

Functional programming provides better support for structured programming: when it is necessary to develop abstractions to structure a program, and to create components that use those abstractions to interface with each other, functional languages make the creation of such simple abstractions easy.

In simple words, functional programming allows shorter and more expressive programs, helps with program correctness, and therefore improves productivity.

Pattern recognition and Machine Learning

In the different AI programs, machine learning helps in carrying out pattern recognition; one example is statistical data mining. In the process of machine learning, a computer is provided with instructions as to how a particular task should be carried out. Machine learning is implemented in two different ways, through supervised and unsupervised learning:

Supervised Learning: In supervised learning, the computer being taught is provided with pattern recognition algorithms. Different examples of how to complete a particular task are presented to the computer; these examples show how the process of completing the task is executed and what the end product looks like. Throughout the process of training/teaching the computer, feedback is also provided.

Unsupervised Learning: In unsupervised learning, the computer does not get any feedback or guidance while learning, and no guidelines are provided either. It means that, unlike in supervised learning, patterns are not labelled or classified beforehand. The process of classifying the information, which the artificial intelligence program must build up itself, thus needs to be very efficient.
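The difference between the two settings can be shown with a toy example: a supervised learner is given labelled points and predicts the label of a new one, while an unsupervised learner is given the same kind of points without labels and has to group them on its own. The data, the nearest-centroid rule and the simple two-means grouping below are all made up for illustration.

    # Supervised: labelled examples are used to place class centroids.
    labelled = [(1.0, "small"), (1.2, "small"), (7.8, "large"), (8.1, "large")]

    def centroid(label):
        values = [x for x, y in labelled if y == label]
        return sum(values) / len(values)

    def classify(x):   # nearest-centroid prediction for a new, unlabelled point
        return min({y for _, y in labelled}, key=lambda lab: abs(x - centroid(lab)))

    print(classify(2.0))   # -> "small"

    # Unsupervised: the same kind of numbers arrive without labels; group by simple 2-means.
    points = [1.0, 1.2, 7.8, 8.1, 2.0]
    c1, c2 = min(points), max(points)          # crude initial cluster centres
    for _ in range(10):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    print(g1, g2)   # two clusters discovered without any labels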

Other Applications of Pattern Recognition
Applications such as computer-aided diagnosis (CAD) make use of pattern recognition software. Other applications include speech recognition, handwriting recognition, industrial inspection, person identification, the classification of text into categories (e.g. spam/non-spam email messages), the automatic recognition of postal codes on envelopes, and so on.

Within medical science, pattern recognition is the basis for computer-aided diagnosis (CAD) systems supporting the doctor's interpretations and findings. The capture of a person's signature with a stylus and overlay has been used since around 1990: the strokes, speed, relative minima and maxima, acceleration and pressure are used to uniquely identify and confirm identity.

Microsoft Kinect also offers the possibility of recognizing the user and signing him or her in automatically. The device takes pictures of the user's face and analyzes the features as vectors. The next time the user's face is captured, the process is repeated and the result is compared with the original picture; if the comparison value is above a minimum threshold, the device opens the corresponding user account. The same can be done with voice recognition using a similar process, except that instead of vector comparison, Kinect uses the pitch and frequencies of the voice signal.
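The text does not describe Kinect's actual algorithm, but the general idea of comparing a stored feature vector with a freshly captured one against a threshold can be sketched as follows. The feature values, the cosine-similarity measure and the threshold are all assumptions made purely for illustration.

    import math

    def cosine_similarity(a, b):
        # 1.0 means identical direction of the two feature vectors, 0.0 means unrelated.
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    enrolled_face = [0.21, 0.80, 0.43, 0.65]   # feature vector stored at enrolment (made up)
    captured_face = [0.25, 0.78, 0.40, 0.66]   # features from the new capture (made up)

    THRESHOLD = 0.95                            # assumed minimum similarity for a match
    if cosine_similarity(enrolled_face, captured_face) >= THRESHOLD:
        print("match: open the corresponding user account")
    else:
        print("no match: do not sign in")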

Image Analysis:
The use of pattern recognition in image analysis proves to be of great help. One of the important image analysis tools used by computers is the neural network. Neural networks and other tools, such as edge detectors based on models of human visual perception, can be used in the process of image analysis.

Pattern Recognition Tests:
The different types of pattern recognition tests can be used in measuring the aptitude of a person. One gets an indication of IQ with such tests. The questions presented in such tests require us to recognize the pattern hidden in the given design, set of numbers, etc.
