The Power of Machine Intelligence for the Business Enterprise

Dear Reader,

This blog contains material from a presentation I made to friends and colleagues earlier this year on the fast-emerging trend of employing Machine Learning (ML) to address enterprise business problems. We stand today at the threshold of an ML wave that closely follows the recent Big Data Analytics wave.

When I submitted my doctoral dissertation in Machine Learning 18 years ago, I recall discussions in seminars and workshops suggesting that the world was another 35 years away from these techniques reaching the mainstream, owing to the enormous computational cost of running machine learning algorithms. At that time, only supercomputers could process machine learning programs in a reasonable time frame. That forecast has now been realised in half the time we estimated. In other words, the speed at which computer technology is advancing in this era is truly awesome!

The purpose of this blog is to:

i) Provide a quick overview of the origin and evolution of ML
ii) Explain what the various terms and disciplines mean
iii) Discuss latest achievements and trends, including the current investment landscape
iv) Indicate what kind of future systems are imminent

I hope you find the material useful.


Tirthankar RayChaudhuri,

www.turingpointservices.com

Sydney
October 2015

---------------------------------------------------------------------------------------------------------------






For decades researchers have worked on developing theories and techniques to create computing systems that can emulate and on occasion outperform the human brain.



THE IMITATION GAME: CAN MACHINES THINK?

The definition of when a machine can be classified as ‘intelligent’ was provided by Alan Turing, while working at The University of Manchester, in his 1950 paper “Computing Machinery and Intelligence” (Turing, 1950, p. 460). His criterion is called the Turing Test.



It opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."

Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"

This question, Turing believed, is one that can actually be answered.

In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".


The Turing test is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.


Alan Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another.


The conversation would be limited to a text-only channel such as a computer keyboard and screen so that the result would not be dependent on the machine's ability to render words as speech.



If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test.

The test does not check the ability to give correct answers to questions, only how closely answers resemble those a human would give.

ARTIFICIAL INTELLIGENCE


Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field that studies how to create computers and computer software capable of intelligent behaviour.


Major AI researchers and textbooks define this field as "the study and design of intelligent agents", in which an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.


John McCarthy, who coined the term in 1955, defined it as "the science and engineering of making intelligent machines".

The Dartmouth Summer Research Project on Artificial Intelligence was the name of a 1956 undertaking now considered the seminal event for artificial intelligence as a field. Organised by John McCarthy (then at Dartmouth College) and formally proposed by McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon, the proposal is credited with formally introducing the term 'artificial intelligence'.



As a consequence of these endeavours, the following popular disciplines have emerged:


•Cognitive science,

•Machine learning,

•Deep learning and

•Natural language processing



Each of these has received significant advanced contributions from numerous researchers.

In the next sections these disciplines are described in more detail.

COGNITIVE SCIENCE

Cognitive Science (as per the shell diagram below) is an interdisciplinary study of the human mind with an emphasis on acquisition and processing of knowledge and information.



The contributing disciplines to Cognitive Science are described in the next sections.

Artificial Intelligence (Conventional): Expert Systems.

An expert system is a computer system that emulates the decision-making ability of a
human expert.

Expert systems are designed to solve complex problems by reasoning about knowledge, represented primarily as if-then rules rather than through conventional procedural code.

The first expert systems were created in the 1970s and then proliferated in the 1980s.

Expert systems were among the first truly successful forms of AI software.

An expert system is divided into two sub-systems:

The Inference Engine. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging capabilities.

The Knowledge Base. The knowledge base represents facts and rules.
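
To make this concrete, here is a minimal sketch in Python of a forward-chaining inference engine. The rules and facts are made-up medical examples, not taken from any real expert system.

    # A minimal sketch of an expert system: the knowledge base holds facts
    # and if-then rules; the inference engine applies rules to known facts
    # to deduce new facts. The rules below are made-up examples.
    RULES = [
        ({"has_fever", "has_cough"}, "flu_suspected"),
        ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
    ]

    def infer(facts):
        """Forward chaining: keep applying rules until no new facts emerge."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)  # a newly deduced fact
                    changed = True
        return facts

    print(infer({"has_fever", "has_cough", "short_of_breath"}))
    # deduces 'flu_suspected' and then 'refer_to_doctor'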


Linguistics is the scientific study of language.

One of the main objectives of Linguistics is to elucidate the mental and
physical processes underlying the perception and production of language.



Noam Chomsky, described as the father of modern linguistics, considers language an essential mental faculty, universally endowed to all human beings.

He postulated that linguistic structures are partly innate, reflecting an
underlying similarity, a Universal Grammar in all human languages.

There are three major aspects of linguistic study:

1. language form (grammar, morphology, phonetics),

2. language meaning (semantics) and

3. language in context (pragmatics).


Neuroscience is the scientific study of the nervous system. The scope of neuroscience has broadened to include different approaches used to study the molecular, cellular, developmental, structural, functional, evolutionary, computational, and medical aspects of the nervous system.

     
Neurons (or nerve cells) are one of the main constituents of the brain. The brain’s neocortex is involved in higher functions such as sensory perception, generation of motor commands, spatial reasoning, conscious thought and language. Recent theoretical advances in neuroscience have also been aided by the study of neural networks (subsymbolic models of the neocortex).


Cognitive psychology is the study of mental processes such as attention, language use, memory, perception, problem solving, creativity and thinking.


Much of the work derived from cognitive psychology has been integrated into various other modern disciplines of psychological study, including educational psychology, social psychology, personality psychology, abnormal psychology, developmental psychology, and consumer psychology.


Philosophy is the study of general and fundamental problems, such as those connected with reality, existence, knowledge, values, reason, mind and language.

The ancient Greek word φιλοσοφία (philosophia) was probably coined by Pythagoras and literally means "love of wisdom" or "friend of wisdom."


Philosophy has been divided into many sub-fields. It has been divided:

•chronologically (e.g., ancient and modern);

•by topic (the major topics being epistemology, logic, metaphysics, ethics, and aesthetics); and

•by style (e.g., analytic philosophy).

MACHINE LEARNING


Machine Learning is a field of computer science.

It evolved from the study of pattern recognition and computational learning theory. Machine Learning explores the construction and study of algorithms that can learn from data and make predictions. Common examples of Machine Learning applications are spam filtering, optical character recognition (OCR), search engines and computer vision.

In business contexts Machine Learning methods may be referred to as Predictive Analytics.



Examples of Machine Learning algorithms are listed below; a short code sketch of the first follows the list.

1. Decision tree learning
2. Association rule learning
3. Artificial neural networks
4. Inductive logic programming
5. Support vector machines
6. Clustering
7. Bayesian networks
8. Reinforcement learning
9. Representation learning
10. Similarity and metric learning
11. Sparse dictionary learning
12. Genetic algorithms
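
As a concrete illustration of the first item, here is a minimal sketch of decision tree learning in Python. It uses scikit-learn and its bundled iris dataset purely as a convenient, assumed example; the blog itself does not prescribe any library.

    # A minimal sketch of decision tree learning with scikit-learn.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Learn a tree of if-then splits from labelled examples...
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

    # ...then predict labels for unseen data.
    print("held-out accuracy:", tree.score(X_test, y_test))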

A longer list of Machine Learning applications is shown in the diagram below.




A list of highly advanced, and in some cases intimidating, physical automations (refer also to the final section on Risks/Challenges) that employ Machine Learning in varying degrees is given below:


•Self-driving vehicles
•Robots in general
•Unmanned aircraft (drones)
•Smart weapons, e.g., guided missiles
•Guided rockets
•Unmanned spacecraft
•Artificial satellites (e.g., for communication and weather data)





DEEP LEARNING


Deep learning (deep machine learning, deep structured learning, hierarchical learning, or sometimes simply DL) is formally defined as a branch of Machine Learning based on a set of algorithms that attempt to model high-level abstractions in data by using model architectures (complex or otherwise) composed of multiple non-linear transformations.


More informally, Deep Learning is sometimes dismissed as merely a buzzword, or a rebranding of Artificial Neural Networks with multiple ‘hidden layers’.

The term "deep learning" gained traction in the mid-2000s after a publication by
Geoffrey Hinton and Ruslan Salakhutdinov showed how a many-layered feed forward
neural network could be effectively pre-trained one layer at a time, treating each layer
in turn as an unsupervised restricted Boltzmann machine, then using supervised back propagation for fine-tuning.
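
For a feel of what 'multiple hidden layers' means in code, here is a minimal sketch using scikit-learn's MLPClassifier (an assumed library choice). Note that it trains end-to-end with backpropagation and does not reproduce the layer-wise restricted Boltzmann machine pre-training described above.

    # A minimal sketch of a many-layered feedforward neural network.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Three hidden layers of non-linear units between input and output.
    net = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500,
                        random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", net.score(X_test, y_test))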


The real impact of deep learning in industry started in large-scale speech recognition around 2010, following a project conducted at Microsoft Research by Geoff Hinton and Li Deng.

       
In March 2013, Geoff Hinton and two of his graduate students, Alex Krizhevsky and Ilya Sutskever, were hired by Google.

Their work focuses both on improving existing machine learning products at Google and on helping to deal with the growing amount of data that Google holds.

Google also purchased Hinton's company, DNNresearch.



Towards the end of 2013, Yann LeCun of New York University, another Deep Learning guru, was appointed head of the newly created AI Research Lab at Facebook.

In 2014, Microsoft established the Deep Learning Technology Center in its MSR division, amassing deep learning experts for application-focused activities.

NATURAL LANGUAGE PROCESSING

Natural Language Processing (or NLP) involves

•Natural language understanding, that is, enabling computers to derive meaning from human or natural language input, and also

•Natural language generation.

Up to the 1980s, most NLP systems were based on complex sets of hand-written rules.

Modern NLP algorithms are based on machine learning, especially statistical machine learning.
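
The contrast with hand-written rules is easy to see in code. Here is a minimal sketch of a statistical NLP classifier, a bag-of-words Naive Bayes spam filter; the toy texts and the choice of scikit-learn are assumptions for illustration.

    # A minimal sketch of statistical NLP: learn word statistics per class
    # instead of writing rules by hand. Texts and labels are made up.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = ["win a free prize now", "meeting agenda attached",
             "cheap pills online", "quarterly report draft"]
    labels = ["spam", "ham", "spam", "ham"]

    model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)
    print(model.predict(["free pills now"]))  # -> ['spam']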

NLP is closely related to Computational Linguistics, a field concerned with the statistical or
rule-based modelling of natural language from a computational perspective.

A recent powerful application of NLP is semantic search (improving search accuracy by
understanding the searcher’s intent and context to generate more relevant results) by Google.

Geoff Hinton’s deep learning techniques are being employed to solve NLP problems.

WHY MACHINE INTELLIGENCE TODAY IS SUCH A ‘HOT’ TOPIC

While researchers have been working on these disciplines for over five decades, there was previously a major roadblock to migrating these techniques from the Research Lab to the Business Enterprise.

It was the enormous computational expense involved.

Today this is no longer an issue.

The good news is that enterprise servers today have the computing power and capability of
yesterday's supercomputers!

Servers, processors, processing memory, network and storage capabilities and speeds today are hundreds of times greater than they used to be.

As a consequence highly complex computing algorithms/data processing involving enormous volumes of data can be run in a few minutes, even seconds, instead of days and sometimes weeks.

Business automation has come a very long way since the early days of calculators and vending
machines.

In today's Business Enterprise world ERP, CRM, OLAP and BI are commonplace terms.

These are multi-tiered complex ‘traditional’ software systems that run on distributed computer infrastructures and exhibit high-performance in terms of speed and user-loads. Such systems may include advanced logical reasoning and some analytics, but do not have machine intelligence features yet.

However the Business Enterprise is now migrating at a fast pace towards the new ‘digital’ world
of Cloud, Mobile Apps, Social Media and Big Data Analytics.

With the plethora of data (big data) available on these digital platforms, leading organizations such as
Google and Facebook are hiring the gurus of Machine Learning to make their platforms and offerings smarter.

Thus the next generation of ‘Enterprise Machine Intelligence Systems’ is imminent!

RECENT ADVANCEMENTS

Recent machine intelligence enterprise-level developments/achievements are the outcome of integrated, multi-disciplinary contributions from various traditional areas of study.

Some of these are presented below.

Numenta is a machine intelligence company that has developed a cohesive theory, core software, technology and applications based on the principles of the brain’s neocortex.

It is essentially a set of tools and a runtime engine, including embedded learning algorithms, that enables self-training and pattern recognition based on the theories of hierarchical temporal memory.

Watson is an artificially intelligent ‘cognitive computer’ system capable of answering questions posed in natural language. IBM built this question answering (QA) computing system to apply advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies to the field of open-domain question answering. Watson has been applied to lung cancer treatment at the Memorial Sloan-Kettering Cancer Center.

Microsoft Azure Machine Learning (ML) Studio is a service that a developer can use to build predictive analytics models (using training datasets from a variety of data sources) and then easily deploy those models for consumption as cloud web services. Azure ML Studio provides rich functionality to support many end-to-end workflow scenarios for constructing predictive models, including easy access to common data sources, rich data exploration and visualization tools, popular ML algorithms, and powerful model evaluation, experimentation, and web publication tooling.

BayesiaLab is a universal analytics platform (available for Windows/Mac/Unix) which provides scientists with a comprehensive “lab” environment for machine learning, knowledge modelling, diagnosis, analysis, simulation, and optimization, all based on the Bayesian network formalism, a probabilistic graphical model.
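
The probabilistic reasoning underneath a Bayesian network can be shown in a few lines. Here is a minimal sketch of a two-node network (Disease -> Symptom) in plain Python; all the probabilities are made-up illustrative numbers, not taken from BayesiaLab.

    # Bayes' rule on a two-node network: Disease -> Symptom.
    p_disease = 0.01                # prior P(D)
    p_symptom_given_d = 0.90        # P(S | D)
    p_symptom_given_not_d = 0.05    # P(S | not D)

    # Marginal P(S): sum over both states of the parent node.
    p_symptom = (p_symptom_given_d * p_disease
                 + p_symptom_given_not_d * (1 - p_disease))

    # Posterior P(D | S) by Bayes' rule.
    print(round(p_symptom_given_d * p_disease / p_symptom, 3))  # ~0.154

Even with a 90% true-positive rate, the low prior keeps the posterior around 15%; a full Bayesian network chains many such updates across a graph of variables.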

Google Now Speech Recognition Technology using Deep Learning. Google claim that this technology now has a mere 8% error rate in recognizing spoken words. Their neural networks are currently over 30 layers deep.



FUTURE ENTERPRISE MACHINE INTELLIGENCE SYSTEMS


What kind of human reasoning processes can these future systems emulate and improve upon?

Today Machine Intelligence can emulate any kind of logical human reasoning process based upon

•information and
•learning from available information, i.e., data

Owing to the high processing power, I/O speeds, high network bandwidth and enormous data storage capabilities of modern enterprise infrastructure, Enterprise Machine Intelligence Software of the future will automatically process and analyse ‘big data’ information hundreds of times faster than the average human intellect.

What range and kind of business enterprise problems can they resolve?

A wide range of business enterprise problems can be solved effectively by Machine Intelligence, e.g., customer behaviour/response predictions, complex financial/market analysis and predictions, medical diagnosis, capacity management of engineering systems/services, complex task scheduling/rosters, predicting mining prospects, security analysis, macro-economic predictions and so on. A small code sketch of the first of these follows.
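
Here is a minimal sketch of customer response prediction framed as learning from past behaviour. The features, numbers and library choice are illustrative assumptions only.

    # A minimal sketch of customer response prediction: logistic regression
    # on made-up features (tenure in months, monthly spend in dollars).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.array([[2, 20], [30, 80], [4, 25], [40, 90],
                  [6, 30], [36, 70], [3, 15], [48, 95]])
    y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = responded to past offer

    model = LogisticRegression().fit(X, y)
    # Predicted probability that a new customer will respond.
    print(model.predict_proba([[24, 60]])[0, 1])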




What kind of future systems are we seeking to build?

A few upcoming Enterprise Machine Intelligence Systems are listed below
•ERP, CRM, OLAP and BI systems with Machine Learning and Prediction capabilities
•Business Analytics systems based on Machine Learning Algorithms
•Advanced Speech Recognition and NLP Interfaces
•Intelligent Personal Agents on Smart hand-held devices

However the list of potential future developments is clearly without limit.


THE INVESTMENT LANDSCAPE


The investment landscape today for AI start-ups looks very promising, as per the following diagrams.



RISKS / CHALLENGES                                       


Are there risks/challenges in building Enterprise Machine Intelligence Software?

A number of pessimists (including eminent scientist Stephen Hawking) have warned that ‘recreating the power and creativity of the human mind in software’ is fraught with risk and ‘could see the human race become extinct’.

However, Machine Intelligence at this point in time is merely the deployment of advanced reasoning and search methods, which present-day enterprise servers have the grunt to support.

At this point we are nowhere near "recreating the power and creativity of the human mind in software", contrary to what these critics appear to think.

Nonetheless it is worthwhile setting up some regulatory guidelines for this emerging technology.

----------------------------------------------------------------------------------------------------------------------
