Foundations of Real-World Intelligence
Edited by Yoshinori Uesaka, Pentti Kanerva, and Hideki Asoh
In 1992, Japan's Ministry of International Trade and Industry (MITI) began a research program in Real World Computing as a successor to, and complement of, the Fifth Generation Computing program of the previous decade. Its objective is to lay a foundation for, and to pursue the technical realization of, humanlike flexible and intelligent information processing. This book collects the results of ten years of original research by six research laboratories, three Japanese and three European, whose focus has been the theoretical and algorithmic foundations of intelligence as manifested in the real world and in our dealings with it.
Real-world intelligent systems handle complex, uncertain, dynamic, multimodal information in real time. Both explicit and implicit information are important, so we need a novel integrated framework for representing knowledge and making inferences based on it. It is impossible to preprogram all the knowledge needed to cope with the variety and complexity of real environments; learning and adaptation are therefore keys to intelligence. Learning is a kind of metaprogramming strategy: instead of writing programs for specific tasks, we must write programs that modify themselves based on a system's interaction with its environment.
The book includes chapters on inference and learning with graphical models, approximate reasoning, evolutionary computation and beyond, the methodology of distributed and active learning, symbol–pattern integration using multilinear functions, and computing with large random patterns. The treatment is mathematically rigorous yet accessible, and the discussion of issues is of general interest to the educated reader. The book provides excellent reading for graduate courses in computer science, cognitive science, artificial intelligence, and applied statistics.
Yoshinori Uesaka is a professor at the Science University of Tokyo. Pentti Kanerva is a senior researcher at the Swedish Institute of Computer Science. Hideki Asoh is a senior researcher at the Electrotechnical Laboratory in Tsukuba City, Japan.
 Preface
 General Introduction
RWI Research Center, Electrotechnical Laboratory
 1 Real-World Intelligence and the Real-World Computing Program
Nobuyuki Otsu
 1.1 Outline of the RWC Program
 1.2 Real-World Intelligence
 1.3 Concluding Remarks
 2 Theoretical and Algorithmic Foundations of Real-World Intelligence
Hideki Asoh
 2.1 Objective
 2.2 Approach
 2.3 Research Issues
 2.4 Organization of R&D and This Book
 2.5 Concluding Remarks
 References
 I Inference and Learning with Graphical Models
RWI Research Center, Electrotechnical Laboratory
 3 An Overview of Theoretical Foundation Research in RWI Research Center
Hideki Asoh, Kazuhisa Niki, Koiti Hasida, Shotaro Akaho, Masaru Tanaka, Yoichi Motomura, Tatsuya Niwa, and Kenji Fukumizu
 3.1 Models and Algorithms
 3.2 Frameworks of Learning
 4 BAYONET: Bayesian Network on Neural Network
Yoichi Motomura
 4.1 Bayesian Networks Based on Neural Networks
 4.2 Implementation
 4.3 Application
 4.4 Conclusion
 5 Multivariate Information Analysis
Kazuhisa Niki, Junpei Hatou, Toshiaki Kawamata, and Ikou Tahara
 5.1 Expression of Multivariate Analysis
 5.2 Simulation
 5.3 Structure Analyses of fMRI Data
 5.4 Extended Functional Connectivity Analysis
 5.5 Conclusion
 6 Dialogue-based Map Learning in an Office Robot
Hideki Asoh, Yoichi Motomura, Toshihiro Matsui, Satoru Hayamizu, and Isao Hara
 6.1 Dialogue-based Map Acquisition
 6.2 System and Experiment
 6.3 Discussion
 6.4 Related Work
 6.5 Conclusion and Future Work
 7 Conclusion
 References
 II Approximate Reasoning: Real-World Applications of Graphical Models
RWC Theoretical Foundation SNN Laboratory
Bert Kappen, Stan Gielen, Wim Wiegerinck, Ali Taylan Cemgil, Tom Heskes, Marcel Nijman, and Martijn Leisink
 8 Mean Field Approximations
 8.1 Mean Field Approximation with Structure
 8.2 Boltzmann Machine Learning Using Mean Field Theory and Linear Response Correction
 8.3 Secondorder Approximations for Probability Models
 8.4 Discussion
 9 Medical Diagnosis
 9.1 Probabilistic Modeling in the Medical Domain
 9.2 Promedas, a Demonstration DSS
 9.3 Discussion
 10 Automatic Music Transcription
 10.1 Dynamical Systems and the Kalman Filter
 10.2 Tempogram Representation
 10.3 Model Training
 10.4 Evaluation
 10.5 Discussion and Conclusions
 III Evolutionary Computation and Beyond
RWC Theoretical Foundation GMD Laboratory
Heinz Mühlenbein and Thilo Mahnig
 11 Analysis of the Simple Genetic Algorithm
 11.1 Definitions
 11.2 Proportionate Selection
 11.3 Recombination
 11.4 Selection and Recombination
 11.5 Schema Analysis Demystified
 12 The Univariate Marginal Distribution Algorithm (UMDA)
 12.1 Definitions of UMDA
 12.2 Computing the Average Fitness
 13 The Science of Breeding
 13.1 Single Trait Theory
 13.2 Tournament Selection
 13.3 Analytical Results for Linear Functions
 13.4 Numerical Results for UMDA
 13.5 Royal Road Function
 13.6 Multimodal Functions Suited for UMDA Optimization
 13.7 Deceptive Functions
 13.8 Numerical Investigations of the Science of Breeding
 14 Graphical Models and Optimization
 14.1 Boltzmann Selection and Convergence
 14.2 Factorization of the Distribution and the FDA
 14.3 A New Annealing Schedule for the Boltzmann Distribution
 14.4 Finite Populations
 14.5 Population Size, Mutations, and Bayesian Prior
 14.6 Constraint Optimization Problems
 15 Computing a Bayesian Network from Data
 15.1 LFDA—Learning a Bayesian Factorization
 15.2 Optimization, Dependencies, and Search Distributions
 16 System Dynamics Approach to Optimization
 16.1 The Replicator Equation
 16.2 Boltzmann Selection and the Replicator Equation
 16.3 Some System Dynamics Equations for Optimization
 16.4 Optimization of Binary Functions
 17 Three Royal Roads to Optimization
 18 Conclusion and Outlook
 References
 IV Distributed and Active Learning
RWC Theoretical Foundation NEC Laboratory
 19 Distributed Cooperative Bayesian Learning
Kenji Yamanishi
 19.1 Introduction
 19.2 Plain Model
 20 Learning Special Decision Lists
Atsuyoshi Nakamura
 20.1 Preliminaries
 20.2 Algorithm SLossUpdate
 20.3 Algorithm
 20.4 Algorithm SFixedShareUpdate
 21 The LobPass Problem
Jun'ichi Takeuchi, Naoki Abe, and Shunichi Amari
 21.1 Preliminaries
 21.2 Upper Bounds on the Expected Regret
 21.3 Concluding Remarks
 References
 V Computing with Large Random Patterns
RWC Theoretical Foundation SICS Laboratory
Swedish Institute of Computer Science
 22 Analogy as a Basis of Computation
Pentti Kanerva
 22.1 Computer as a Brain and Brain as a Computer
 22.2 Artificial Neural Nets as Biologically Motivated Models of Computing
 22.3 Description vs. Explanation
 22.4 The Brain as a Computer for Modeling the World, and Our Model of the Brain's Computing
 22.5 Pattern Space Representations
 22.6 Simple Analogical Retrieval
 22.7 Learning from Examples
 22.8 Toward a New Model of Computing
 23 The Sparchunk Code: A Method to Build Higher-Level Structures in a Sparsely Encoded SDM
Gunnar Sjödin
 23.1 Encoding HigherLevel Concepts
 23.2 The SDM Model
 23.3 Nonscaling for a Constant Error Probability ε
 23.4 Sparse Coding
 23.5 The Sparchunk Code
 23.6 Cleanup of the Sparchunk Code
 23.7 Summary
 23.8 Appendix
 24 Some Results on Activation and Scaling of Sparse Distributed Memory
Jan Kristoferson
 24.1 Different Activation Probabilities for Writing and Reading?
 24.2 Scaling Up the Memory
 25 A Fast Activation Mechanism for the Kanerva SDM Memory
Roland Karlsson
 25.1 The Jaeckel SelectedCoordinate Design
 25.2 The New SelectedCoordinate Design
 25.3 Results
 25.4 Conclusion
 26 From Words to Understanding
Jussi Karlgren and Magnus Sahlgren
 26.1 The Meaning of ‘Meaning’
 26.2 A Case in Point: Information Access
 26.3 Words as Content Indicators
 26.4 Latent Semantic Analysis
 26.5 Random Indexing
 26.6 What Is Text, from the Perspective of Linguistics?
 26.7 The TOEFL Test
 26.8 Experimental Set-Up
 26.9 Results and Analysis
 26.10 Some Cognitive Implications
 26.11 Implications for Information Access
 26.12 Meaning in Text
 References
 Index
Published September 1, 2001
ISBN (Paperback): 1575863383 (9781575863382)
ISBN (Cloth): 1575863391 (9781575863399)
ISBN (Electronic): 1684000165 (9781684000166)
Subject: Artificial intelligence; Neural networks; Evolutionary programming

Distributed by the University of Chicago Press
