

35th International Workshop on

Bayesian Inference and Maximum Entropy Methods in

Science and Engineering

Advisory Committee

G. L. Bretthorst Washington Univ., USA

A. Caticha University at Albany (SUNY), USA

J. Center Autonomous Exploration, USA

G. Erickson Boise State Univ., USA

R. Fischer IPP, Germany

P. M. Goggans University of Mississippi, USA

K. H. Knuth University at Albany (SUNY), USA

R. Niven UNSW, Canberra, Australia

A. Mohammad-Djafari LSS-CNRS, France

C. Rodriguez University at Albany (SUNY), USA

J. Skilling Cambridge, UK

U. von Toussaint IPP, Germany

Local Organizers

Adom Giffin Clarkson University

Kushani De Silva Clarkson University

Kevin H. Knuth University at Albany (SUNY)

MaxEnt 2015 is sponsored by

E.T. Jaynes Foundation

MaxEnt Workshops Inc.

Clarkson University

Entropy

July 19-24, 2015

http://www.clarkson.edu/inference/MaxEnt2015/


Tutorials Sunday July 19, 2015

Sunday Morning 8:30 – 9:30 Registration

9:30 – 10:15 Adom Giffin Tutorial 1, Part 1: Bayesian Probability Theory

10:15 – 10:30 Break

10:30 – 11:15 Adom Giffin Tutorial 1, Part 2: Maximum Entropy

11:15 – 11:45 Break

11:45 – 12:30 Kevin H. Knuth Tutorial 2, Part 1: Bayesian Model Testing

12:30 – 12:45 Break

12:45 – 1:30 Kevin H. Knuth Tutorial 2, Part 2: Bayesian Model Testing

Lunch 1:30 – 2:30

Sunday Afternoon 2:30 – 3:15 Udo von Toussaint Tutorial 3, Part 1: Numerical Methods for Bayesian Computation

3:15 – 3:30 Break

3:30 – 4:15 Udo von Toussaint Tutorial 3, Part 2: Numerical Methods for Bayesian Computation

4:15 – 6:00 Break

Reception 6:00 – 9:00


Monday July 20, 2015 Monday Morning 8:30 – 9:15 Registration

9:15 – 9:45 Welcome

9:45 – 10:45 Invited Speaker: Nestor Caticha Source localization by Entropic Inference and Backward Renormalization Group priors

10:45 – 11:30 Break

11:30 – 12:00 Yannis Kalaidzidis New Bayesian Foreground/Background Discrimination (BFBD) Algorithm for Denoising Images of Fluorescent Microscopy

12:00 – 12:30 Fred Daum Bayesian Big Bang

Lunch 12:30 – 2:15

Monday Afternoon 2:15 – 3:15 Invited Speaker: Deniz Gencaga A Brief Look at Modeling and Learning Dynamical Systems

3:15 – 4:00 Break

4:00 – 4:30 Kevin H. Knuth Why Square Roots of Probabilities?

4:30 – 5:00 James Lyons Walsh An Information Physics Derivation of Geodesic Equations from the Influence Network


Tuesday July 21, 2015 Tuesday Morning 8:30 – 9:15 Registration

9:15 – 10:15 Invited Speaker: Purushottam Dixit Maximum Path Entropy Random Walks

10:15 – 10:45 Sergio Davis Applications of the Divergence Theorem in Bayesian Inference and MaxEnt

10:45 – 11:30 Break

11:30 – 12:00 Keith Earle Parameter Estimation of Saturated Magnetic Resonance Spectra

12:00 – 12:30 Diego González Liouville's Theorem from the Principle of Maximum Caliber in Phase Space

Lunch 12:30 – 2:15

Tuesday Afternoon 2:15 – 3:15 Invited Speaker: Ariel Caticha Entropic Dynamics

3:15 – 4:00 Break

4:00 – 4:30 Barrie Stokes Equidistribution Testing with the ECT

4:30 – 5:00 Steven H. Waldrip Maximum Entropy Derivation of Quasi-Newton Methods

5:00 – 6:00 Break

Poster Session 6:00 – 9:00


Tuesday July 21, 2015 Poster Session 6:00 – 9:00

Abedi, Mohammad Entropic Dynamics on Curved Spaces

Bartolomeo, Daniel M. Entropic Dynamics: the Schrodinger Equation and its Bohmian Limit

Caticha, Nestor Sympatric Multiculturalism or How Distrust Polarizes Societies of Bayesian Agents into Groups

Gai, Anthony D. Bayesian Model Testing of Models for Ellipsoidal Variation on Stars Due to Hot Jupiters

Kim, Sunil L. Describing Simple States of Knowledge with Exact Probabilities

Mubeen, Muhammad Asim Bayesian Regularization of Diffusion Tensor Maps Using Gibbs Prior on Riemannian Manifold

Nawaz, Shahid Entropic Dynamics of Spin-Half Particles

Placek, Ben The Detection and Characterization of Exoplanets with EXONEST

Udagedara, Indika Gayani Kumari Reduced Order Modeling for Monte Carlo Simulations in Radiation Transport

Ximenez de Azevedo Neto, Silvio On Recent Information Theory Applications in Finance


Wednesday July 22, 2015 Wednesday Morning 8:30 – 9:15 Registration

9:15 – 10:15 Invited Speaker: Renaldus Urniezius Probabilistic Convex Programming for Semi-Globally Optimal Resource Allocation

10:15 – 10:45 Ariel Caticha Geometry from Information Geometry

10:45 – 11:30 Break

11:30 – 12:00 H. H. Takada On Execution Strategies and Minimum Discrimination Information Principle

12:00 – 12:30 H. H. Takada Intraday Trading Volume and Non-Negative Matrix Factorization

Lunch 12:30 – 2:15

Wednesday Afternoon 2:15 – 2:45 H.R.N. van Erp A Bayesian Decision Theory, PART I

2:45 – 3:15 H.R.N. van Erp A Bayesian Decision Theory, PART II

3:15 – 4:00 Break

4:00 – 4:30 R. Wesley Henderson A Simple Approach to Parallelizing Nested Sampling

4:30 – 5:00 Udo von Toussaint Investigations on Bayesian Uncertainty Quantification with Two Examples


Thursday July 23, 2015 Thursday Morning 8:30 – 9:15 Registration

9:15 – 10:15 Invited Speaker: Robert Niven Maximum Entropy Analysis of Flow Networks with Graphical Constraints

10:15 – 10:45 Robert Niven Bayesian Cyclic Networks, Mutual Information and Reduced-Order Bayesian Inference

10:45 – 11:30 Break

11:30 – 12:00 Ali Kahirdeh Acoustic Emission Entropy as a Measure of Damage in Materials

12:00 – 12:30 Luke K. Rumbaugh Blind Signal Separation for Underwater LIDAR Applications

Lunch 12:30 – 2:15

Thursday Afternoon 2:15 – 2:45 J. Landes Objective Bayesian Nets for Consistent Datasets

2:45 – 3:15 Selman Ipek Relational Entropic Dynamics of Many Particles

3:15 – 5:00 Break

Excursion/Banquet 5:00 – 9:00


Friday July 24, 2015 Friday Morning 8:30 – 9:15 Registration

9:15 – 10:15 Humberto Loguercio Functional Identities in Superstatistics

10:15 – 10:45 Sima Sharifirad Improved Gini Index and Enhanced SMOTE by Maximum Entropy

Lunch 12:30 – 2:15

Business Meeting 2:15 – 3:15


ABSTRACTS


Monday July 20, 2015 Monday Morning 8:30 – 9:15 Registration

9:15 – 9:45 Welcome

9:45 – 10:45 Invited Speaker: Nestor Caticha Source localization by Entropic Inference and Backward Renormalization Group priors

10:45 – 11:30 Break

11:30 – 12:00 Yannis Kalaidzidis New Bayesian Foreground/Background Discrimination (BFBD) Algorithm for Denoising Images of Fluorescent Microscopy

12:00 – 12:30 Fred Daum Bayesian Big Bang

Lunch 12:30 – 2:15

Monday Afternoon 2:15 – 3:15 Invited Speaker: Deniz Gencaga A Brief Look at Modeling and Learning Dynamical Systems

3:15 – 4:00 Break

4:00 – 4:30 Kevin H. Knuth Why Square Roots of Probabilities?

4:30 – 5:00 James Lyons Walsh An Information Physics Derivation of Geodesic Equations from the Influence Network


Source localization by Entropic Inference and Backward Renormalization Group priors

Nestor Caticha

Instituto de Física, Universidade de São Paulo, Brazil

Abstract

A systematic method of transferring information from coarser to finer resolution based on Renormalization Group (RG) transformations is introduced. It permits building informative priors in finer scales from posteriors in coarser scales since, under some conditions, RG transformations in the space of hyperparameters can be inverted. These priors are updated using renormalized data into posteriors by Maximum Entropy. The resulting inference method, Backward RG (BRG) priors, is tested by doing simulations of a functional Magnetic Resonance Imaging (fMRI) experiment. Its results are compared with a Bayesian approach working in the finest available resolution. Using BRG priors, sources can be partially identified even when signal-to-noise ratio levels are down to ∼ −25 dB, improving vastly on the single-step Bayesian approach. For low levels of noise the BRG prior is not an improvement over the single-scale Bayesian method. Analysis of the histograms of hyperparameters can show how to distinguish whether the method is failing, due to very high levels of noise, or whether the identification of the sources is, at least partially, possible.

Figure 1: Magnetization or expected value of activity for σ = 15. (a,c,e,f) Pure Bayes, (b,d,g,h) BRG priors. Left: recovered images; right: histograms of signal (blue, dashed) and signal (grey, bars).

Key Words: Priors, Entropic Inference, Renormalization Group, Inverse problem


NEW BAYESIAN FOREGROUND/BACKGROUND DISCRIMINATION (BFBD) ALGORITHM FOR DE-NOISING IMAGES OF FLUORESCENT MICROSCOPY

Yannis Kalaidzidis1,2, Hernán Morales-Navarrete1, Marino Zerial1

(1) Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany

(2) Faculty of Bioengineering and Bioinformatics, Moscow State University, Moscow, Russia ([email protected])

Abstract

A major problem for the image analysis of live cells and thick tissue sections is the high noise level. In live cell imaging (2D+t), the possible intensity of illumination is limited to the level at which the cells would not suffer from phototoxicity. The 3D images of a thick tissue have high background, especially in the case of fluorescent markers that yield a high level of diffuse staining (e.g. actin staining throughout the cytoplasm). Both effects lead to a low signal-to-noise ratio in the images, which poses challenges for quantification. In general, all de-noising algorithms offer a trade-off between noise suppression and blurring of microscopic structure. The class of edge-preserving algorithms, such as median filtering and anisotropic diffusion, was developed to decrease the blurring effect while keeping a high level of de-noising. Unfortunately, the low contrast and small size of intracellular structures make these algorithms inefficient for de-noising images of intracellular fluorescent microscopy. We took advantage of the statistical anisotropy of (3D and 2D+t) images in the third dimension. High-resolution light microscopes have a point-spread function that is elongated (2-3 fold) in the Z-direction. This leads to a higher correlation of intensities in Z relative to the X-Y direction. In live cell imaging, sequential frames have higher intensity correlations in time than in the X-Y plane. We use this anisotropy to make an outlier-tolerant Bayesian estimate of the background (the low-frequency part of the image) and separate it from the foreground. Subsequently, the background estimate and foreground signal are summed to generate a new de-noised image. Our Bayesian foreground/background discrimination (BFBD) algorithm outperforms not only standard algorithms such as median filtering, Gaussian low-pass filtering and anisotropic diffusion, but also (in both quality and computational performance) more elaborate ones, such as 'PureDenoise' (Blu & Luisier, 2007; Luisier, 2010; plugin of ImageJ) and 'Edge preserving de-noising and smoothing' (Beck & Teboulle, 2009; plugin of Icy). We demonstrated that the BFBD algorithm increases the quality of object recognition in comparison with other de-noising methods.

Key Words: Image de-noising, fluorescent microscopy.


BAYESIAN BIG BANG

Fred Daum1

(1) Raytheon ([email protected])

Abstract

We show that the flow of particles corresponding to Bayes' rule has a number of striking similarities with the big bang, including cosmic inflation and cosmic acceleration. We derive a PDE for this flow using a log-homotopy from the prior probability density to the posterior probability density. We solve this PDE using the gradient of the solution to Poisson's equation, which is computed using an exact Green's function and the standard Monte Carlo approximation of integrals. The resulting flow is analogous to Coulomb's law in electromagnetics. We have used no physics per se to derive this flow, but rather we have only used Bayes' rule, the definition of normalized probability, and a log-homotopy parameter that could be interpreted as time. By assuming that both the prior probability density and the likelihood are Gaussian, we can explicitly compute the solution of our PDE as well as the corresponding q vs. z curve, which agrees with currently available cosmological data to within experimental error. Of course, Einstein's theory of general relativity also agrees with the cosmological data, and more accurate data are required to falsify such theories. Our theory has the advantage that it actually explains the big bang, and it avoids postulating the existence of dark energy and dark matter, neither of which has ever been detected in credible terrestrial experiments. The so-called "new inflation" theories are in excellent agreement with the latest cosmic microwave background (CMB) data. On the other hand, as emphasized in [1]-[5], there are many fundamental questions that remain open, including the lack of a plausible physical explanation for inflation itself, and a discrepancy between theory and data of roughly 120 orders of magnitude in the energy density of the quantum vacuum needed to explain late cosmic acceleration. Moreover, physics has no good explanation for dark matter or dark energy or why the big bang happened at all. Such glaring issues obviously invite new ideas, both physical and mathematical. In our calculations, we ruthlessly exploit the amazing fact that the Zel'dovich approximation is exact for the Monge-Ampère gravitational model (Yann Brenier, 2010).

References:

[1] A. Riess et al. Astro. Journal 659:98-121 (2007).

[2] P. J. Peebles & B. Ratra, Reviews of Modern Physics 75, 559 (2003).

[3] B. Santos, et al. arXiv:1009.2733v1 (2013).

[4] P. Steinhardt, Scientific American (2011).

[5] S. Weinberg, Reviews of Modern Physics (1989).
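As a point of orientation for readers unfamiliar with particle flow, the sketch below illustrates the log-homotopy idea the abstract builds on, in the simplest setting the author mentions (Gaussian prior and Gaussian likelihood), where the homotopy density p(x, λ) ∝ g(x) h(x)^λ remains Gaussian for every λ. It is only an illustrative sketch under those assumptions, not the author's PDE-based flow; all variable names and values are ours.

```python
import numpy as np

# Log-homotopy from the prior g(x) = N(mu0, s0^2) to the posterior via
# p(x, lam) ∝ g(x) * h(x)**lam, where h(x) = N(z; x, r^2) is the likelihood.
# For Gaussian g and h the homotopy density stays Gaussian, so it is fully
# described by its precision and mean at each pseudo-time lam in [0, 1].
mu0, s0 = 0.0, 2.0      # prior mean and standard deviation (illustrative values)
z, r = 1.5, 0.5         # measurement and measurement-noise std (illustrative)

tau0, tauL = 1.0 / s0**2, 1.0 / r**2   # prior and likelihood precisions

for lam in np.linspace(0.0, 1.0, 6):
    tau = tau0 + lam * tauL                    # precision of p(x, lam)
    mu = (tau0 * mu0 + lam * tauL * z) / tau   # mean of p(x, lam)
    print(f"lam={lam:.1f}  mean={mu:.4f}  std={tau**-0.5:.4f}")

# At lam = 0 this is the prior; at lam = 1 it is the usual Bayesian posterior,
# toward which a particle flow transports samples as lam increases.
```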


A BRIEF LOOK AT MODELING AND LEARNING DYNAMICAL SYSTEMS

Deniz Gencaga1

(1) Carnegie Mellon University, Robotics Institute, Pittsburgh, PA 15213, USA

Abstract

Dynamical systems are mathematical ways of modeling physical phenomena via the changes in their instantaneous descriptions (states) in time. These models are used in navigation, guidance, control, robotics, computer vision, weather-climate forecasting and many other applications. In this work we briefly review how to model and learn dynamical systems. We show linear and nonlinear examples as well as stochastic and chaotic systems. First, we summarize the Kalman filter and its variants [1, 2], including the particle filters. Here, we demonstrate models where the system dynamics is assumed to be known. Secondly, we demonstrate the utilization of particle filters in a source separation application where the underlying system dynamics is partially known. Finally, we demonstrate the behavior of a complex, chaotic system and show the interactions between its variables [3] using an information-theoretic approach, namely the Transfer Entropy.

References:
[1] S. Sarkka, Bayesian Filtering and Smoothing, Cambridge Univ. Press, NY, 2013.
[2] A. Majda and J. Harlim, Filtering Complex Turbulent Systems, Cambridge Univ. Press, NY, 2012.
[3] D. Gencaga, K. H. Knuth, W. B. Rossow, Entropy 2015, 17, 438-470.

Key Words: Dynamical systems, Kalman filter, Particle filter, Modeling, Statistical learning, Chaotic systems, Information theory, Transfer entropy
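Since the talk surveys the Kalman filter and its variants, a minimal sketch of the standard linear-Gaussian Kalman recursion may help fix notation. It follows the textbook predict/update cycle (as in Särkkä [1]); the toy constant-velocity model and all parameter values are our own illustrative choices, not the speaker's examples.

```python
import numpy as np

def kalman_step(m, P, y, F, Q, H, R):
    """One predict/update cycle of the linear-Gaussian Kalman filter."""
    # Predict: propagate mean and covariance through x_k = F x_{k-1} + noise.
    m_pred = F @ m
    P_pred = F @ P @ F.T + Q
    # Update: fold in the measurement y_k = H x_k + noise.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new

# Toy constant-velocity model (illustrative parameters only).
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])    # only position is observed
R = np.array([[0.25]])

m, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)
truth = np.array([0.0, 1.0])
for k in range(10):
    truth = F @ truth
    y = H @ truth + rng.normal(0.0, 0.5, size=1)
    m, P = kalman_step(m, P, y, F, Q, H, R)
print("final state estimate:", m)
```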


Why Square Roots of Probabilities?

Kevin H. Knuth
Departments of Physics and Informatics

University at Albany (SUNY), Albany NY, USA

July 18, 2015

Abstract

Square roots of probabilities appear in several contexts, which suggests that they are somehow more fundamental than probabilities. Square roots of probabilities appear in expressions of the Fisher-Rao metric and the Hellinger-Bhattacharyya distance. They also come into play in Quantum Mechanics via the Born rule, where probabilities are found by taking the squared modulus of the quantum amplitude. Why should this be the case, and why do these square roots not arise in the various formulations of probability theory?

In this short, inconclusive exploration, I consider quantifying a logical statement with a vector defined by a set of components, each quantifying one of the atomic statements defining the hypothesis space. I show that conditional probabilities (bi-valuations), such as P(x|y), can be written as the dot product of the two vectors quantifying the logical statements x and y, each normalized with respect to the vector quantifying the conditional y. The components of the vectors are proportional to the square root of the probability. As a result, this formulation is shown to be consistent with a concept of orthogonality applied to the set of mutually exclusive atomic statements, such that the sum rule is represented as the sum of the squares of the square roots of probability.

Key Words: probability, orthogonality, information, information geometry, quantum mechanics, Fisher-Rao metric, Hellinger-Bhattacharyya distance
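A small worked example (our own, not taken from the talk) shows how the dot-product formulation described above reproduces an ordinary conditional probability. Take three mutually exclusive atomic statements a1, a2, a3 with probabilities p1, p2, p3, let x = a1 ∨ a2 and y = a1 ∨ a2 ∨ a3, and quantify each statement by a vector whose components are the square roots of the probabilities of the atomic statements it contains,

\[ v_x = (\sqrt{p_1}, \sqrt{p_2}, 0), \qquad v_y = (\sqrt{p_1}, \sqrt{p_2}, \sqrt{p_3}). \]

Normalizing the dot product with respect to the conditional y gives

\[ \frac{v_x \cdot v_y}{v_y \cdot v_y} = \frac{p_1 + p_2}{p_1 + p_2 + p_3} = P(x \mid y), \]

and the sum rule over the mutually exclusive atomic statements appears as a sum of squares of square roots of probability, \( v_y \cdot v_y = (\sqrt{p_1})^2 + (\sqrt{p_2})^2 + (\sqrt{p_3})^2 \).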


An Information Physics Derivation of Geodesic Equations from the Influence Network

James Lyons Walsh1, Kevin H. Knuth1,2

Departments of Physics1 and Informatics2

University at Albany (SUNY), Albany NY, USA

June 1, 2015

Abstract

Information physics considers physical laws to result from the consistent quantification of information one possesses about physical phenomena [1]. In previous efforts, we have shown that a simple model of particles that directly influence one another results in a partially ordered set referred to as the influence network [2]. Both particles and observers are represented by totally ordered chains of influence events. Consistent quantification of information about influence events from the perspective of a pair of coordinated observers results in a unique description of events characterized by the Minkowski metric and Lorentz transformations of special relativity [3]. That is, a particle that influences others in this influence network is described by the mathematics that describes motion through space and time.

Walsh and Knuth [4] showed that when a particle is influenced by an observer, its behavior is described by relativistic acceleration, which in the case of a constant rate of influence is consistent with the relativistic version of Newton's Second Law. Here, we extend this work to the case in which the particle is influenced by another particle, rather than an observer. We find that receipt of influence gives rise to geodesic equations of the form from general relativity in 1+1 dimensions, in which the Christoffel symbols obey a coordinate condition and are functions of the rates at which influence is received. Consistency with special relativity in the case of a small, constant rate of influence and with Newton's Second Law in the case of small rates of influence is again confirmed.

References:
[1] Knuth K.H. 2010. Information physics: The new frontier. MaxEnt 2010, AIP Proc. 1305, 3-19. arXiv:1009.5161 [math-ph].
[2] Knuth K.H. 2014. Information-based physics: an observer-centric foundation. Contemporary Physics, 55(1), 12-32. arXiv:1310.1667 [quant-ph].
[3] Knuth K.H., Bahreyni N. 2014. A potential foundation for emergent space-time. Journal of Mathematical Physics, 55, 112501. arXiv:1209.0881 [math-ph].
[4] Walsh J.L., Knuth K.H. 2015. Information-based physics, influence and forces. MaxEnt 2014, AIP Proc. 1641, pp. 538-547. arXiv:1411.2163 [quant-ph].

Key Words: acceleration, force, general relativity, geodesic equations, influence theory, information physics, motion, relativity, special relativity


Tuesday July 21, 2015 Tuesday Morning 8:30 – 9:15 Registration

9:15 – 10:15 Invited Speaker: Purushottam Dixit Maximum Path Entropy Random Walks

10:15 – 10:45 Sergio Davis Applications of the Divergence Theorem in Bayesian Inference and MaxEnt

10:45 – 11:30 Break

11:30 – 12:00 Keith Earle Parameter Estimation of Saturated Magnetic Resonance Spectra

12:00 – 12:30 Diego González Liouville's Theorem from the Principle of Maximum Caliber in Phase Space

Lunch 12:30 – 2:15

Tuesday Afternoon 2:15 – 3:15 Invited Speaker: Ariel Caticha Entropic Dynamics

3:15 – 4:00 Break

4:00 – 4:30 Barrie Stokes Equidistribution Testing with the ECT

4:30 – 5:00 Steven H. Waldrip Maximum Entropy Derivation of Quasi-Newton Methods

5:00 – 6:00 Break

Poster Session 6:00 – 9:00


Tuesday July 21, 2015 Poster Session 6:00 – 9:00

Abedi, Mohammad Entropic Dynamics on Curved Spaces

Bartolomeo, Daniel M. Entropic Dynamics: the Schrodinger Equation and its Bohmian Limit

Caticha, Nestor Sympatric Multiculturalism or How Distrust Polarizes Societies of Bayesian Agents into Groups

Gai, Anthony D. Bayesian Model Testing of Models for Ellipsoidal Variation on Stars Due to Hot Jupiters

Kim, Sunil L. Describing Simple States of Knowledge with Exact Probabilities

Mubeen, Muhammad Asim Bayesian Regularization of Diffusion Tensor Maps Using Gibbs Prior on Riemannian Manifold

Nawaz, Shahid Entropic Dynamics of Spin-Half Particles

Placek, Ben The Detection and Characterization of Exoplanets with EXONEST

Udagedara, Indika Gayani Kumari Reduced Order Modeling for Monte Carlo Simulations in Radiation Transport

Ximenez de Azevedo Neto, Silvio On Recent Information Theory Applications in Finance


MAXIMUM PATH ENTROPY RANDOM WALKS

Purushottam Dixit1, Ken Dill2

(1) Department of Systems Biology, Columbia University

(2) Laufer Center for Quantitative Biology, Department of Chemistry, and Department of Physics and Astronomy, Stony Brook University (e-mail: [email protected])

Abstract

We define a class of maximum path entropy random walks constrained to reproduce state- and path-dependent averages. We analytically derive the Markov process describing the random walk in discrete and continuous time, as well as in discrete and continuous space. The stationary distribution of the random walk has surprising connections to quantum mechanics: it is the product of the left and the right Perron-Frobenius eigenvectors of a matrix operator. Moreover, the transition rates of a continuous time random walk on a discrete free energy landscape are proportional to the square roots of the equilibrium probabilities. We illustrate our results by successfully predicting the transition rates in complex biomolecular systems.

Key Words: Path entropy, random walks
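For concreteness, the short sketch below (our illustration, not the authors' code) builds the familiar maximal-entropy random walk on a small undirected graph: the transition probabilities come from the leading Perron-Frobenius eigenvector of the adjacency matrix, and the stationary distribution is the product of the left and right eigenvectors, as the abstract describes.

```python
import numpy as np

# Adjacency matrix of a small undirected graph (illustrative).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# Leading (Perron-Frobenius) eigenvalue and eigenvector of A.
evals, evecs = np.linalg.eigh(A)        # A is symmetric, so eigh suffices
lam = evals[-1]
psi = np.abs(evecs[:, -1])              # Perron vector, taken positive

# Maximal-entropy random walk: P_ij = A_ij * psi_j / (lam * psi_i).
P = A * psi[None, :] / (lam * psi[:, None])
assert np.allclose(P.sum(axis=1), 1.0)  # rows are proper distributions

# Stationary distribution = product of left and right Perron vectors.
# For a symmetric A the two coincide, so pi_i ∝ psi_i**2.
pi = psi**2 / np.sum(psi**2)
assert np.allclose(pi @ P, pi)          # check stationarity numerically
print("stationary distribution:", pi)
```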


APPLICATIONS OF THE DIVERGENCE THEOREM IN BAYESIAN INFERENCE AND MAXENT

Sergio Davis, Gonzalo Gutierrez

Departamento de Física, Facultad de Ciencias, Universidad de Chile

Abstract

Given a probability density P(x|λ), where x represents continuous degrees of freedom, it is possible to construct a general identity relating expectations [1],

\[ \left\langle \frac{\partial \omega}{\partial x_i} \right\rangle_{\lambda} + \left\langle \omega\, \frac{\partial}{\partial x_i} \ln P(x|\lambda) \right\rangle_{\lambda} = 0, \qquad (1) \]

where ω(x, λ) is an arbitrary, differentiable vector field. In this work we explore some of the consequences of this relation, both in the context of sampling distributions and Bayesian posteriors, and how it can be used to extract some information without the need for explicit calculation of the partition function (or the Bayesian evidence, in the case of posterior expectations). Together with the general form of the fluctuation-dissipation theorem,

\[ \frac{\partial}{\partial \lambda_j} \langle w \rangle_{\lambda} = \left\langle \frac{\partial w}{\partial \lambda_j} \right\rangle_{\lambda} + \left\langle w\, \frac{\partial}{\partial \lambda_j} \ln P(x|\lambda) \right\rangle_{\lambda}, \qquad (2) \]

it constitutes a powerful tool for Bayesian/MaxEnt problems.

References:
[1] S. Davis, G. Gutierrez, Phys. Rev. E 86, 051136 (2012).

Key Words: DIVERGENCE THEOREM, INFERENCE
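As a concrete illustration of identity (1) (our own toy check, not taken from the paper), consider a one-dimensional Gaussian P(x|λ) with mean μ and variance σ² and the choice ω(x) = x. Then ⟨∂ω/∂x⟩ = 1 and ⟨ω ∂ₓ ln P⟩ = −⟨x(x−μ)⟩/σ² = −1, so the identity holds; the snippet below verifies it by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 2.0, 1.5          # illustrative parameters of P(x|lambda)
x = rng.normal(mu, sigma, size=1_000_000)

omega = x                                  # omega(x) = x
d_omega = np.ones_like(x)                  # d omega / dx = 1
dlogP = -(x - mu) / sigma**2               # d/dx ln P(x|lambda) for a Gaussian

lhs = d_omega.mean() + (omega * dlogP).mean()
print(f"<dw/dx> + <w d lnP/dx> = {lhs:.4f}  (should be ~0)")
```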


PARAMETER ESTIMATION OF SATURATED MAGNETIC RESONANCE SPECTRA

K. Earle1,2, T. Broderick1

(1) Physics Department, University at Albany (SUNY)

(2) ACERT, Cornell University, Ithaca, NY

([email protected], http://www.earlelab.rit.albany.edu)

Abstract

We have shown in previous work that magnetic resonance spectra may be profitably analyzed by using concepts derived from information geometry [1,2]. That analysis was restricted to the linear response regime, where constraints, in the form of sum rules for example, induced a geometry on the line shape function treated as a probability density function (PDF). The conditions that a line shape function must satisfy for it to be considered as a member of a family of PDFs have also been described [1]. Although careful analysis of the linear response line shape is a useful tool for quantifying important details of molecular structure and dynamics, complementary information is available from the non-linear response line shape, which is often sensitive to processes on longer time scales than the transient coherences probed in the linear regime. The non-linear response of a system satisfies constraints that do not allow for a direct interpretation in terms of a PDF, as the non-linear response may have absorptive and emissive character and thus cannot be constrained to be non-negative. Nevertheless, entropic methods and concepts from information geometry may be used to compare simulated and measured non-linear spectra in order to infer appropriate model parameter values. We demonstrate how this can be achieved for a model system described by the Zeeman interaction of a magnetic moiety with an applied magnetic field modulated by stochastic processes such as rotational diffusion. For the case of Electron Paramagnetic Resonance (EPR), we will show how the methods presented here may be extended to include hyperfine interactions which arise due to couplings of the electron magnetic moment with nearby nuclei.

References:
[1] K. Earle et al. Appl. Magn. Reson. 37, 865 (2010).
[2] K. Earle and K. Hock. Appl. Magn. Reson. 45, 859 (2014).

Key Words: Magnetic Resonance, Non-linear Response, Entropic Methods, Parameter Estimation, Stochastic Relaxation, Electron Paramagnetic Resonance


LIOUVILLE'S THEOREM FROM THE PRINCIPLE OF MAXIMUM CALIBER IN PHASE SPACE

Diego Gonzalez, Sergio Davis

Departamento de Física, Facultad de Ciencias, Universidad de Chile

Abstract

One of the cornerstones in non-equilibrium statistical mechanics (NESM) is Liouville's theorem,

\[ \frac{\partial \rho}{\partial t} + \{\rho, H\} = 0, \qquad (1) \]

a differential equation for the phase space probability ρ(x, p; t). This is usually derived [1] by considering the flow in or out of a given surface for a physical system (composed of atoms), via more or less heuristic arguments.

In this work, in the context of plausible inference over trajectories, we show that the probability ρ(x, p; t) of finding a "particle" with Lagrangian L(x, ẋ; t) at a specific point (x, p) in phase space at time t, which is of the form

\[ \rho(x_0, p_0; t_0) = \big\langle \delta(x(t_0) - x_0)\, \delta(p(t_0) - p_0) \big\rangle, \qquad (2) \]

follows the Liouville equation, with p = ∂L/∂ẋ. As this derivation depends only on principles of information theory, our result is valid not only for "physical" systems but for any model depending on constrained information about position and velocity [2], such as time series.

References:
[1] W. Greiner, L. Neise and H. Stocker, Thermodynamics and Statistical Mechanics, 2012.
[2] D. Gonzalez, S. Davis and G. Gutierrez, Found. Phys. 44, 923-931 (2014).

Key Words: MAXIMUM CALIBER, PHASE SPACE
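For completeness, the Poisson bracket appearing in Eq. (1) has the standard phase-space form (stated here for a single degree of freedom; this is textbook material rather than part of the authors' derivation),

\[ \{\rho, H\} = \frac{\partial \rho}{\partial x}\frac{\partial H}{\partial p} - \frac{\partial \rho}{\partial p}\frac{\partial H}{\partial x}, \]

so Eq. (1) expresses the conservation of ρ along the Hamiltonian flow generated by H.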


Entropic Dynamics

Ariel Caticha
Department of Physics

University at Albany (SUNY), Albany NY, USA

July 18, 2015

Abstract

I want to convey the idea that Jaynes' old MaxEnt was extremely successful at essentially static problems and that, although many applications remain to be tackled, there is great potential for new and easy developments in dynamics. I will, naturally, focus on physics (Entropic Dynamics), but developing dynamical theories for other fields (economics, ecology, social sciences) will probably follow similar ideas.


Equidistribution Testing with the ECT

Barrie Stokes1, Frank Tuyl1, Irene Hudson1

(1) University of Newcastle, NSW, Australia

([email protected])

Abstract

A central step in John Skilling's Nested Sampling algorithm is the generation of a new random sample from the (typically uniform) prior distribution, subject to the constraint that the new prior sample's likelihood is greater than a current likelihood threshold. One way to test a generation method - the "outside in" approach - is to incorporate it in a Nested Sampling algorithm and compare the resulting model estimates with known cases. Another way - the "inside out" approach - is to validate the uniformity of prior samples produced by the new method before its incorporation in a Nested Sampling system. Using the "inside out" approach, we employ E. T. Jaynes' ECT (Entropy Concentration Theorem) [1], [2] and a Bayes factor approach to test and diagnose a method of producing new likelihood-restricted prior samples. This work is being carried out exclusively in Mathematica, and some animations will be presented.

References:
[1] Jaynes, E. T. (1979) Concentration of Distributions at Entropy Maxima, in Rosenkrantz, R. D. (1983).
[2] Jaynes, E. T. (1983) Papers on Probability, Statistics, and Statistical Physics (ed. R. D. Rosenkrantz). Reidel, Dordrecht.

Key Words: nested sampling, entropy concentration, Bayes factor, equidistribution.
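A minimal "inside out"-style check in the spirit of the abstract (our Python sketch; the authors work in Mathematica) draws likelihood-restricted prior samples by plain rejection and compares the entropy of their binned histogram with the maximum ln k expected for a uniform distribution, which is where the ECT says the entropy should concentrate. The likelihood, threshold, and bin count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def likelihood(x):
    return np.exp(-0.5 * ((x - 0.5) / 0.1) ** 2)   # illustrative Gaussian likelihood

L_star = likelihood(0.3)     # current likelihood threshold (illustrative)

# Rejection sampling: uniform prior on [0, 1], keep samples with L(x) > L_star.
samples = []
while len(samples) < 50_000:
    x = rng.uniform(0.0, 1.0, size=10_000)
    samples.extend(x[likelihood(x) > L_star])
samples = np.array(samples[:50_000])

# The constrained region here is the interval (0.3, 0.7); bin it and compare
# the empirical entropy of the bin frequencies with the maximum ln(k).
k = 20
counts, _ = np.histogram(samples, bins=k, range=(0.3, 0.7))
f = counts / counts.sum()
H = -np.sum(f[f > 0] * np.log(f[f > 0]))
print(f"empirical entropy = {H:.4f},  maximum ln(k) = {np.log(k):.4f}")
```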


Maximum Entropy Derivation of Quasi-Newton Methods

S. H. Waldrip1, R. Niven1

(1) The University of New South Wales, Canberra, Australia ([email protected])

Abstract

In this work we re-derive and improve upon quasi-Newton methods commonly used to find the zeros or extrema of functions, using the maximum entropy (MaxEnt) method. Unlike Newton's method, in which the Jacobian or Hessian matrix is calculated at each iteration, quasi-Newton methods find an approximation to the Jacobian or Hessian by updating it from the previous iteration. The updates generally follow the under-determined secant equation. The methodology used here differs from previous MaxEnt quasi-Newton derivations found in the literature [1, 2, 3, 4], in that it updates the average values of the Jacobian or Hessian rather than updating a covariance matrix while keeping the mean fixed at zero. By approaching the derivation differently to previous studies, new insights were obtained into how quasi-Newton methods behave and how they can be improved. Numerical experiments demonstrate several improvements on existing quasi-Newton methods.

References

[1] R. Fletcher, "A New Variational Result for Quasi-Newton Formulae," SIAM Journal on Optimization, vol. 1, no. 1, pp. 18–21, 1991.

[2] M. Bakonyi and H. J. Woerdeman, "Maximum Entropy Elements in the Intersection of an Affine Space and the Cone of Positive Definite Matrices," SIAM Journal on Matrix Analysis and Applications, vol. 16, no. 2, pp. 369–376, 1995.

[3] T. Kanamori and A. Ohara, "A Bregman extension of quasi-Newton updates I: an information geometrical framework," Optimization Methods and Software, vol. 28, no. 1, pp. 96–123, 2013.

[4] T. Kanamori and A. Ohara, "A Bregman extension of quasi-Newton updates II: Analysis of robustness properties," Journal of Computational and Applied Mathematics, vol. 253, pp. 104–122, Dec. 2013.
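For orientation, the snippet below shows one familiar member of the quasi-Newton family the paper revisits: the classical BFGS update of a Hessian approximation, which satisfies the secant equation B_new s = y. It is only the standard textbook update, not the MaxEnt-derived variant of the paper; all values are illustrative.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Classical BFGS update of the Hessian approximation B from step s and
    gradient difference y; the result satisfies the secant equation B_new @ s = y."""
    Bs = B @ s
    return (B
            + np.outer(y, y) / (y @ s)
            - np.outer(Bs, Bs) / (s @ Bs))

# Illustrative data: a step s and the observed change in gradient y.
B = np.eye(2)
s = np.array([0.5, -0.2])
y = np.array([1.1, 0.3])

B_new = bfgs_update(B, s, y)
print("secant residual:", np.linalg.norm(B_new @ s - y))   # ~0
```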


Tuesday July 21, 2015 Poster Session 6:00 – 9:00

Abedi, Mohammad Entropic Dynamics on Curved Spaces

Bartolomeo, Daniel M. Entropic Dynamics: the Schrodinger Equation and its Bohmian Limit

Caticha, Nestor Sympatric Multiculturalism or How Distrust Polarizes Societies of Bayesian Agents into Groups

Gai, Anthony D. Bayesian Model Testing of Models for Ellipsoidal Variation on Stars Due to Hot Jupiters

Kim, Sunil L. Describing Simple States of Knowledge with Exact Probabilities

Mubeen, Muhammad Asim Bayesian Regularization of Diffusion Tensor Maps Using Gibbs Prior on Riemannian Manifold

Nawaz, Shahid Entropic Dynamics of Spin-Half Particles

Placek, Ben The Detection and Characterization of Exoplanets with EXONEST

Udagedara, Indika Gayani Kumari Reduced Order Modeling for Monte Carlo Simulations in Radiation Transport

Ximenez de Azevedo Neto, Silvio On Recent Information Theory Applications in Finance


Entropic Dynamics on Curved Spaces

S. Nawaz, M. Abedi, A. Caticha
Department of Physics, University at Albany (SUNY)

Albany, NY 12222, USA

Abstract

Entropic dynamics (ED) is a framework in which quantum theory is derived as an example of entropic inference. Entropic inference is designed to handle incomplete information in a natural way. In entropic dynamics no underlying action principle is assumed; on the contrary, an action principle and corresponding Hamiltonian dynamics are derived as a non-dissipative diffusion. In previous works the Schrodinger equation for N particles on flat space has been derived. It was shown that the particular form of the quantum potential that leads to the Schrodinger equation follows naturally from information geometry.

The objective of this paper is to extend the entropic dynamics of N particles to curved spaces. This is an important preliminary step toward an entropic dynamics of gravity. An important new feature is that the displacement of a particle does not transform like a vector. It involves second order terms that account for the effects of curvature. Using information geometry we calculate the metric M_{AB} of configuration space, which is a curved generalization of the flat space mass tensor. This metric is needed to calculate the curved space quantum potential. The final result is a modified Schrodinger equation for curved spaces that takes into account the effects of curvature,

\[ i\hbar\,\frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2}\,\Delta_M \Psi + V_{\mathrm{eff}}\,\Psi, \qquad \text{where} \quad \Delta_M = \frac{1}{\sqrt{M}}\,\partial_A\!\left(\sqrt{M}\,M^{AB}\,\partial_B\right) \]

is the Laplace-Beltrami operator and the potential V_eff includes the standard potential and possible contributions due to curvature.

Key Words: Entropic Dynamics, Quantum Theory, Riemannian Manifold, Information Geometry


Entropic Dynamics: the Schrodinger Equation and its Bohmian Limit

D. Bartolomeo1 †, A. Caticha1 ‡

(1) Physics, University at Albany – SUNY, Albany, NY 12222, USA (†[email protected]; ‡[email protected])

Abstract

Entropic Dynamics (ED) is a framework used for the construction of dynamical theories on the basis of entropic inference. In the application of ED to derive the n-particle Schrodinger equation, the physical input is introduced through constraints that are implemented using Lagrange multipliers. There is one set of n constraints, one for each particle, that controls the quantum fluctuations. The corresponding multipliers are related to the mass of the particles; in ED mass is a measure of fluctuations. In addition, there is another constraint involving a "drift" potential that correlates the motions of different particles. The drift potential eventually shows up as the phase of the wave function. The corresponding multiplier α′ controls the tendency for the system to drift along a particular direction in configuration space. In this work we uncover a new symmetry of Quantum Mechanics: we show that different values of α′ will all lead to the same Schrodinger equation; it is as if different "microscopic" or sub-quantum models lead to the same "macroscopic" or quantum behavior. In the limit of large α′ the drift prevails over the fluctuations and the particles tend to move along the smooth probability flow lines. Thus ED includes the causal or Bohmian form of quantum mechanics as a special limiting case.

Key Words: Entropic Dynamics, Quantum Theory, Maximum Entropy, Bohmian Mechanics


Sympatric multiculturalism or how distrust polarizes societies of Bayesian agents into groups

Felippe Alves and Nestor Caticha

Instituto de Física, Universidade de São Paulo, Brazil

Abstract

While social interactions tend to decrease differences in opinions, a multiplicity of groups and individual opinion differences persist in human societies. Axelrod identified homophily and social conformity seeking as basic interactions that can lead to multiculturalism in spatial scenarios in models under certain special conditions. We follow another route, where the social interaction between any two agents is given by descent along the gradient of a cost function deduced from a Bayesian learning formalism. The cost function depends on a hyperparameter that estimates the trust of one agent in the information provided by the other. If the expected value of the total cost function is relevant information, Maximum Entropy permits characterizing the state of the society. Furthermore, we introduce a dynamics on the trust parameters, which increase when agents concur and decrease otherwise. We study the resulting phase diagram in the case of a large number of interacting agents on a complete social graph, hence under sympatric conditions. Simulations show that there is evolution of assortative distrust in rich cultural environments, measured by the diversity of the set of issues under discussion. High distrust leads to antilearning, which leads to multiple groups which hold different opinions on the set of issues. We simulate conditions of political pressure and interaction that describe the House of Congress of Brazil and are able to qualitatively replicate voting patterns through four presidential cycles during the years 1994 to 2010.

Key Words: Multiculturalism, Bayesian agents, systems of opinions


Bayesian Model Testing of Models for Ellipsoidal Variation on Stars Due to Hot Jupiters

Anthony D. Gai1, Kevin H. Knuth1,2

(1) Dept. of Physics, University at Albany (SUNY), Albany, NY 12222

(2) Dept. of Informatics, University at Albany (SUNY), Albany, NY 12222

Abstract

A massive planet closely orbiting its host star creates tidal forces that distort the typically spherical stellar surface. These distortions, known as ellipsoidal variations, result in variations in the photometric flux emitted by the star, which can be detected by the Kepler Space Telescope. Currently there exist several models describing such variations and their effect on the photometric flux (Faigler and Mazeh, 2011; Kane and Gelino, 2012; and Jackson et al., 2012). By using Bayesian model testing in conjunction with the Bayesian-based exoplanet characterization software package EXONEST (Placek, Knuth, Angerhausen, 2014), the most probable representation for ellipsoidal variations was determined for the stars hosting the exoplanets KOI-13b and HAT-P-7b. Providing a more accurate model of ellipsoidal variations will result in better estimations of planetary properties.

References:

[1] Faigler, S. and Mazeh, T., arXiv:1106.2713, 14 June 2011.

[2] Kane, Stephen R., and Gelino, Dawn M., "Distinguishing between Stellar and Planetary Companions with Phase Monitoring." Monthly Notices of the Royal Astronomical Society 424.1 (2012): 779-88.

[3] B. K. Jackson, N. K. Lewis, J. W. Barnes, L. D. Deming, A. P. Showman, and J. J. Fortney, The EVIL-MC Model for Ellipsoidal Variations of Planet-Hosting Stars and Applications to the HAT-P-7 System, The Astrophysical Journal, 751:112 (13pp), 2012 June 1.

[4] Placek, B. and Knuth, Kevin H., arXiv:1310.6764v2, 15 Sep 2014.

Key Words: Bayesian, Exoplanets, Model Testing
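As a reminder of what Bayesian model testing amounts to in the simplest case, the toy below (our own, unrelated to the Kepler photometry models) compares a no-parameter model against a one-parameter model by computing each model's evidence, here by direct quadrature over the parameter, and forms the Bayes factor.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

d = 1.8          # a single observed datum (illustrative)

# Model 1: d ~ N(0, 1), no free parameters, so the evidence is just the likelihood.
Z1 = norm.pdf(d, loc=0.0, scale=1.0)

# Model 2: d ~ N(theta, 1) with prior theta ~ N(0, tau^2); evidence by quadrature.
tau = 2.0
Z2, _ = quad(lambda th: norm.pdf(d, loc=th, scale=1.0) * norm.pdf(th, loc=0.0, scale=tau),
             -20.0, 20.0)

# Analytic check: marginalizing theta gives d ~ N(0, 1 + tau^2).
Z2_exact = norm.pdf(d, loc=0.0, scale=np.sqrt(1.0 + tau**2))

print(f"Z1 = {Z1:.5f}, Z2 = {Z2:.5f} (exact {Z2_exact:.5f}), Bayes factor Z2/Z1 = {Z2/Z1:.3f}")
```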


DESCRIBING SIMPLE STATES OF KNOWLEDGE WITH EXACT PROBABILITIES

Sunil L. Kim1, Christopher D. Fiorillo1

(1) Department of Bio and Brain Engineering, KAIST, Daejeon 305-701, Korea (e-mail: [email protected])

Abstract

Our long-term goal is to understand the physical basis of knowledge and reason, particularly with respect to brain function. We believe that this requires finding probability distributions over external states conditional only on the internal state of a physical entity ("observer"). Whereas much of the recent work on probabilities has developed methods of "approximate inference" to describe more and more complex states of knowledge (for data analysis), we believe that the local knowledge present within a physical entity at one place and time is remarkably simple, even within the brain. We also believe that although the brain is specialized for gathering information and making predictions (or inferences), information and prediction are not exclusive to brains, or biology, but are inherent to any physical system.

Exact probabilities have previously been found for states that are simple but not realistic. For example, binomial distributions assume a state space that is only an approximation of reality (a coin must have more than 2 sides, and it is never certain to land following a toss). The Gaussian form has been shown to perfectly describe simple states (only a location and scale), but specific Gaussian distributions have been shown to be exact only for infinite sets of data. We have derived exact Gaussian distributions for finite sets of data without any assumptions. Here we introduce the problem and our method for solving it.

We begin with the simplest states of knowledge that we can imagine, within the context of geometry and without any notions from physics. Given only a spatial configuration of N known points, where is an unknown point? We do not present the solution here, but we have found it to be a Gaussian distribution, having circular symmetry given knowledge of only one point and the distance to a second point, and having elliptical symmetry in the case of 2 points. We believe it will be Gaussian for any configuration of N points. The Gaussian form was not a surprise, but we emphasize that our solution is exact, with each configuration being described by a unique distribution. Furthermore, we find the principle of indifference to be sufficient in solving this problem, without formal use of the method of maximum entropy.

Key Words: Bayesian, Jaynes, geometry, symmetry, inference, prediction, information, logic, reason


Bayesian Regularization of Diffusion Tensor Maps Using Gibbs Prior on Riemannian Manifold

Asim M. Mubeen1, Babak A. Ardekani1,2, John J. Sidtis1,2

(1) The Nathan S. Kline Institute for Psychiatric Research, 140 Old Orangeburg Road, Orangeburg, NY 10962

(2) Department of Psychiatry, New York State University School of Medicine, New York, NY 10016.

Abstract

Diffusion tensor imaging (DTI) is a powerful tool for investigating the brain's white matter connections. White matter connections play a vital role in the integrity and normal operation of the brain. Regularization can have a significant impact on DTI analysis. However, regularization should be performed with care, as the process can obliterate some important features in DTI and does not necessarily ensure positive definiteness of diffusion tensors. We are proposing a Bayesian regularization technique, which is a feature-preserving de-noising technique and can ensure positive definiteness of the diffusion tensors estimated from diffusion-weighted magnetic resonance data. In the proposed technique, prior information regarding the fiber structure is exploited. We used a Gibbs-type prior to regularize the diffusion tensor maps. A Riemannian manifold framework was used to compute Riemannian distances between diffusion tensors to avoid the negative eigenvalue problem [1]. The implementation of the proposed technique is presented using in-vivo DTI data. [Supported by: NIH DC007658]

References:

[1] X. Pennec, P. Fillard, and N. Ayache, International Journal of Computer Vision 66, 41 (2006).

Key Words: DTI, MRI, Regularization of DTI, Connectome
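The affine-invariant Riemannian distance between symmetric positive-definite tensors used in this setting (following Pennec et al. [1]) can be computed in a few lines; the sketch below is a generic illustration with made-up tensors, not the authors' regularization code.

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def spd_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F  (Pennec, Fillard & Ayache, 2006)."""
    A_inv_sqrt = np.linalg.inv(sqrtm(A))
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return np.linalg.norm(logm(M), ord="fro")

# Two illustrative diffusion tensors (3x3 SPD matrices).
D1 = np.diag([1.7e-3, 0.4e-3, 0.3e-3])   # anisotropic tensor
D2 = np.diag([1.0e-3, 1.0e-3, 1.0e-3])   # isotropic tensor

print("Riemannian distance:", spd_distance(D1, D2))
print("Euclidean distance:  ", np.linalg.norm(D1 - D2))
```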


Entropic Dynamics of Spin-Half Particles

Shahid Nawaz
Department of Physics, University at Albany-SUNY,

Albany, NY 12222, USA.

Abstract

Entropic dynamics is a framework in which quantum theory is driven by entropy subject to the appropriate constraints. In previous works the Schrodinger equation on flat space as well as on curved spaces has been derived. In this paper we discuss an application of entropic dynamics on curved spaces.

The goal of this paper is to develop the entropic dynamics of spin-1/2 particles. The particle is modeled as a rigid rotator interacting with an external EM field. The configuration space of such a rotator is R3 × SU(2), which is a curved space. In the limit of a point particle (moment of inertia I → 0) the states corresponding to different spins become uncoupled. The model describes the regular representation of SU(2), which includes all the irreducible representations (spin 0, 1/2, 1, 3/2, ...) including spin 1/2.

Key Words: Entropic Dynamics, Quantum Theory, Group Theory.


The Detection and Characterization of Exoplanets with EXONEST

Ben Placek1, Kevin H. Knuth2,3

(1) Department of Math, Science, and Technology, Schenectady County Community College, Schenectady NY, USA

(2) Department of Physics

(3) Department of Informatics

University at Albany (SUNY), Albany NY, USA ([email protected]; [email protected])

Abstract

The EXONEST software package has recently been developed for the purposes of the photometric detection and characterization of exoplanets. EXONEST employs a Bayesian inference engine, which allows one to simultaneously perform both parameter estimation and Bayesian model testing. The photometric model used in EXONEST incorporates four known photometric effects (Reflection, Thermal Emission, Doppler Boosting, and Ellipsoidal Variations) in addition to transits and secondary eclipses. This method of characterization has been tested on known exoplanetary systems such as Kepler-13Ab, HAT-P-7b, and Kepler-91b. A brief overview of the software and previous results from EXONEST will be presented, as well as potential applications for future space-based telescopes, the use of multi-color photometry and the prospect of using EXONEST as a stand-alone detection technique.


REDUCED ORDER MODELING FOR MONTE CARLO SIMULATIONS IN RADIATION TRANSPORT

I. Udagedara1, B. Helenbrook1, A. Luttman2

(1) Clarkson University, Potsdam, USA

(2) National Security Technologies, LLC, USA

Abstract

Monte Carlo (MC) simulations are a powerful tool for modeling radiation transport (RT), but require significant computing resources to obtain accurate results. In this work, we develop a proper orthogonal decomposition (POD) based reduced order modeling (ROM) approach to reduce the number of MC particle histories that must be simulated to obtain statistically significant results. The POD is typically done in space, but here we use it to generate orthogonal basis functions to describe the radiation energy spectrum. ROMs are generated for terrestrial radiation detection scenarios, and the potential for reducing the number of particle histories needed for a given accuracy is demonstrated. The accuracy can be further improved by selecting the optimal number of POD basis functions for the ROM. To this end, a Bayesian statistical framework for analyzing the ROM is adopted. Future work is to use Bayesian model selection with appropriate priors and optimization algorithms to determine the optimal number of basis functions and estimate the uncertainty associated with the ROM.

References:
[1] I. Udagedara et al., Reduced order modeling for accelerated Monte Carlo simulations in radiation transport, Appl. Math. Comput. (2015), http://dx.doi.org/10.1016/j.amc.2015.03.113
[2] Bishop, Christopher M. Pattern Recognition and Machine Learning. New York: Springer, 2006.

Key Words: RADIATION TRANSPORT, REDUCED ORDER MODELING, PROPER ORTHOGONAL DECOMPOSITION, BAYESIAN STATISTICS
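In the spirit of the abstract, the snippet below shows the generic proper orthogonal decomposition step on a snapshot matrix of (here synthetic) energy spectra: an SVD supplies orthogonal basis functions, and truncating to the leading modes gives the reduced-order representation. It is a generic POD illustration under assumed data, not the authors' radiation-transport code.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic snapshot matrix: each column is one "energy spectrum" built from
# two smooth underlying modes plus noise (illustrative stand-in for MC tallies).
E = np.linspace(0.0, 3.0, 200)                       # energy grid
modes = np.stack([np.exp(-E), E * np.exp(-2 * E)])   # hidden generating modes
coeffs = rng.uniform(0.5, 2.0, size=(2, 50))         # 50 snapshots
snapshots = modes.T @ coeffs + 0.01 * rng.standard_normal((200, 50))

# POD: singular value decomposition of the snapshot matrix.
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)

r = 2                                 # keep the leading r POD basis functions
basis = U[:, :r]                      # orthogonal basis for the spectra
reduced = basis.T @ snapshots         # reduced-order coordinates
reconstruction = basis @ reduced

err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(f"relative reconstruction error with {r} modes: {err:.2e}")
```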


ON RECENT INFORMATION THEORY APPLICATIONS IN FINANCE

S. X. Azevedo Neto1,2, H. H. Takada1,2, J. M. Stern2

(1) Quantitative Research, Itaú Asset Management, São Paulo, Brazil

(2) Institute of Mathematics and Statistics, University of São Paulo, Brazil

Abstract

In this paper, we review recent applications of information theory in finance, providing a historical organization of the related literature together with a brief description of the financial problems. In particular, we identify the main information theory tools that have been applied to financial problems and the results achieved. We identified some important areas of finance where the main applications of information theory evolved. The derivative pricing problem was initially addressed by [1], and extended by [2], using the maximum entropy principle (MEP) to relax the usual constraints of the traditional Black-Scholes framework. The asset allocation problem was addressed in [3], an extension of the work from [4], where MEP can be used to improve the traditional asset allocation frameworks to explicitly consider diversification. On the other hand, the Bregman divergence [5, 6] has been used as the objective function to be minimized in the non-negative matrix factorization (NNMF) procedure. In the fixed income area, [7] obtained a factor model using NNMF for the term structure of interest rates with improved fitting and interpretability in relation to previous approaches from the literature. [8] focused on the identification of common trends in stock prices using NNMF, and [9] empirically concluded that NNMF is a suitable approach compared to other techniques, proposing a new information criterion. In the market micro-structure area, the NNMF captured the intraday trading volume patterns with a better fitting and interpretability when compared to principal component analysis [10]. Finally, in the algorithmic trading area, the minimum discrimination information principle was adopted as a framework for the design of execution strategies [11]. In addition to the historical organization and description of the related financial problems, we point out future directions in the application of information theory in finance.

References:
[1] Y. Yang, Maximum entropy option pricing, Ph.D. Thesis, Florida State University, 1997.
[2] H. H. Takada and J. O. Siqueira, AIP Conf. Proc., 1073, 332 (2008).
[3] R. A. Santos and H. H. Takada, AIP Conf. Proc., 1636, 165 (2014).
[4] A. K. Bera and S. Y. Park, Optimal portfolio diversification using maximum entropy principle, Dept. Economics, University of Illinois, 2005.
[5] I. S. Dhillon and S. Sra, Generalized nonnegative matrix approximations with Bregman divergences, Advances in Neural Information Processing Systems 18, 283 (2005).
[6] L. Li, G. Lebanon and H. Park, Fast Bregman divergence NMF using Taylor expansion and coordinate descent, Proc. 18th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (2012).
[7] H. H. Takada and J. M. Stern, AIP Conf. Proc., 1641, 369 (2014).
[8] K. Drakakis, S. Rickard, R. de Fréin and A. Cichocki, Analysis of financial data using non-negative matrix factorization, Int. Math. Forum 3 (38), 1853 (2008).
[9] H. H. Takada and J. M. Stern, AIP Conf. Proc., 1641, 378 (2014).
[10] H. H. Takada and J. M. Stern, Intraday Trading Volume and Non-Negative Matrix Factorization, MaxEnt (2015).
[11] H. H. Takada, On Execution Strategies and Minimum Discrimination Information Principle, MaxEnt (2015).

Key Words: Information Theory, Entropy, Matrix Factorization, Financial Markets.


Wednesday July 22, 2015 Wednesday Morning 8:30 – 9:15 Registration

9:15 – 10:15 Invited Speaker: Renaldus Urniezius Probabilistic Convex Programming for Semi-Globally Optimal Resource

Allocation

10:15 – 10:45 Ariel Caticha Geometry from Information Geometry

10:45 – 11:30 Break

11:30 – 12:00 H. H. Takada On Execution Strategies and Minimum Discrimination Information Principle

12:00 – 12:30 H. H. Takada Intraday Trading Volume and Non-Negative Matrix Factorization

Lunch 12:30 – 2:15

Wednesday Afternoon 2:15 – 2:45 H.R.N. van Erp A Bayesian Decision Theory, PART I

2:45 – 3:15 H.R.N. van Erp A Bayesian Decision Theory, PART II

3:15 – 4:00 Break

4:00 – 4:30 R. Wesley Henderson A Simple Approach to Parallelizing Nested Sampling

4:30 – 5:00 Udo von Toussaint Investigations on Bayesian Uncertainty Quantification with Two Examples


PROBABILISTIC CONVEX PROGRAMMING FOR SEMI-GLOBALLY OPTIMAL RESOURCE ALLOCATION

R. Urniezius1,2
(1) Dept. of Automation, Kaunas University of Technology, Studentu 48, LT-51367 Kaunas, Lithuania; (2) IEEE Member

Abstract

We discuss a high-speed solution of the optimal resource allocation problem in the financial services market. Although our solution involves inequalities, prior information, likelihood and predictions, the approach can be used for any resource allocation task, including, but not limited to, Pontryagin-principle applications of control theory or even estimation tasks. The motivation for this problem originated from the national multi-bank cash operator infrastructure (a consortium of more than 30 banks) of one of the world's ten largest economies. It distributes cash remittances to more than 120 central warehouses located across the country. The prediction horizon and the allocation period were one month. The aim of the resource allocation was to distribute the cash optimally in such a way that: 1) the implementation is scalable, i.e. recalculation takes less than 60 seconds; 2) the allocation behavior follows the pattern of what banks historically wanted to remit (regional banks, big/small amounts, day-of-the-week preferences); 3) the allocation satisfies the contract obligations, i.e. the allocation proposal should not contradict the historical likelihood and should repeatedly lead back to the contract rules; 4) all banks must play a fair game and the system should suggest an optimal solution that gradually leads to it, i.e. parasitizing banks must receive more remittance requests; 5) the number of monthly remittances must be reduced compared to the historical statistics; 6) the system must be self-learning, i.e. the constraints of ignorant banks must evolve. The implementation was motivated by the idea of maximization of relative entropy (MrE) [1, 2]. The historical record for one month of 2014 showed that 1699 remittances occurred, whereas the MrE approach would have allocated just 1103 remittances for the same period (reducing logistics expenditures by 35%), in addition to which more banks would have been repeatedly asked to play a fair game.

References:
[1] Giffin, A.; Urniezius, R. Simultaneous State and Parameter Estimation Using Maximum Relative Entropy with Nonhomogenous Differential Equation Constraints. Entropy 2014, 16, 4974-4991.

[2] Giffin, A.; Urniezius, R. The Kalman Filter Revisited Using Maximum Relative Entropy. Entropy 2014, 16, 1047-1069.

Key Words: Convex Programming, Maximum relative Entropy, Optimal Control, Resource Allocation, Inequality Constraints, Likelihood.


GEOMETRY FROM INFORMATION GEOMETRY

Ariel Caticha Department of Physics, University at Albany, NY 12222, USA

[email protected] http://www.albany.edu/physics/acaticha.shtml

Abstract

Entropic dynamics is founded on the premise that the laws of physics do not directly describe nature. Instead they provide a framework for processing information about nature. This view imposes severe restrictions on physical models: The probabilities that appear in physics must be epistemic, they must reflect uncertainty; the entropies that appear in physics must be information entropies; and finally, and perhaps least expected, the geometries that pervade physics must be traced to information geometry.

In this paper we use the method of maximum entropy to construct a three-dimensional curved statistical manifold. The basic idea is that the points of the manifold are not defined with perfect resolution: when we say a particle is located at a point x the actual location y is uncertain. Thus, points in this manifold have the structure of a probability distribution, p(y|x), and the manifold is automatically endowed with an information metric.
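
For orientation, a standard result added here (not quoted from the paper): a family of distributions p(y|x) labelled by coordinates x induces the information metric
\[
g_{ij}(x) = \int dy\; p(y|x)\,\frac{\partial \ln p(y|x)}{\partial x^{i}}\,\frac{\partial \ln p(y|x)}{\partial x^{j}} ,
\]
so that the statistical distinguishability of neighbouring points supplies the distance measure on the manifold.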

The construction is straightforward except for one technical difficulty: Coordinates carry no information and therefore it is essential to maintain covariance under coordinate transformations. On the other hand, the expected value constraints required by the method of maximum entropy do not transform covariantly. (This follows trivially from $\langle f(y)\rangle \neq f(\langle y\rangle)$.) The difficulty is overcome by applying the method of maximum entropy in the flat tangent spaces and using the exponential map to induce probabilities on the manifold itself.

We find that the resultant information metric does not describe the full geometry of the statistical manifold but only its conformal geometry. Remarkably, this is precisely what is needed to model “physical” space in general relativity. We therefore propose that the statistical manifold constructed here is a promising model for physical space.


ON EXECUTION STRATEGIES AND MINIMUM DISCRIMINATION INFORMATION PRINCIPLE

H. H. Takada1,2

(1) Quantitative Research, Itau Asset Management, Sao Paulo, Brazil
(2) Institute of Mathematics and Statistics, University of Sao Paulo, Brazil

Abstract

In the mid 1990s, several large stock exchanges in the USA and Europe were already trading a considerable proportion of their volume electronically. Obviously, electronic trading has enabled the automation of trading strategies [1]. Execution strategies, the object of study of this paper, represent a subset of trading strategies. In practice, execution strategies are part of the service provided by traders and brokers to their clients to execute huge amounts of orders in several different markets according to constraints such as predetermined execution time intervals and low market impact to avoid adverse price distortions. Indeed, best execution is an important concern in trading regulation [2], and the use of automated algorithms makes the execution process easy to audit. There are several approaches to the design of execution strategies, from ad hoc procedures to optimization frameworks [3]. In this paper, we introduce the use of information theory for the design of execution strategies. We propose an execution impact cost function based on the Kullback-Leibler divergence [4] to be used in the derivation of execution strategies in general, using a quite general definition of market liquidity. Formally, we present two theorems deriving the time weighted average price (TWAP) and volume weighted average price (VWAP) strategies using the minimum discrimination information principle. Finally, we apply the developed framework to obtain some VWAP-tilted execution strategies.
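
As a schematic illustration of the style of argument invoked here (the paper's actual cost function and theorems are not reproduced): let $x_t \geq 0$ be the fraction of the order executed in time interval $t$, with $\sum_t x_t = 1$, and let $v_t$ be a reference liquidity profile with $\sum_t v_t = 1$. Minimizing the discrimination information
\[
D(x\,\|\,v) = \sum_t x_t \ln\frac{x_t}{v_t}
\]
subject only to the normalization constraint yields $x_t = v_t$; a uniform reference profile then reproduces a TWAP schedule, while the expected market volume profile reproduces a VWAP schedule.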

References:
[1] K. Kim, Electronic and Algorithmic Trading Technology: The Complete Guide, Elsevier Inc., Burlington, 2007.
[2] C. D'Hondt and J.-R. Giraud, Transaction Cost Analysis A-Z: A Step towards Best Execution in the Post-MiFID Landscape, EDHEC Risk and Asset Management Research Centre, Nice, 2008.
[3] R. Kissell and M. Glantz, Optimal Trading Strategies: Quantitative Approaches for Managing Market Impact and Trading Risk, AMACOM, Inc., New York, 2003.
[4] S. Kullback and R. A. Leibler, Ann. Math. Statist. 22 (1), 79–86 (1951).

Key Words: Information theory, Entropy, Financial markets.


INTRADAY TRADING VOLUME AND NON-NEGATIVE MATRIX FACTORIZATION

H. H. Takada1,2, J. M. Stern2

(1) Quantitative Research, Itau Asset Management, Sao Paulo, Brazil
(2) Institute of Mathematics and Statistics, University of Sao Paulo, Brazil

Abstract

In the literature, the intraday trading volume for equities has been reported to possess an intraday U-shaped pattern [1], and several modeling approaches have been proposed (e.g. a beta density function is used to fit the U-shaped pattern in [2]). On the other hand, a recent factorization technique is non-negative matrix factorization (NNMF) [3]. Observing that the data matrix $D = [d_{ij}] \in \mathbb{R}^{m \times p}_{\geq 0}$ containing the traded volume is non-negative, and given the number of factors $k$, the NNMF approach aims to find the approximation
\[
D \approx \hat{D} = F\,L, \qquad (1)
\]
where $\hat{D} = [\hat{d}_{ij}] \in \mathbb{R}^{m \times p}_{\geq 0}$, $F = [f_{ij}] \in \mathbb{R}^{m \times k}_{\geq 0}$ and $L = [l_{ij}] \in \mathbb{R}^{k \times p}_{\geq 0}$. It is important to state that the columns of $F$ are the factors and the rows of $L$ are the factor loadings. The NNMF optimization procedures minimize the approximation error between $D$ and $\hat{D}$. In a generalized way, the Bregman divergence $D_{\phi}(D\,\|\,\hat{D})$ is used as the objective function to be minimized [4], where $\phi(\cdot)$ is a strictly convex function with a continuous first derivative. In this paper, NNMF is applied for the first time to capture intraday trading volume patterns. Considering NNMF with only one factor, we identified for our selected equities the well-known U-shaped intraday trading volume pattern. The U-shaped pattern is very important for execution strategies based on the volume weighted average price (VWAP). Additionally, we also identified interpretable factors when considering NNMF with two factors. One factor represents the volume level at the start of the trading day and the other factor represents the volume level at the end of the trading day. The two factors enable the individual study of the trading volume level at the start and at the end of the trading day. As expected, our empirical results show that for a given number of factors the NNMF has a higher percentage of explained variance and a lower residual sum of squares than principal component analysis (PCA).
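
A minimal numerical sketch of Eq. (1), using the common Lee-Seung multiplicative updates for the squared-error objective rather than the generalized Bregman-divergence procedure of [4]; all sizes and data below are placeholders:

    import numpy as np

    # D: m x p matrix of non-negative traded volumes (m days, p intraday bins).
    rng = np.random.default_rng(1)
    m, p, k = 60, 48, 2                     # illustrative sizes
    D = rng.random((m, p))

    F = rng.random((m, k))                  # factors (columns of F)
    L = rng.random((k, p))                  # factor loadings (rows of L)
    eps = 1e-12                             # avoids division by zero

    for _ in range(500):
        L *= (F.T @ D) / (F.T @ F @ L + eps)
        F *= (D @ L.T) / (F @ L @ L.T + eps)

    D_hat = F @ L                           # non-negative rank-k approximation
    rss = np.sum((D - D_hat) ** 2)          # residual sum of squares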

References:
[1] P. J. Jain and G. Joh, J. Financ. Quant. Anal. 23 (3), 269–284 (1988).
[2] E. Panas, Appl. Econ. 37 (2), 191–199 (2005).
[3] D. Lee and H. Seung, Nature 401 (6755), 788–791 (1999).
[4] I. S. Dhillon and S. Sra, Adv. Neural Inf. Process. Syst. 18, 283–290 (2005).

Key Words: Information theory, Entropy, Financial markets.


A BAYESIAN DECISION THEORY, PARTS I & II

H.R.N. van Erp1, R.O. Linger1, P.H.A.J.M. van Gelder1

(1) TU Delft, Faculty of Science and Safety (h.r.n.vanerp@tudelft)

Abstract

The most primitive decision theoretical assumption is that we ought to maximize some measure of our utility probability distribution¹. It has been found that the sum of the lower and upper confidence bounds of the utility probability distributions allows us to order the positions of utility probability distributions transitively [1].

So, seeing that on a positively oriented utility axis a more-to-the-right position corresponds with the most advantageous, be it in terms of least detrimental or most profitable, state of affairs, we propose in our decision theory to maximize the sum of the lower and upper confidence bounds of the utility probability distributions³, rather than their expectation values, as was initially proposed by Bernoulli [2].

In Part I, we will show that Kahneman and Tversky's S-shaped fair probability curve for certainty bets, which was found empirically and forms the cornerstone of prospect theory [3], is predicted from first principles by the Bayesian decision theory. It will be concluded that Kahneman and Tversky, by way of their militant anti-Bayesian stance, have done a great service to the Bayesian community⁴, as this forced us to take a closer look at the implicit assumptions which gave rise to Bernoulli's original criterion of choice.

In Part II, we will revisit Jaynes' rationale-of-insurance example [4], using the adjusted criterion of choice that the sum of the lower and upper bounds, rather than the expectation values, of the utility probability distributions should be maximized⁵.

¹In the Bayesian decision theory, outcome probability distributions are the information carriers of the plausible outcomes following some action. If these outcomes admit an absolute measure scale², then, by way of the Bernoulli utility function, we may map utilities to the outcomes of our outcome probability distributions. The resulting utility probability distributions are then the information carriers of the plausible severity of the consequences following some action.

³Note that the alternative criterion of choice holds Bernoulli's original criterion of choice as a special case, when the lower and the upper confidence bounds do not exceed the range of their utility probability distributions.

⁴We consider Daniel Bernoulli, who both predated and was endorsed by Laplace, to be a proto-Bayesian.

⁵This three-page example provided us, at the beginning of our decision theoretical research, with the authoritative blueprint of how a non-trivial decision theoretical analysis should be structured. As such, we feel that the research presented here builds further upon the foundations laid by Jaynes.
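
A minimal numerical sketch of the bound-sum criterion described above, assuming Monte Carlo samples from each action's utility distribution and, as an illustrative choice, central 95% bounds; the authors' construction of the confidence bounds is given in [1] and is not reproduced here:

    import numpy as np

    def bound_sum(utility_samples, level=0.95):
        """Sum of the lower and upper confidence bounds of a utility distribution."""
        lo = np.quantile(utility_samples, (1 - level) / 2)
        hi = np.quantile(utility_samples, 1 - (1 - level) / 2)
        return lo + hi

    rng = np.random.default_rng(0)
    # Two hypothetical actions with (nearly) equal expected utility of 1:
    # one tightly concentrated, one strongly right-skewed.
    action_a = rng.normal(loc=1.0, scale=0.1, size=200_000)
    action_b = rng.lognormal(mean=-0.5, sigma=1.0, size=200_000)  # mean = exp(0) = 1

    # Bernoulli's expectation criterion cannot separate them; the bound-sum can.
    print(np.mean(action_a), np.mean(action_b))        # both close to 1
    print(bound_sum(action_a), bound_sum(action_b))    # ~2.0 vs. ~4.4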



References:
[1] H.R.N. van Erp, R.O. Linger, and P.H.A.J.M. van Gelder: FactSheet Research on the Bayesian Decision Theory, (2014).
[2] D. Bernoulli: Exposition of a New Theory on the Measurement of Risk, (1738).
[3] A. Tversky and D. Kahneman: Advances in Prospect Theory: Cumulative Representation of Uncertainty, Journal of Risk and Uncertainty, 5: 297-323, (1992).
[4] E.T. Jaynes: Probability Theory: The Logic of Science, pp. 400-402, (2003).

Key Words: Bayesian Decision Theory, Expected Utility Theory, Rationale of Insurance


A SIMPLE APPROACH TO PARALLELIZING NESTED SAMPLING

R. Wesley Henderson1, Paul M. Goggans1

(1) Department of Electrical Engineering, University of Mississippi (email: [email protected])

Abstract

Nested sampling is an algorithm for estimating Bayesian evidence. The precision of the evidence estimate is limited by the number N of so-called "live" prior samples that are maintained within each likelihood constraint. The upper bound on the statistical variance in the log-evidence estimate is 4n/N² [1], where n is the number of steps taken in the nested sampling process.

This upper bound indicates that we can improve the precision of the evidence estimate by increasing N. There is a trade-off introduced here in that the time complexity of nested sampling scales linearly with N. We propose to address this time penalty by delegating the computation to multiple workers operating in parallel. Each worker generates independent nested sampling chains using N live samples. Once each worker has finished, the resulting weighted prior samples are collected and ranked by likelihood. Weights are recomputed for this combined set of samples, and the evidence is estimated using the standard first-order quadrature formula.

The nature of the nested sampling algorithm implies that this technique, using M independent chains each with N live samples, produces an equivalent result to one chain using M × N live samples. This assertion follows the argument made in [1] and [2] to justify using values of N greater than one.

We provide an additional theoretical argument as well as an empirical result to support this assertion. In addition, we compare the proposed technique to other ideas in the literature for parallelizing nested sampling, we propose ideas for scaling this technique to large-scale computing infrastructure, and we demonstrate the performance of the technique using three illustrative examples.
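
A rough sketch of the combination step described above, assuming each worker has returned the log-likelihoods of its dead points; the deterministic prior-volume shrinkage used below is the standard first-order approximation for a run with M × N live points and is meant only to illustrate the merge, not to reproduce the authors' implementation:

    import numpy as np
    from scipy.special import logsumexp

    def merge_runs(worker_loglikes, n_live_per_worker):
        """Combine M independent nested-sampling runs (N live points each) into
        one log-evidence estimate, as if a single run had used M*N live points."""
        m = len(worker_loglikes)
        big_n = m * n_live_per_worker
        # Pool all dead points and rank them by likelihood (ascending).
        log_l = np.sort(np.concatenate(worker_loglikes))
        n_dead = log_l.size
        # Deterministic shrinkage of the prior volume: X_i = exp(-i / (M*N)).
        log_x = -np.arange(1, n_dead + 1) / big_n
        log_x_prev = np.concatenate(([0.0], log_x[:-1]))
        # First-order quadrature weights w_i = X_{i-1} - X_i (in log form).
        log_w = log_x_prev + np.log1p(-np.exp(log_x - log_x_prev))
        # log Z = logsumexp(log w_i + log L_i).
        return logsumexp(log_w + log_l)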

References:
[1] J. Skilling, AIP Conf. Proc. 1193, 277-291 (2009).
[2] J. Skilling, Bayesian Analysis 1, 833-859 (2006).

Key Words: MCMC, Nested sampling, Parallel computing


Investigations on Bayesian uncertainty quantification with two examples

R. Preuss and U. von Toussaint
Max-Planck-Institut für Plasmaphysik,

85748 Garching, Germany

June 25, 2015

Abstract

Input quantities for the numerical simulation of fusion plasmas involve field quantities which are hampered by noise. In order to compare data from experiment to model results, or to have an estimation of the fluctuation margin of a model prediction, a quantification of the uncertainties is necessary. Within a Galerkin framework we employ a spectral expansion to represent the random process responsible for the noise. Since Gaussian distributed noise is assumed, Hermite polynomials are the appropriate choice for the orthonormal basis system. The coefficients are derived from collocation points in a nonintrusive approach. An instructive example of absorption in media serves for the validation of the procedure. Finally the method is applied to the Vlasov-Poisson model describing electrostatic plasmas, which will be influenced by an uncertain external field.
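
A toy nonintrusive collocation sketch in the spirit of the absorption example (the model form, numbers, and variable names are illustrative assumptions, not the authors' setup):

    import math
    import numpy as np
    from numpy.polynomial import hermite_e as He

    def model(xi, mu0=1.0, sigma=0.2, depth=2.0):
        """Toy absorption model: transmitted fraction for Gaussian-uncertain mu."""
        return np.exp(-(mu0 + sigma * xi) * depth)

    order = 6                                  # highest Hermite degree retained
    nodes, weights = He.hermegauss(order + 1)  # Gauss-Hermite collocation points
    weights /= np.sqrt(2.0 * np.pi)            # normalize to the Gaussian measure

    # Nonintrusive projection: c_n = E[model(xi) He_n(xi)] / n!
    coeffs = np.empty(order + 1)
    for n in range(order + 1):
        e_n = np.zeros(n + 1)
        e_n[n] = 1.0                           # coefficient vector selecting He_n
        coeffs[n] = np.sum(weights * model(nodes) * He.hermeval(nodes, e_n)) \
                    / math.factorial(n)

    mean = coeffs[0]                           # spectral estimate of E[model]
    var = sum(math.factorial(n) * coeffs[n] ** 2 for n in range(1, order + 1))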

Key Words: Uncertainty quantification, collocation, nonintrusive


Thursday July 23, 2015 Thursday Morning 8:30 – 9:15 Registration

9:15 – 10:15 Invited Speaker: Robert Niven Maximum Entropy Analysis of Flow Networks with Graphical Constraints

10:15 – 10:45 Robert Niven Bayesian Cyclic Networks, Mutual Information and Reduced-Order Bayesian Inference

10:45 – 11:30 Break

11:30 – 12:00 Ali Kahirdeh Acoustic Emission Entropy as a Measure of Damage in Materials

12:00 – 12:30 Luke K. Rumbaugh Blind Signal Separation for Underwater LIDAR Applications

Lunch 12:30 – 2:15

Thursday Afternoon 2:15 – 2:45 J. Landes Objective Bayesian Nets for Consistent Datasets

2:45 – 3:15 Selman Ipek Relational Entropic Dynamics of Many Particles

3:15 – 5:00 Break

Excursion/Banquet 5:00 – 9:00


Maximum Entropy Analysis of Flow Networks with Graphical Constraints

R.K. Niven1,4∗, M. Abel2, S.H. Waldrip1, M. Schlegel3, B.R. Noack4

(1) SEIT, The University of New South Wales at Canberra, Australia

(2) Ambrosys GmbH / University of Potsdam, Germany

(3) Technische Universität Berlin, Germany (4) Institute PPrime, Poitiers, France

(*[email protected])

Abstract

The concept of a “flow network” – a set of nodes connected by flow paths – unites many different disciplines, including electrical, communications, pipe flow, fluid flow, ground and air transportation, chemical reaction, ecological, epidemiological and human systems. Traditionally, flow networks have been analysed by conservation (Kirchhoff’s) laws and (in some systems) by network mappings (e.g. Tellegen’s theorem), and more recently by dynamical simulation and optimisation methods. A less well explored approach, however, is the use of Jaynes’ maximum entropy (MaxEnt) method [1-2], in which an entropy – defined over the total uncertainty in the network – is maximised subject to constraints, to infer the stationary state of the network. MaxEnt methods have also been applied to the analysis of network structures (graph ensembles) subject to various configurational constraints [3], but mostly without consideration of flows or potentials on the network.

We present a generalised MaxEnt method to infer the stationary state of a flow network, subject to “observable” constraints on expectations of various parameters, “physical” constraints such as conservation laws and frictional properties, and “graphical” constraints arising from uncertainty in the network structure itself. The method invokes an entropy defined over all uncertainties within the system, which necessarily must include all stochastic parameters (random variables). The analysis also requires an extended form of the “Jaynes relations” (first and second derivatives, susceptibilities and Maxwell relations [1-2]), for systems with nonlinear constraints. The method is also demonstrated by application to several example systems, including: (i) a 1140-pipe urban water distribution network in Torrens, Australian Capital Territory, subject to nonlinear frictional constraints [4]; (ii) a 327-node urban electrical power distribution system in Campbell, Australian Capital Territory, including distributed power sources; and (iii) an electrical flow network with uncertainties in its network structure.
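
For orientation, the generic Jaynes MaxEnt inference invoked here has the standard form [1-2] (the network-specific entropy and constraint sets are developed in the paper itself):
\[
\max_{p}\ \Big\{ -\sum_i p_i \ln\frac{p_i}{q_i} \Big\}
\quad\text{subject to}\quad
\sum_i p_i = 1,\qquad \sum_i p_i f_k(i) = \langle f_k\rangle ,
\]
with the stationary solution $p_i \propto q_i \exp\big(-\sum_k \lambda_k f_k(i)\big)$, the Lagrange multipliers $\lambda_k$ being fixed by the constraints.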

References:
[1] E.T. Jaynes, Information theory and statistical mechanics, Phys. Rev., 106, 620-630 (1957).
[2] E.T. Jaynes (G.L. Bretthorst, ed.), Probability Theory: The Logic of Science, Cambridge U.P., Cambridge (2003).
[3] J. Park & M.E.J. Newman, Phys. Rev. E 70, 066117 (2004).
[4] S.H. Waldrip, R.K. Niven, M. Abel, M. Schlegel, Maximum entropy analysis of hydraulic pipe flow networks, in submission to J. Hydraulic Eng. (2015).


Bayesian Cyclic Networks, Mutual Information and Reduced-Order Bayesian Inference

R.K. Niven1,2∗, B.R. Noack2, E. Kaiser2,3, L. Cattafesta3, L. Cordier2, M. Abel4

(1) SEIT, The University of New South Wales at Canberra, Australia

(2) Institute PPrime, Poitiers, France (3) Florida State University, USA

(4) Ambrosys GmbH / University of Potsdam, Germany

(*[email protected])

Abstract

A branch of Bayesian inference involves the analysis of so-called “Bayesian networks”, defined as directed acyclic networks composed of probabilistic connections [e.g. 1-2]. We extend this class of networks to consider cyclic Bayesian networks, which incorporate every pair of inverse conditional probabilities or probability density functions, thereby enabling the application of Bayesian updating around the network. Several examples are illustrated below. The networks are assumed Markovian, although this assumption can be relaxed when necessary. The analysis of probabilistic cycles reveals a deep connection to the mutual information between pairs of variables on the network. Analysis of a four-parameter network – of the form of a commutative diagram – is shown to enable the development of a new branch of Bayesian inference using a reduced-order model (coarse-graining) framework.
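
For reference (a standard definition added here for orientation, not quoted from the paper), the mutual information between two variables on the network is
\[
I(\Omega_i;\Omega_j) = \sum_{i,j} p(i,j)\,\ln\frac{p(i,j)}{p(i)\,p(j)}
= \sum_{i,j} p(i)\,p(j|i)\,\ln\frac{p(j|i)}{p(j)} ,
\]
which is symmetric under interchange of the two updating directions in the binary cycle of Figure 1(a).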

[Figure 1: Examples of (a) binary and (b) quaternary cyclic Bayesian networks. Nodes Ωi, Ωj (and, in (b), Ωk, Ωℓ) carry the marginal probabilities p(i), p(j), ..., and each directed edge carries one of the paired conditional probabilities p(j|i), p(i|j), etc.]

References:
[1] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann, San Francisco, USA, 1988.
[2] C.M. Bishop, Pattern Recognition and Machine Learning, Springer, NY, 2006.


Acoustic Emission Entropy as a Measure of Damage in Materials

Ali Kahirdeh1, Christine Sauerbrunn1, Mohammad Modarres1

(1) Department of Mechanical Engineering, University of Maryland, College Park, MD, USA
(Email: [email protected], [email protected] and [email protected])

Abstract

In this paper, we propose the information entropy of acoustic emission (AE) signatures as an index for assessment of degradation in materials. This is a new method for health monitoring of materials and structures that relies on the estimation of the information entropy from the AE signals measured, for example, from cyclic fatigue loading of the materials. Different ways of calculating the information entropy from AE signals will be explored and discussed in this paper. It is the aim of this study to investigate the behavior of the maximum entropy ($S_{\max}$) and the accumulated information entropy ($S_f$) at the time of the materials' failure over a series of fatigue tests on a titanium alloy, a material commonly used in the structure of certain aircraft. We also present the potential relation between the information entropy and thermodynamic entropy. We postulate that such a relation, if realized, can reveal a path from the underlying physics of fatigue failure to the observable features and symptoms of damage in materials such as acoustic emission signals.

Structures and materials under fatigue cyclic loading gradually degrade as a result of the formation of multiple damage mechanisms. Such damage mechanisms induce disorder in the materials' microstructure. The types of disorder are different depending on the nature of the materials' structure. For instance, in metals the disorder is manifested in the forms of dislocation movements, slip formation, and void nucleation and growth. Combinations of these microstructural changes are also possible and eventually lead to micro-fracture events and micro-crack formation.

As disorder is induced in the material as a result of damage, such microstructural changes are traceable in the variations of the generated acoustic emissions. Such variations affect the probability distribution of the waveforms or their extracted features and can be quantified by uncertainty metrics such as information (Shannon) entropy. The key step in the analysis is how to define the probability distribution of the AE waveforms or their extracted features. Three viable methods will be presented in this paper. This paper reports fatigue experiments in which the resulting AE waves and features were measured. The paper discusses various approaches to describe the maximum information entropy corresponding to the detected AE, and shows that failure occurs at the same maximum entropy value regardless of the trajectory of the fatigue damage growth and the loading conditions that caused the fatigue. Such behavior has been reported for the thermodynamic entropy. That is, the failure of the material always occurs at an equivalent level of information entropy and thermodynamic entropy.
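
A minimal sketch of one such information-entropy calculation, assuming the AE hits have already been reduced to a scalar feature (e.g. hit amplitude) and using a simple histogram estimate of its distribution; the three estimation methods compared in the paper are not reproduced here:

    import numpy as np

    def shannon_entropy(feature_values, bins=50):
        """Shannon entropy (in nats) of a histogram estimate of the distribution
        of an acoustic-emission feature such as hit amplitude or energy."""
        counts, _ = np.histogram(feature_values, bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]                       # drop empty bins to avoid log(0)
        return -np.sum(p * np.log(p))

    # Illustrative use: track the entropy over successive fatigue-loading blocks.
    rng = np.random.default_rng(0)
    blocks = [rng.weibull(1.5, size=500) * (1.0 + 0.1 * b) for b in range(10)]
    entropy_trace = [shannon_entropy(a) for a in blocks]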

References:
[1] C. E. Shannon, "A mathematical theory of communication," ACM SIGMOBILE Mobile Computing and Communications Review 5 (1), 3-55 (2001).

Key Words: Information Entropy, Acoustic emission, Degradation


BLIND SIGNAL SEPARATION FOR UNDERWATER LIDAR APPLICATIONS

David W. Illig, Luke K. Rumbaugh, Mahesh K. Banavar, William D. Jemison

Department of Electrical and Computer Engineering, Clarkson University ([email protected])

Abstract

We demonstrate the use of blind signal separation (BSS) to improve the signal-to-interference ratio (SIR) in underwater light detection and ranging (lidar) applications. Lidar systems are used for high-resolution ranging and imaging underwater. Absorption and scattering of light pose significant challenges in the underwater optical channel; this work focuses on reducing the undesired “clutter” signal created by backscatter. We use a hybrid lidar-radar approach, in which a radar-like intensity modulation is applied to the laser. The backscatter’s frequency response decreases at 20 dB/decade above 100 MHz, motivating the use of high carrier frequencies. We apply the principal component analysis form of BSS to multiple snapshots of underwater lidar returns: backscatter is homogeneous and appears in the first subspace; if the distance between snapshots is sufficiently small, the target does not have the correlation energy to be in the first subspace and thus appears in the second subspace. The figure shows preliminary simulation results: the left shows a typical lidar return before and after applying BSS; the right shows gain as a function of carrier frequency for 50 MHz linear chirps, showing the improvement after applying BSS and the expected ~20 dB/decade slope. This result has important implications on the design of lidar transmitters, particularly the choice of carrier frequency.
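
A rough sketch of the subspace idea described above, using generic PCA on stacked range-profile snapshots; the simulated return and all parameter values are placeholders, not the authors' lidar simulation:

    import numpy as np

    rng = np.random.default_rng(0)
    n_snapshots, n_range_bins = 8, 1024

    # Backscatter clutter is common to every snapshot; a weak target is not.
    clutter = np.exp(-np.linspace(0.0, 5.0, n_range_bins))
    snapshots = np.vstack([clutter + 0.02 * rng.standard_normal(n_range_bins)
                           for _ in range(n_snapshots)])
    snapshots[3, 600:610] += 0.05            # weak target present in one snapshot

    # PCA of the snapshot matrix via the SVD.
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)

    # The first principal component captures the (homogeneous) backscatter;
    # removing it leaves the target energy in the remaining subspace.
    clutter_estimate = np.outer(U[:, 0] * s[0], Vt[0])
    cleaned = snapshots - clutter_estimate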

Key Words: blind signal separation, principal component analysis, clutter suppression, underwater lidar, optical scattering


OBJECTIVE BAYESIAN NETS FOR CONSISTENT DATASETS

J. Landes1, J. Williamson1

(1) Philosophy, SECL, University of Kent, Canterbury, UK

Abstract

Problem & Results. This paper seeks tractable solutions to the following problem: given large mutually consistent datasets, each measuring a subset of variables in the domain, identify the maximum entropy probability function, P†, which agrees with the marginal observed frequencies in these datasets. Two classes of tractable problems are identified and solved.

Novelty. To the best of our knowledge, this is the first study to present computationally tractable algorithms computing P† which are tailored to this problem. Furthermore, we discover a non-trivial class of problems for which we obtain P† in a computationally simple manner, i.e., without solving an optimisation problem.

Methodology. To do this we appeal to the Bayesian net representation of probability. This is helpful in two respects. 1) Bayesian nets can efficiently represent probability distributions. 2) Efficient algorithms can learn a Bayesian net from a dataset which represents the observed frequencies; see for example [1]. We seek a Bayesian net which represents P†, called an objective Bayesian net.

Tractable Class 1. A collection of h ≥ 2 consistent datasets is centred if and only if there exists a dataset DSm such that every variable which is measured in more than one dataset is also measured in DSm. In particular, every collection of two consistent datasets is centred. We show that the problem of computing an objective Bayesian net for a collection of centred datasets reduces to learning Bayesian nets for each dataset separately and is hence tractable.

Tractable Class 2. First, one efficiently constructs a directed acyclic graph G of an objective Bayesian net on all variables based on Bayesian nets which were learned for each dataset individually. We show that if G is of a general form (which includes the graph depicted below), then the problem of computing the conditional probabilities of an objective Bayesian net is tractable. Furthermore, if all variables are binary, then an objective Bayesian net can be found without solving an optimisation problem.

References:[1] Neapolitan, R.E.: Learning Bayesian Networks. Pearson (2003).


Relational Entropic Dynamics of Many Particles

Selman Ipek, Ariel Caticha
SUNY Albany

([email protected], [email protected])

Abstract

Within the Entropic Dynamics (ED) framework quantum theory is derived using the tools of information theory: probability theory and entropic inference. In previous work the framework has been used to derive the Schrödinger equation for many particles and for scalar fields. This paper presents a first step towards a comprehensive method of tackling gauge theories in the entropic dynamics framework. The method is inspired by the relational classical dynamics developed by J. Barbour and coworkers as a framework designed to implement Mach's Principles and gauge symmetries. It makes use of the technique of "best matching", which removes the physically irrelevant quantities induced by redundancy of description, which is what leads to gauge symmetries. The relational quantum ED developed here differs from the standard classical relational dynamics in an important way. In the classical version the best-matched states are the particle configurations at successive instants of time. In the ED approach the best-matched states are specified by the probability distribution and the drift potential. The gauge symmetries we consider are the global translations and rotations, and the resulting constraints are that the expected values of total momentum and total angular momentum vanish.

Key Words: Information Theory, Entropic Dynamics, Relational Dynamics, GaugeTheories


Friday July 24, 2015

Friday Morning

8:30 – 9:15 Registration

9:15 – 10:15 Humberto Loguercio Functional Identities in Superstatistics

10:15 – 10:45 Sima Sharifirad Improved Gini Index and Enhanced SMOTE by Maximum Entropy

Lunch 12:30 – 2:15

Business Meeting 2:15 – 3:15


FUNCTIONAL IDENTITIES IN SUPERSTATISTICS

Humberto Loguercio, Diego Gonzalez, Sergio Davis

Departamento de Física, Facultad de Ciencias, Universidad de Chile

Abstract

Superstatistics [1] is a generalization of Boltzmann-Gibbs statistics where the model $P(\vec{x}\,|\,\rho)$ is given by
\[
P(\vec{x}\,|\,\rho) = \int_0^{\infty} d\beta\, \rho(\beta)\, \frac{\exp(-\beta H(\vec{x}))}{Z(\beta)}. \qquad (1)
\]

These kinds of models allow uncertainties in the inverse temperature $\beta = 1/k_B T$ and are being increasingly used to describe non-equilibrium and non-extensive systems. They can be seen as "imprecise MaxEnt models" where the Lagrange multiplier is marginalized over.

In this formalism, knowledge of the value of the parameter β is replaced by knowledge of the function ρ(β), and then it makes sense to apply the tools of functional calculus to derive new identities.

In this work we make use of the fluctuation–dissipation theorem and a recently derived identity [2] to obtain new relations connecting expectations in the Boltzmann-Gibbs model with the corresponding expectations in Superstatistics. Additionally, a formal method is presented for recovering ρ(β) given $P(\vec{x}\,|\,\rho)$ using MaxEnt.
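
As a standard orienting example from the superstatistics literature (not taken from this abstract): if one considers the unnormalized Boltzmann factor $e^{-\beta H(\vec{x})}$ and chooses $\rho(\beta)$ to be a Gamma density, the marginalization over $\beta$ can be carried out in closed form,
\[
\int_0^{\infty} d\beta\, \frac{\beta^{c-1} e^{-\beta/b}}{\Gamma(c)\, b^{c}}\, e^{-\beta H(\vec{x})}
= \big(1 + b\, H(\vec{x})\big)^{-c},
\]
a Tsallis q-exponential with $q = 1 + 1/c$, illustrating how marginalizing over the Lagrange multiplier generates non-extensive statistics [1].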

References:
[1] C. Beck and E. G. D. Cohen, Physica A 322, 267-275 (2003).
[2] S. Davis and G. Gutierrez, Phys. Rev. E 86, 051136 (2012).

Key Words: SUPERSTATISTICS


Improved Gini Index and Enhanced SMOTE by Maximum Entropy

Sima Sharifirad, Mehdi Ghatee

Faculty of Mathematics and Computer Science, Amirkabir University of Technology, Tehran, Iran

Classification of data becomes difficult, specifically when the data are imbalanced. SMOTE is one of the pre-processing and oversampling solutions for imbalanced data. In this article, an improved Gini index is used for feature selection and maximum entropy is used to enhance SMOTE. Results are compared with different feature selection methods and other pre-processing methods. Initially, this method is tested as a case study on the Tehran-Bazargan highway dataset with an IR equal to 36; the results are then compared, using 1NN and C4.5, with previous methods, and after that on 10 datasets from the KEEL repository. The results prove the superiority of our method over previous methods.
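
For orientation, a minimal sketch of the standard SMOTE interpolation that the article builds on (the maximum-entropy enhancement itself is not reproduced here, and all names and numbers are illustrative):

    import numpy as np

    def smote(minority, n_synthetic, k=5, rng=None):
        """Standard SMOTE: create synthetic minority samples by interpolating
        between a minority sample and one of its k nearest minority neighbours."""
        rng = rng or np.random.default_rng()
        synthetic = []
        for _ in range(n_synthetic):
            i = rng.integers(len(minority))
            x = minority[i]
            # k nearest neighbours of x within the minority class (excluding x).
            d = np.linalg.norm(minority - x, axis=1)
            neighbours = np.argsort(d)[1:k + 1]
            x_nn = minority[rng.choice(neighbours)]
            synthetic.append(x + rng.random() * (x_nn - x))  # point on the segment
        return np.array(synthetic)

    # Illustrative use with an imbalance ratio (IR) of 36: 720 majority vs 20 minority.
    rng = np.random.default_rng(0)
    minority = rng.normal(size=(20, 4))
    new_samples = smote(minority, n_synthetic=100, rng=rng)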

Key Words: imbalanced data, imbalanced ratio (IR), synthetic minority oversampling technique (SMOTE).


MaxEnt 2015 Program Layout

19 20 21 22 23 24

SUNDAY MONDAY TUESDAY WEDNESDAY THURSDAY FRIDAY

8:30

8:45 Registration Registration Registration Registration Registration Registration

9:00

9:15 Welcome Invited Invited Invited

9:30 Tutorial 1_1 Speaker Speaker Speaker Speaker Loguercio

9:45 A. Giffin Invited Dixit

10:00 Speaker Urniezius Niven Sharifirad

10:15 Break N. Caticha

10:30 Tutorial 1_2 Davis A. Caticha Niven (x)

10:45 A. Giffin setup time setup time setup time setup time setup time

11:00

11:15 Break Break Break Break Break

11:30 Break

11:45 Tutorial 2_1 Kalaidzidis Earle Takada Kahirdeh (x)

12:00 K. H. Knuth

12:15 Daum González Takada Rumbaugh (x)

12:30 Break setup time setup time setup time setup time setup time

12:45 Tutorial 2_2

1:00 K. H. Knuth

1:15 Lunch Lunch Lunch Lunch Lunch

1:30

1:45

2:00 Lunch

2:15 Invited Invited

2:30 Tutorial 3_1 Speaker Speaker van Erp Landes Business

2:45 U. v. Toussaint Gencaga A. Caticha Meeting

3:00 van Erp Ipek

3:15 Break setup time setup time setup time setup time

3:30 Tutorial 3_2

3:45 U. v. Toussaint Break Break Break Break

4:00

4:15 Knuth Stokes Henderson Break

4:30 Break

4:45 Walsh Waldrip von Toussaint

5:00

5:15 Break

5:30

5:45

6:00 Reception

--- Poster Excursion

--- Session I Banquet

---

---

9:00

10:00