# 7. Exercise sheet

Some general remarks about the exercises:

• For your convenience, functions from the lecture are included below. Feel free to reuse them without copying them into the exercise solution boxes.

• For each part of the exercise a solution box has been added, but you may insert additional boxes. Do not hesitate to add Markdown boxes for textual or LaTeX answers (via Cell > Cell Type > Markdown). But make sure to replace any part that says YOUR CODE HERE or YOUR ANSWER HERE and remove the raise NotImplementedError().

• Please make your code readable by humans (and not just by the Python interpreter): choose informative function and variable names and use consistent formatting. Feel free to consult the PEP 8 Style Guide for Python for the widely adopted coding conventions.

• Make sure that the full notebook runs without errors before submitting your work. You can do this by selecting Kernel > Restart & Run All in the Jupyter menu.

• For some exercises test cases have been provided in a separate cell in the form of assert statements. When run, a successful test will give no output, whereas a failed test will display an error message.

• Each sheet has 100 points worth of exercises. Note that only the grades of sheets 2, 4, 6 & 8 count towards the course examination. Submitting sheets 1, 3, 5, 7 & 9 is voluntary and their grades are just for feedback.

NAME = ""
NAMES_OF_COLLABORATORS = ""



Code from the lectures:

import numpy as np
rng = np.random.default_rng()
import matplotlib.pylab as plt
%matplotlib inline

def potential_v(x,lamb):
    '''Compute the potential function V(x).'''
    return lamb*(x*x-1)*(x*x-1)+x*x

def neighbor_sum(phi,s):
    '''Compute the sum of the state phi on all 8 neighbors of the site s.'''
    w = len(phi)
    return (phi[(s[0]+1)%w,s[1],s[2],s[3]] + phi[(s[0]-1)%w,s[1],s[2],s[3]] +
            phi[s[0],(s[1]+1)%w,s[2],s[3]] + phi[s[0],(s[1]-1)%w,s[2],s[3]] +
            phi[s[0],s[1],(s[2]+1)%w,s[3]] + phi[s[0],s[1],(s[2]-1)%w,s[3]] +
            phi[s[0],s[1],s[2],(s[3]+1)%w] + phi[s[0],s[1],s[2],(s[3]-1)%w])

def scalar_action_diff(phi,site,newphi,lamb,kappa):
    '''Compute the change in the action when phi[site] is changed to newphi.'''
    return (2 * kappa * neighbor_sum(phi,site) * (phi[site] - newphi) +
            potential_v(newphi,lamb) - potential_v(phi[site],lamb))

def scalar_MH_step(phi,lamb,kappa,delta):
    '''Perform Metropolis-Hastings update on state phi with range delta.'''
    site = tuple(rng.integers(0,len(phi),4))
    newphi = phi[site] + rng.uniform(-delta,delta)
    deltaS = scalar_action_diff(phi,site,newphi,lamb,kappa)
    if deltaS < 0 or rng.uniform() < np.exp(-deltaS):
        phi[site] = newphi
        return True
    return False

def run_scalar_MH(phi,lamb,kappa,delta,n):
    '''Perform n Metropolis-Hastings updates on state phi and return the number of accepted transitions.'''
    total_accept = 0
    for _ in range(n):
        total_accept += scalar_MH_step(phi,lamb,kappa,delta)
    return total_accept

def batch_estimate(data,observable,k):
    '''Divide data into k batches and apply the function observable to each.
    Returns the mean and standard error.'''
    batches = np.reshape(data,(k,-1))
    values = np.apply_along_axis(observable, 1, batches)
    return np.mean(values), np.std(values)/np.sqrt(k-1)
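To illustrate how `batch_estimate` is meant to be used, here is a quick self-contained check on synthetic independent data (the numbers are purely illustrative, and the function is repeated so the snippet runs on its own):

```python
import numpy as np

rng = np.random.default_rng(12)

def batch_estimate(data, observable, k):
    '''Divide data into k batches and apply the function observable to each.
    Returns the mean and standard error.'''
    batches = np.reshape(data, (k, -1))
    values = np.apply_along_axis(observable, 1, batches)
    return np.mean(values), np.std(values) / np.sqrt(k - 1)

# 1600 independent samples with true mean 2.0 and standard deviation 0.5
data = rng.normal(2.0, 0.5, 1600)
mean, err = batch_estimate(data, np.mean, 20)
# err should be of the order 0.5 / sqrt(1600) = 0.0125
```

For correlated Monte Carlo data the batches must be long compared to the autocorrelation time for the error estimate to be reliable.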


## 7.1. Lattice scalar field & heatbath algorithm

(100 points)

Goal: Implement and test the heatbath algorithm for the lattice scalar field simulation.

As in the lecture we consider the scalar field $$\varphi : \mathbb{Z}_w^4 \to \mathbb{R}$$ on the $$w^4$$ lattice with periodic boundary conditions (here we use the notation $$\mathbb{Z}_w = \{0,1,\ldots,w-1\}$$ for the integers modulo $$w$$) and lattice action

$S_L[\varphi] = \sum_{x\in\Lambda}\left[ V(\varphi(x)) - 2\kappa \sum_{\hat{\mu}}\varphi(x)\varphi(x+\hat{\mu})\right],\qquad V(\varphi) = \lambda (\varphi^2-1)^2+\varphi^2,$

where the second sum is over the 4 unit vectors $$\hat{\mu} = (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)$$.

In the lecture notes the Metropolis-Hastings algorithm for local updates to $$\varphi$$ has been implemented to sample from the desired distribution $$\pi(\varphi) = \frac{1}{Z} e^{-S_L[\varphi]}$$. As in the case of the Ising model, this local updating suffers from critical slowing down close to the symmetry-breaking transition, so one should always investigate alternative Monte Carlo updates. One of these is the heatbath algorithm, which we have already discussed in the setting of the harmonic oscillator. It entails selecting a uniform site $$x_0\in \mathbb{Z}_w^4$$ and sampling a new random value $$Y$$ for $$\varphi(x_0)$$ with density $$f_Y(y)\propto\pi(\varphi|_{\varphi(x_0)=y})$$ on $$\mathbb{R}$$. To sample such a $$y$$ we will make use of acceptance-rejection sampling (recall the lecture of week 3).
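As a refresher on acceptance-rejection sampling from week 3, here is a minimal sketch for a generic quartic target density $$\propto e^{-(y^2-1)^2}$$ with a standard normal proposal; the target and constants are illustrative only and are not the ones needed in this exercise:

```python
import numpy as np

rng = np.random.default_rng()

def sample_quartic():
    '''Sample from the (unnormalized) density exp(-(y^2-1)^2) on R by
    acceptance-rejection with a standard normal proposal.'''
    # the log-ratio of target to proposal density is, up to a constant,
    # g(y) = -(y^2-1)^2 + y^2/2; in terms of t = y^2 it is maximized at
    # t = 5/4, where g = 9/16, so exp(g(y) - 9/16) <= 1 is a valid
    # acceptance probability
    log_bound = 9 / 16
    while True:
        y = rng.normal()
        if rng.uniform() < np.exp(-(y*y - 1)**2 + y*y/2 - log_bound):
            return y
```

The same structure (Gaussian proposal, acceptance probability bounded by the maximum of the density ratio) applies to the density $$f_Y$$ of this exercise.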

(a) Show with a calculation, to be included below in markdown and LaTeX, that for any choice of $$c>0$$ we have

$f_Y(y)\propto \exp\left( - c (y - s/c)^2 - \lambda (y^2 - v)^2 \right),$

where $$s$$ and $$v$$ are given by

$s := \kappa \sum_{\hat{\mu}} (\varphi(x_0+\hat{\mu}) + \varphi(x_0-\hat{\mu})), \qquad v := 1 + \frac{c-1}{2\lambda}.$

(15 pts)

(b) Implement acceptance/rejection sampling of the random variable $$Y$$ with pdf $$f_Y(y)\propto\exp\left( - c (y - s/c)^2 - \lambda (y^2 - v)^2 \right)$$. You may treat $$c, s, \lambda$$ as given parameters for this. Test your sampling by producing a histogram for $$\lambda = 3$$, $$s = 0.3$$ and three different values for $$c$$ (e.g. 0.5, 1.0, 2.0). Hint: use proposal distribution $$f(y) = \sqrt{\frac{c}{\pi}}\exp\left( - c (y - s/c)^2\right)$$ and appropriate acceptance probability $$A(y)$$. (25 pts)

def sample_Y(s,lamb,c):
    '''Sample Y with density f_Y(y).'''
    raise NotImplementedError()

from nose.tools import assert_almost_equal
assert_almost_equal(np.mean([sample_Y(0.8,1.5,2.0) for _ in range(800)]),0.71,delta=0.06)
assert_almost_equal(np.mean([sample_Y(-0.7,0.5,0.6) for _ in range(800)]),-0.60,delta=0.06)

# Plotting
raise NotImplementedError()


(c) We still have a free parameter $$c>0$$ that we can choose as we see fit. We would like to choose $$c$$ such that the acceptance probability $$\mathbb{P}(\text{accept})=\int_{-\infty}^{\infty} \mathrm{d}y f(y) A(y)$$ is maximized. Show by a calculation, to be included in markdown and LaTeX below, that $$c$$ must then be a positive root of the polynomial

$-c^3 + (1 - 2 \lambda)c^2 + \lambda c + 2 s^2 \lambda = 0.$

Hint: evaluate $$\frac{d}{d c}f(y) A(y)$$ and set it equal to $$0$$. (10 pts)

In fact, it can be easily shown that the polynomial above has only a single positive root when $$\lambda>0$$, so this uniquely determines an optimal $$c$$. Computing it accurately can be costly, but it is possible to obtain a good approximation based on a Taylor series estimate, which yields:

def approx_optimal_c(s,lamb):
    '''Compute a value of c that is close to a positive root of the polynomial above.'''
    u = np.sqrt(1+4*lamb*lamb)
    return ((3 + 3*u*(1-2*lamb) + 4*lamb*(3*lamb-1)
             + np.sqrt(16*s*s*(1+3*u-2*lamb)*lamb + (1+u-2*u*lamb+4*lamb*lamb)**2)) /
            (2+6*u-4*lamb))

def sample_Y_optimal(s,lamb):
    '''Sample Y with density f_Y(y), using the (approximately) optimal c.'''
    c = approx_optimal_c(s,lamb)
    return sample_Y(s,lamb,c)

approx_optimal_c(0.3,3)
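The provided approximation can be sanity-checked numerically by substituting its output back into the cubic; the residual should be close to zero. (The helper name `cubic` is my own, and `approx_optimal_c` is repeated so that the snippet is self-contained.)

```python
import numpy as np

def approx_optimal_c(s, lamb):
    '''Compute a value of c that is close to a positive root of the polynomial above.'''
    u = np.sqrt(1 + 4*lamb*lamb)
    return ((3 + 3*u*(1 - 2*lamb) + 4*lamb*(3*lamb - 1)
             + np.sqrt(16*s*s*(1 + 3*u - 2*lamb)*lamb + (1 + u - 2*u*lamb + 4*lamb*lamb)**2)) /
            (2 + 6*u - 4*lamb))

def cubic(c, s, lamb):
    '''The polynomial -c^3 + (1 - 2 lambda) c^2 + lambda c + 2 s^2 lambda.'''
    return -c**3 + (1 - 2*lamb)*c**2 + lamb*c + 2*s*s*lamb

c = approx_optimal_c(0.3, 3)
residual = cubic(c, 0.3, 3)   # should be small compared to the coefficients
```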


(d) Implement the MCMC update using the heatbath with (approximately) optimal $$c$$. (10 pts)

def heatbath_update(phi,lamb,kappa):
    '''Perform a random heatbath update on the state phi (no return value necessary).'''
    raise NotImplementedError()

def run_scalar_heatbath(phi,lamb,kappa,n):
    '''Perform n heatbath updates on state phi.'''
    for _ in range(n):
        heatbath_update(phi,lamb,kappa)


(e) Gather $$800$$ samples of $$|m| = |w^{-4} \sum_{x\in\mathbb{Z}_w^4} \varphi(x)|$$ for $$w=3$$, $$\lambda = 1.5$$ and $$\kappa = 0.08, 0.09, \ldots, 0.18$$, both for the heatbath algorithm and for the Metropolis-Hastings algorithm (with update range $$\delta=1.5$$) from the lecture. Store your data in the HDF5-file scalarfield.hdf5. You may use $$500$$ sweeps for equilibration and $$1$$ sweep between measurements. (15 pts)
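A minimal h5py pattern for writing and reading such a file might look as follows; the group and dataset names, attributes, and the placeholder array are just one possible layout, not a required one:

```python
import numpy as np
import h5py

# placeholder measurement array: one row of 800 |m|-samples per kappa
kappas = np.arange(0.08, 0.181, 0.01)
measurements = np.zeros((len(kappas), 800))   # replace by real measurements

# write the data together with the simulation parameters as attributes
with h5py.File("scalarfield.hdf5", "w") as f:
    group = f.create_group("heatbath")
    dataset = group.create_dataset("abs_m", data=measurements)
    dataset.attrs["kappas"] = kappas
    dataset.attrs["lamb"] = 1.5
    dataset.attrs["width"] = 3

# read it back
with h5py.File("scalarfield.hdf5", "r") as f:
    loaded = f["heatbath/abs_m"][:]
```

An analogous group (e.g. "metropolis") can hold the Metropolis-Hastings data in the same file.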

# Gathering and storing data
raise NotImplementedError()


(f) Read your data from scalarfield.hdf5 and:

• plot your estimates for $$\mathbb{E}[|m|]$$ with error bars obtained from the heatbath and Metropolis algorithm for comparison;

• plot only the errors in your estimate so you can compare their magnitude for both algorithms;

• plot the estimated autocorrelation time in units of sweeps for both algorithms.

Where do you see an improvement? (25 pts)
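Part (f) asks for estimated autocorrelation times, but no estimator appears in the lecture code above. A minimal sketch of one common estimator is the integrated autocorrelation time, with the sum truncated at the first negative autocorrelation coefficient (the function name is my own):

```python
import numpy as np

def autocorrelation_time(data):
    '''Estimate the integrated autocorrelation time
    tau_int = 1/2 + sum_t rho(t), where rho(t) is the normalized
    autocorrelation, truncating the sum at the first negative rho(t).'''
    data = np.asarray(data, dtype=float) - np.mean(data)
    n = len(data)
    c0 = np.dot(data, data) / n   # autocovariance at lag 0 (the variance)
    tau = 0.5
    for t in range(1, n // 2):
        rho = np.dot(data[:-t], data[t:]) / ((n - t) * c0)
        if rho < 0:
            break
        tau += rho
    return tau
```

Applied to measurements taken once per sweep, this gives the autocorrelation time directly in units of sweeps; for uncorrelated data it returns a value close to 1/2.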

# Reading data and plotting