MIIND Workflow

Introduction

There are two workflows: the Python workflow and the advanced C++ workflow.

Python Workflow

In the Python workflow you will not have to deal with C++ directly, and even experienced C++ programmers may find the XML files easier to handle than the corresponding C++ programs. It is certainly recommended for a first try, even if you intend to use C++ later.

We suggest that you install MIIND in a local directory; MIIND is small. The directory containing the MIIND code is called the MIIND_ROOT. If you have worked through the installation successfully (see Installation), you will have a top directory for MIIND in the location where you installed it, and within this directory an apps, libs, examples and python directory. You will also have created a build directory yourself. Upon successful compilation, the build directory will contain an apps directory, which contains a BasicDemos directory with executables that correspond directly to the source files in the apps/BasicDemos directory under the MIIND_ROOT.

Now create a working directory. It is recommended that you create this directory outside the MIIND installation, ideally in a place that is backed up. Now copy the three files present in MIIND_ROOT/examples/single_lif to your working directory. Then give the command:

miind.py --d single response.xml 

The following now happens:

You could simply type

make 

in the MIIND_ROOT/build directory and you would see that the new program will be compiled. It is easier to type:

submit.py single 

Note that 'single' is the name we used in the first step. You can choose another name, but you must use it consistently, as this name will be part of MIIND's build structure. This command achieves the following:

By issuing the command

miind.py --r --d single 

the single directory is removed from the build chain. Neither the executables nor the simulation results are removed by this command; you have to remove these by hand, by deleting the directory single from MIIND_ROOT/apps, MIIND_ROOT/build/apps and MIIND_ROOT/jobs.
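As a concrete illustration, the manual cleanup can be scripted. The following is a minimal sketch, assuming a MIIND_ROOT environment variable points at your installation (the fallback path below is hypothetical):

```python
import os
import shutil

# Hypothetical cleanup script; point miind_root at your MIIND installation.
miind_root = os.environ.get("MIIND_ROOT", os.path.expanduser("~/miind"))

# Remove the 'single' directory from apps, build/apps and jobs by hand.
for sub in ("apps/single", "build/apps/single", "jobs/single"):
    shutil.rmtree(os.path.join(miind_root, sub), ignore_errors=True)
```

The same effect can of course be achieved with three `rm -rf` commands in the shell.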

Submitting Batch Jobs

Parameter sweeps tend to result in a large number of executables that you want to run in parallel. There is an alternative to

submit.py 

that works much the same, but rather than running the executables in sequence,

launch.py 

submits batch jobs for each executable. Unlike submit.py, launch.py does not execute the newly created executables directly; instead it calls a Python script, which in turn calls a shell script that performs the batch submission of each executable. The shell script, called submit.sh, is in MIIND_ROOT/python. This shell script is particular to our HPC cluster and will need adapting to your own HPC environment. It is unlikely to work straight out of the box, but we are willing to help if you can provide details of the submission system on your HPC cluster.

Analyzing the results

Running the miind executable will produce a .root file. You can analyse the results conveniently in Python, using either the ROOT objects directly, or converting them into numpy objects that can be analysed in numpy, scipy and visualized with Matplotlib.

Copy the root file to a directory of your choice. Make sure that the PYTHONPATH variable is set to pick up the ROOT module.
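For example, assuming ROOT's Python bindings live in /opt/root/lib (an assumed location; substitute your own), you can either export PYTHONPATH in your shell or extend the search path from within Python:

```python
import sys

# Assumed location of ROOT's Python bindings; adjust for your installation.
root_lib = "/opt/root/lib"

# Equivalent to: export PYTHONPATH=$PYTHONPATH:/opt/root/lib
if root_lib not in sys.path:
    sys.path.append(root_lib)
```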

Open a Python shell in your favourite Python environment and type:

import ROOT

If this does not generate errors, you are in business. Now open the file:

f=ROOT.TFile('yourfile.root')

You can inspect the file:

f.ls()

You will get a list of names of TGraph objects. To get them from the file, do, for example:

g=f.Get('rate_1')
g.Draw('AP')

A canvas with the firing rate graph should now pop up.

ROOT is a very powerful analysis environment, with more extensive visualization capabilities than Matplotlib, but nothing prevents you from using SciPy for your analysis. Consider a ROOT.TGraph object with name 'data':

import numpy as np

x_buff = data.GetX()
y_buff = data.GetY()
N = data.GetN()
x_buff.SetSize(N)
y_buff.SetSize(N)
# Create numpy arrays from buffers, copy to prevent data loss
x_arr = np.array(x_buff, copy=True)
y_arr = np.array(y_buff, copy=True)
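The copy=True matters: without it, numpy may create a view on ROOT's internal buffer, which becomes invalid once the file is closed. The same view-versus-copy distinction can be demonstrated without ROOT, using a plain Python buffer as a stand-in for what GetX() returns:

```python
import numpy as np
from array import array

# A raw double buffer, standing in for the buffer ROOT's GetX() returns.
buf = array('d', [0.0, 0.5, 1.0, 1.5])

view = np.frombuffer(buf, dtype=np.float64)  # shares memory with buf
copy = np.array(buf, copy=True)              # independent storage

buf[0] = 99.0
# The view reflects the mutation; the copy is unaffected.
```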

Here is a brief introduction to PyROOT: https://www-zeuthen.desy.de/~middell/public/pyroot/pyroot.html.

Advanced C++ Workflow

Again, consider the MIIND directory structure:

[Figure: MIIND directory organization (org.pdf)]

If you are comfortable with C/C++, you may want to write your own code based on the MIIND API. It is relatively straightforward to create your own simulations by copying existing code and adding the copies as new files into the compilation tree. MIIND uses CMake, and in each directory of the tree shown above you will find a file CMakeLists.txt. Consider the directory BasicDemos under MIIND_ROOT/apps. It contains the file population-example.cpp. Copy this file to new.cpp. Now edit the file CMakeLists.txt in the BasicDemos directory. Under the section # executables you will find entries such as the following:

add_executable(populationDemo population-example.cpp)
target_link_libraries(populationDemo ${LIBLIST})

Copy this entry within the same section and change the copy into:

add_executable(myProgram new.cpp)
target_link_libraries(myProgram ${LIBLIST})

This is sufficient to add the new program to the build structure. Typing

make 

in the build directory will cause a new Makefile to be generated, which will then cause new.cpp to be compiled and linked into a new executable. Of course, this program will do exactly the same thing as population-example.cpp until you start modifying the code. When you do, the full MIIND API is at your disposal. Earlier in CMakeLists.txt you will find entries that ensure your code can find MIIND's header files:

include_directories( ../../libs/MPILib )
include_directories( ../../libs/GeomLib )

include_directories( ${GSL_INCLUDE_DIRS} )
link_directories( ${GSL_LIBRARY_DIRS} )
include_directories( ${ROOT_INCLUDE_DIRS} )
link_directories( ${ROOT_LIBRARY_DIRS} )