Rescom TP instructions
Instructions for the Lab on Markov chain modeling and MDP analysis, at the RESCOM2019 summer school
Objective
The goals of the Lab session are to:
- program the model of a discrete-time queue with impatience, batch size and finite capacity, using the marmoteCore library;
- program the same model with a control of admission into service, using the marmoteMDP library;
- compute the optimal service policy in this queue.
Steps
Preparation
The first step is to have the library installed on your computer. Two possibilities:
- using a virtual machine with VirtualBox
  - download the VM + instructions from this page
  - copy it from a USB drive, available at the conference
- using the compiled library (Linux only)
  - tarball + instructions from marmoteCore's site
The instructions for the VirtualBox option are then:
- install VirtualBox from its web site
- launch VirtualBox
- click on "Machine > Add"
- enter the location of the downloaded virtual machine
- select the VM (Rescom2019_TP) in the right-hand pane and click on "Start"
- log in with username/password pierre/Rescom2019*
- you should see a desktop with two folders: TP_Marmote and TP_MDP
Instructions for building Markov Chains
Testing the example provided
- Click on the TP_Marmote folder
- click on the file "example1.cpp" (or right-click, then select "Geany")
- a command-line terminal should appear at the bottom of the window. Type:
./example1
Example 1: construction of a discrete-time Markov chain on a 3-state space.
- the program takes as arguments:
- n, a number of steps
- p1 p2 p3, three probabilities summing up to 1, representing the initial distribution
- it outputs:
- the probability transition matrix
- a trajectory x[0], x[1], ... x[n]
- run the example with values, e.g.
./example1 4 0.2 0.3 0.5
- use the editor to modify the code of example1.cpp, in order to make state 2 absorbing (a sketch of the change is given after this list)
- compile by clicking "Construire > Make" (the Build menu of Geany)
- execute again
- further modify the code so that it computes the distribution of the chain after n steps:
Distribution* trDis = c1->TransientDistributionDT( 0, n );
trDis->Write( stdout, STANDARD_PRINT_MODE );
- compile and execute
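For reference, here is a minimal sketch of the absorbing-state change. It assumes that example1.cpp fills its 3x3 transition matrix P entry by entry with a setEntry( row, col, value ) call; this call is an assumption, so reuse whatever entry-filling call example1.cpp actually contains. State 2 is absorbing when its row puts all the probability mass on itself:
// make state 2 absorbing: from state 2 the chain stays in state 2 with probability 1
// (setEntry is an assumption; mirror the entry-filling call used in example1.cpp)
P->setEntry( 2, 0, 0.0 );
P->setEntry( 2, 1, 0.0 );
P->setEntry( 2, 2, 1.0 );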
Developing a new model
- download/copy the skeleton of the program from here.
- copy/rename it as main.cpp (overwriting the present one)
The file contains:
- the "include" instructions necessary
- a "combinations" and "binomial" useful for computing some probabilities
- the template of a "MakeGenerator" function returning a matrix (type SparseMatrix)
- a main() function.
- modify the code of MakeGenerator( batchDistrib, batchSize, bufferSize ) so that it creates the transition matrix of the queue with:
- arrivals in batches distributed according to batchDistrib
- one server serving batches of size batchSize
- buffer + server capacity = bufferSize (a possible sketch of this function is given at the end of this section)
The object DiscreteDistribution:
- represents a discrete distribution on some finite set of values
- is created as
new DiscreteDistribution( n, values, probas );
where n is the number of values, and the two other parameters are arrays of double.
- has member functions
- nb_vals() for getting the number of values
- values() for getting the array of values
- probas() for getting the array of probabilities
- Mean() for computing its average
- Write( stdout, STANDARD_PRINT_MODE ) for displaying it on the terminal.
- in the main(), create a DiscreteDistribution object for the following distribution (a sketch of this step is given at the end of the section):
- with values 0, 1, 2, 3, 4,
- with probabilities 1/2, 1/4, 1/8, 1/16, 1/16
- adapt the code from example1.cpp to create a SparseMatrix object with MakeGenerator, then a Markov chain
- simulate the chain for some time
- compute its stationary distribution with the MarkovChain method:
Distribution* StationaryDistributionDT();
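Possible sketches for the steps above follow. First, a sketch of the MakeGenerator function. Two points are assumptions to be checked against the lab statement and the skeleton: (i) the order of events in a time slot (here, the server first removes up to batchSize customers, then the arriving batch joins the queue and customers exceeding bufferSize are lost), and (ii) the way entries of the SparseMatrix are filled (a setEntry( row, col, value ) call is used here, mirroring example1.cpp).
SparseMatrix* MakeGenerator( DiscreteDistribution* batchDistrib, int batchSize, int bufferSize )
{
  int nStates = bufferSize + 1;                    // states 0, 1, ..., bufferSize customers
  SparseMatrix* P = new SparseMatrix( nStates );   // assumption: size-based constructor as in example1.cpp

  int nVals = batchDistrib->nb_vals();             // accessors described above
  double* vals = batchDistrib->values();
  double* probs = batchDistrib->probas();

  std::vector<double> row( nStates );              // may need #include <vector> and <algorithm>
  for ( int x = 0; x < nStates; x++ ) {
    std::fill( row.begin(), row.end(), 0.0 );
    int afterService = ( x > batchSize ? x - batchSize : 0 );  // service first (modeling assumption)
    for ( int k = 0; k < nVals; k++ ) {
      int y = afterService + (int) vals[k];        // then the arriving batch joins the queue
      if ( y > bufferSize ) y = bufferSize;        // customers in excess of the capacity are lost
      row[y] += probs[k];                          // accumulate the probability mass on (x,y)
    }
    for ( int y = 0; y < nStates; y++ )
      if ( row[y] > 0.0 ) P->setEntry( x, y, row[y] );
  }
  return P;
}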
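Next, the batch-size distribution of the exercise, created in main() with the constructor and Write method described above (variable names are free choices):
// distribution of the arriving batch sizes: values 0..4 with the given probabilities
int nVals = 5;
double values[5] = { 0.0, 1.0, 2.0, 3.0, 4.0 };            // values 0, 1, 2, 3, 4
double probas[5] = { 0.5, 0.25, 0.125, 0.0625, 0.0625 };   // 1/2, 1/4, 1/8, 1/16, 1/16
DiscreteDistribution* batchDistrib = new DiscreteDistribution( nVals, values, probas );
batchDistrib->Write( stdout, STANDARD_PRINT_MODE );        // check it on the terminal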
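Finally, the rest of main() can follow the pattern of example1.cpp. The sketch below assumes the MarkovChain object is built from the transition matrix exactly as in that example; batchSize and bufferSize are arbitrary test values:
int batchSize = 2;                                   // arbitrary test value
int bufferSize = 10;                                 // arbitrary test value
SparseMatrix* P = MakeGenerator( batchDistrib, batchSize, bufferSize );
MarkovChain* chain = new MarkovChain( P );           // assumption: same construction as in example1.cpp
// reuse the simulation code of example1.cpp here to simulate the chain for some steps
Distribution* statDis = chain->StationaryDistributionDT();
statDis->Write( stdout, STANDARD_PRINT_MODE );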