RESCOM TP instructions
Instructions for the lab on Markov chain modeling and MDP analysis, at the RESCOM 2019 summer school
Objective
The goals of the lab session are to:
- program the model of a discrete-time queue with impatience, batch arrivals and services, and finite capacity, using the marmoteCore library;
- program the same model with admission control on the service, using the marmoteMDP library;
- compute the optimal service policy in this queue.
Steps
Preparation
The first step is to have the library installed on your computer. Two possibilities:
- using a virtual machine with VirtualBox
- download the VM + instructions from this page
- copy it from the USB drive available at the conference.
- using the compiled library (Linux only)
- tarball + instructions from marmoteCore's site
The instructions with VirtualBox are then:
- install the VirtualBox software from its web site
- launch VirtualBox
- click on "Machine > Add"
- enter the location of the virtual machine that has been downloaded
- select the VM (Rescom2019_TP) in the right-hand pane and click on "Start"
- log in with username/password pierre/Rescom2019*
- you should see a desktop with two folders: TP_Marmote and TP_MDP
Instructions for building Markov Chains
Testing the example provided
- Click on the TP_Marmote folder
- click on the file "example1.cpp" (or right-click it, then select "geany")
- a command-line terminal should appear at the bottom. Type
./example1
Example 1 constructs a discrete-time Markov chain on a 3-state space.
- the program takes as arguments:
- n, a number of steps
- p1 p2 p3, three probabilities summing up to 1, representing the initial distribution
- it outputs
- the probability transition matrix
- a trajectory x[0], x[1], ... x[n]
- run the example with values, e.g.
./example1 4 0.2 0.3 0.5
- use the editor to modify the code of example1.cpp, in order to make state 2 absorbing (one way to do this is sketched after this list)
- compile by clicking "Construire > Make" (the Build menu in Geany)
- execute again
- modify the code further to make it compute the distribution after n steps:
Distribution* trDis = c1->TransientDistributionDT( 0, n );
trDis->Write( stdout, STANDARD_PRINT_MODE );
- compile and execute
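For reference, here is a minimal sketch of these last two steps. It assumes the names used in example1.cpp (P for the SparseMatrix and c1 for the MarkovChain); the setEntry calls are an assumption about the SparseMatrix interface, so mimic whatever matrix-filling style example1.cpp actually uses.

// Make state 2 absorbing: from state 2 the chain stays in 2 with probability 1.
// setEntry is an assumed name; copy the matrix-filling calls of example1.cpp.
P->setEntry( 2, 0, 0.0 );
P->setEntry( 2, 1, 0.0 );
P->setEntry( 2, 2, 1.0 );
// Distribution after n steps (these two lines are taken from the text above).
Distribution* trDis = c1->TransientDistributionDT( 0, n );
trDis->Write( stdout, STANDARD_PRINT_MODE );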
Developing a new model
- download/copy the skeleton of the program from here.
- copy/rename it as main.cpp (overwrite the present one)
The file contains:
- the "include" instructions necessary
- a "combinations" and "binomial" useful for computing some probabilities
- the template of a "MakeGenerator" function returning a matrix (type SparseMatrix)
- a main() function.
- modify the code of MakeGenerator( batchDistrib, batchSize, bufferSize ) to have it create the transition matrix of the queue with (one possible implementation is sketched at the end of this section):
- arrivals in batches distributed according to batchDistrib
- one server serving batches of size batchSize
- buffer + server capacity = bufferSize
The object DiscreteDistribution (a short usage sketch follows this description):
- represents a discrete distribution on some finite set of values
- is created as
new DiscreteDistribution( n, values, probas );
- where n is the number of values, and the two other parameters are arrays of double.
- has member functions
- nb_vals() for getting the number of values
- values() for getting the array of values
- probas() for getting the array of probabilities
- Mean() for computing its average
- Write( stdout, STANDARD_PRINT_MODE ) for displaying it on the terminal.
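As an illustration, here is a minimal sketch using only the calls listed above; the array contents match the distribution requested in the next step, and the printf line assumes <cstdio> is included.

// Batch-size distribution on {0,...,4} (values and probabilities from the next step).
double values[] = { 0.0, 1.0, 2.0, 3.0, 4.0 };
double probas[] = { 0.5, 0.25, 0.125, 0.0625, 0.0625 };
DiscreteDistribution* batchDistrib = new DiscreteDistribution( 5, values, probas );
printf( "mean batch size: %f\n", batchDistrib->Mean() );
batchDistrib->Write( stdout, STANDARD_PRINT_MODE );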
- in main(), create a DiscreteDistribution object for the distribution:
- with values 0, 1, 2, 3, 4,
- with probabilities 1/2, 1/4, 1/8, 1/16, 1/16
- adapt the code from example1.cpp to create a SparseMatrix object with MakeGenerator, then a Markov chain
- simulate the chain for some time
- compute its stationary distribution with the MarkovChain method:
Distribution* StationaryDistributionDT();
- then write it to the terminal.
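To fix ideas, here is a hedged sketch of the whole pipeline. The service/arrival ordering (serve up to batchSize customers, then add the arriving batch, losing any overflow) is one possible convention; use the one stated during the lab. The SparseMatrix constructor, addToEntry and MarkovChain calls are assumptions to be checked against the skeleton and example1.cpp; only the DiscreteDistribution accessors and StationaryDistributionDT come from the text above.

SparseMatrix* MakeGenerator( DiscreteDistribution* batchDistrib,
                             int batchSize, int bufferSize )
{
  // one state per possible queue content: 0, 1, ..., bufferSize
  SparseMatrix* P = new SparseMatrix( bufferSize + 1 ); // constructor shape: assumption
  for ( int x = 0; x <= bufferSize; x++ ) {
    // the server removes up to batchSize customers in one slot
    int afterService = ( x > batchSize ) ? ( x - batchSize ) : 0;
    for ( int k = 0; k < batchDistrib->nb_vals(); k++ ) {
      int arrivals = (int)batchDistrib->values()[k];
      int next = afterService + arrivals;
      if ( next > bufferSize ) next = bufferSize; // customers exceeding the capacity are lost
      // accumulate: several batch sizes may lead to the same next state
      P->addToEntry( x, next, batchDistrib->probas()[k] ); // addToEntry: assumed name
    }
  }
  return P;
}

Then, in main(), the analysis pattern of example1.cpp applies:

SparseMatrix* P = MakeGenerator( batchDistrib, 2, 10 ); // illustrative parameter values
MarkovChain* chain = new MarkovChain( P ); // construction as in example1.cpp (assumption)
Distribution* statDis = chain->StationaryDistributionDT();
statDis->Write( stdout, STANDARD_PRINT_MODE );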
Instructions for building an MDP model
- download/copy the skeleton of the program from here.
- adapt the code of the file MDP_jouet.cpp in order to (a hedged sketch follows this list):
- create in the object P1 the transition matrix of a queue with batch service (reuse the code of the previous exercise)
- create in the object P2 the transition matrix of the same queue, but without any service
- create in the object R1 the matrix of rewards/costs for each pair (state, action)
- create the discountedMDP object from these elements
- solve the MDP problem with the discountedMDP method
solutionMDP* valueIteration( epsilon, maxIter )
- where epsilon is some precision parameter and maxIter the maximum number of iterations
- write the result to the terminal with the writeSolution() method
- do the same with the discountedMDP method policyIterationModified( epsilon, maxIter, value, period )
where value = 0.01 and period = 100
- compile and execute
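To make these steps concrete, here is a hedged sketch. Only valueIteration, policyIterationModified and writeSolution are taken from the text above; the discountedMDP construction, the sizing of R1 and the cost values are illustrative assumptions to be matched against MDP_jouet.cpp.

// Transition matrices, reusing MakeGenerator from the previous exercise:
SparseMatrix* P1 = MakeGenerator( batchDistrib, batchSize, bufferSize ); // action: serve
SparseMatrix* P2 = MakeGenerator( batchDistrib, 0, bufferSize );         // action: do not serve

// R1: one reward/cost per (state, action) pair; here, a holding cost
// proportional to the queue length plus a cost for serving (assumed values).
SparseMatrix* R1 = new SparseMatrix( bufferSize + 1 ); // sizing: assumption, check MDP_jouet.cpp
for ( int x = 0; x <= bufferSize; x++ ) {
  R1->setEntry( x, 0, -(double)x - 2.0 ); // action "serve": holding cost plus service cost
  R1->setEntry( x, 1, -(double)x );       // action "do not serve": holding cost only
}

// The constructor arguments are not given on this page: copy the actual
// call from MDP_jouet.cpp (state space, action space, {P1, P2}, R1,
// discount factor, in whatever order the skeleton uses).
// discountedMDP* mdp = new discountedMDP( ... );

solutionMDP* sol = mdp->valueIteration( 1e-6, 1000 ); // epsilon and maxIter: example values
sol->writeSolution(); // writeSolution() per the text above; check its owner in MDP_jouet.cpp

solutionMDP* sol2 = mdp->policyIterationModified( 1e-6, 1000, 0.01, 100 ); // value = 0.01, period = 100
sol2->writeSolution();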