Rescom TP instructions

Instructions for the Lab on Markov chain modeling and MDP analysis, at the RESCOM2019 summer school

Objective

The goals of the Lab session are to:

  • program the model of a discrete-time queue with impatience, batch arrivals and finite capacity, using the marmoteCore library (a rough sketch of such a queue's dynamics follows this list);
  • program the same model with control of admission to service, using the marmoteMDP library;
  • compute the optimal service policy for this queue.
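
To fix ideas before the lab, here is a minimal plain-C++ sketch of one step of such a queue. The capacity, batch-size distribution, service probability and impatience rule below are illustrative assumptions only; the actual model is specified during the session and is to be programmed with marmoteCore.

// Hypothetical one-step dynamics of a discrete-time queue with batch
// arrivals, impatience and finite capacity. All numerical values and
// the impatience rule are assumptions made for illustration.
#include <algorithm>
#include <cstdio>
#include <random>

int main() {
    const int capacity = 10;                                 // finite capacity (assumption)
    std::mt19937 gen(42);
    std::discrete_distribution<int> batch({0.5, 0.3, 0.2});  // batch of 0, 1 or 2 arrivals (assumption)
    std::bernoulli_distribution service(0.6);                // one service completion per slot, w.p. 0.6 (assumption)
    std::bernoulli_distribution renege(0.1);                 // each waiting customer leaves w.p. 0.1 (assumption)

    int x = 0;                                               // queue length
    for (int t = 0; t < 20; t++) {
        int served = (x > 0 && service(gen)) ? 1 : 0;
        int lost = 0;                                        // impatience among the remaining customers
        for (int i = 0; i < x - served; i++) lost += renege(gen) ? 1 : 0;
        x = std::min(capacity, x - served - lost + batch(gen));
        std::printf("x[%d] = %d\n", t + 1, x);
    }
    return 0;
}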

Steps

1. Preparation

The first step is to get the library installed on your computer. There are two possibilities:

  • using a virtual machine with VirtualBox
  • using the compiled library (Linux only)

The instructions for the VirtualBox option are:

  • install the VirtualBox software from its website
  • launch VirtualBox
  • click on "Machine > Add"
  • enter the location of the virtual machine that has been downloaded
  • select the VM (Rescom2019_TP) in the right-hand pane and click on "Start"
  • log in with username/password pierre/Rescom2019*
  • you should see a desktop with two folders: TP_Marmote and TP_MDP

2. Instructions for building Markov Chains

  • click on the TP_Marmote folder
  • click on the file "example1.cpp" (or right-click, then select "geany")
  • a command-line terminal should appear at the bottom. Type:

./example1

Example 1: construction of a discrete-time Markov chain on a 3-state space.

  • the program takes as arguments:
    • n, a number of steps
    • p1 p2 p3, three probabilities summing to 1, representing the initial distribution
  • it outputs:
    • the transition probability matrix
    • a trajectory x[0], x[1], ..., x[n]
  • run the example with values, e.g.

./example1 4 0.2 0.3 0.5
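
For intuition, here is a small stand-alone C++ sketch of what such a program does: draw x[0] from (p1, p2, p3), then walk the chain step by step. The transition matrix P below is a placeholder, not the one defined in example1.cpp, which builds its chain with marmoteCore rather than raw arrays.

// Stand-alone sketch of the behaviour of example1 (illustrative only).
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <random>

// draw an index in {0, ..., size-1} according to the weights in probs
static int sample(const double* probs, int size, std::mt19937& gen) {
    std::discrete_distribution<int> d(probs, probs + size);
    return d(gen);
}

int main(int argc, char* argv[]) {
    if (argc != 5) {
        std::fprintf(stderr, "usage: %s n p1 p2 p3\n", argv[0]);
        return 1;
    }
    int n = std::atoi(argv[1]);
    double init[3] = { std::atof(argv[2]), std::atof(argv[3]), std::atof(argv[4]) };
    if (std::abs(init[0] + init[1] + init[2] - 1.0) > 1e-9) {
        std::fprintf(stderr, "p1 p2 p3 must sum to 1\n");
        return 1;
    }
    // placeholder transition matrix: NOT the one used in example1.cpp
    double P[3][3] = { { 0.5,  0.5,  0.0  },
                       { 0.25, 0.5,  0.25 },
                       { 0.0,  0.5,  0.5  } };
    std::mt19937 gen(std::random_device{}());
    int x = sample(init, 3, gen);          // x[0] drawn from the initial distribution
    std::printf("x[0] = %d\n", x);
    for (int t = 1; t <= n; t++) {
        x = sample(P[x], 3, gen);          // one step of the chain
        std::printf("x[%d] = %d\n", t, x);
    }
    return 0;
}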

  • use the editor to modify the code example1.cpp, in order to make state 2 absorbing (see the note after this list)
  • compile by clicking "Construire > Make" (i.e., "Build > Make" in the French-localized menu)
  • execute again
  • further modify the code so that it computes the distribution of the chain after n steps:

 Distribution* trDis = c1->TransientDistributionDT( 0, n );  // distribution of X[n], i.e. after n steps
 trDis->Write( stdout, STANDARD_PRINT_MODE );                // print it to standard output
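
A note on the two modifications above. Making state 2 absorbing means that, once entered, state 2 is never left: its row of the transition matrix must put all probability mass on state 2 itself. In plain-matrix terms (the marmoteCore calls in example1.cpp will look different), the change is:

// row of state 2 after making it absorbing: all mass stays on state 2
double P[3][3] = { { 0.5,  0.5,  0.0  },    // placeholder rows, as above
                   { 0.25, 0.5,  0.25 },
                   { 0.0,  0.0,  1.0  } };  // replaces the original row of state 2

As for the transient distribution, recall that the distribution after n steps is the initial distribution multiplied by the n-th power of the transition matrix. If state 2 is reachable from the other states, its probability mass should grow toward 1 as n increases, which is a good sanity check on the output of the modified program.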