Latin American applied research

Print version ISSN 0327-0793

Lat. Am. appl. res. v.38 n.4 Bahía Blanca oct. 2008


Evaluation and selection of discrete-event simulation software for the oil industry

M. Alvarez1, G. Rincón2, M. Pérez2 and S. Hernández3

1 AddValue, C.A, CCS-84356 PO BOX 025323, Caracas 84356. Miami, Florida 33102-5323

2 Dto. Procesos y Sistemas, Univ. Simón Bolívar. PO BOX 89000, Caracas 1080-A, Venezuela;

3 PDVSA INTEVEP, S.A. Dto. Refinación y Comercio. PO BOX 76343. Caracas 1070-A. Venezuela.

Abstract — The selection of the Discrete Event Simulation Software (DESS) that best meets the needs of a given organization is not an easy task. The purpose of the present work was to develop a systematic approach, applicable to any industry, for the evaluation and selection of a suitable DESS. Building on similar approaches by different authors, this method adds 40 criteria and over 100 subcriteria, provided by the Systemic Quality Specification Model (SQMO+), for a comprehensive software evaluation. The method developed herein was specifically applied to the logistics area of the state-owned Venezuelan Oil Company. Among 44 initial options, the application of the proposed methodology allowed the pre-selection of four suitable software packages that complied with the minimum technical requirements. Final ranking and selection was achieved through a comprehensive sensitivity analysis that led to an objective and well-supported purchase recommendation.

Keywords — Discrete-Event Simulation; Evaluation; Selection; Oil Company; Decision Making.


The intensive use of simulation techniques has brought to the market an extensive variety of high-quality simulation software with diverse characteristics and purposes (Nikoukaran et al., 1998). This applies as well to Discrete-Event Simulation Software (DESS), making necessary a selection framework in which multiple variables can be considered (Hlupic, 1999; Schaefer and Kirwood, 2003).

Improperly selected simulation software may result in unfortunate operational and/or strategic decisions, with subsequent economic losses to the organization. Simulation professionals face the challenge of selecting the most appropriate simulation software for their particular needs and application area. In the last decade, several approaches have been proposed to aid users in this task (Hlupic, 1999; Nikoukaran et al., 1998; Schaefer and Kirwood, 2003; Tewoldeberhan et al., 2002). In order to successfully select a given product (software, in this case), a list of attributes must be established, where attributes refer to the features that the evaluated software should or should not have (Bass et al., 1998). Once these attributes have been identified, a formal evaluation process leads to the option most preferred by the decision-makers.

Hlupic (1999) and Nikoukaran et al. (1998) developed specific attribute frameworks for DESS evaluation. However, Tewoldeberhan et al. (2002) warned that the number of studies devoted to developing an effective software evaluation and selection process, adapted to the needs of organizations interested in software acquisition, is very limited. Therefore, there is a real need for methodologies that ensure a rational, objective and efficient evaluation to support software selection decisions.

The objective of the present work is to propose a methodology to evaluate and select Discrete Event Simulation Software (DESS). The proposed method was applied to the logistics of the state-owned Venezuelan Oil Company. The first part of this paper explains in detail the methodology proposed to evaluate and select DESS. The second part illustrates how the methodology was applied.


With the purpose of evaluating and selecting the DESS that best meets the requirements of a given organization, a methodology that covers the key elements for the decision making stage is proposed.

The first step of the methodology involves the appointment of two multidisciplinary work-teams; that is, the Analysis and Selection Team and the Expert Team. The members of the first team are analysts in the simulation area and end-users. They are responsible for the software evaluation and selection, and its subsequent implementation in the organization. The Expert Team includes users, experts and consultants who work in the particular business sector and the simulation field. Their aim is to support the work of the Analysis and Selection Team in all the stages of the process.

According to the methodology, both teams must hold collective working meetings. The integration of both teams seeks consensus in those activities that by their very nature are highly subjective. Once the work-teams are chosen, they execute the steps as explained in the following sections.

A. General Objectives Definition

The definition of the general objectives of the project is of primary importance in order to ensure that the results of the evaluation and selection process comply with the requirements of the organization and end users. The Analysis and Selection Team carries out this step.

The team defines three types of general objectives: 1) application area and use; 2) particular aims of the organization, and 3) required software characteristics.

Definition of objectives 1 and 2 are straightforward and will be illustrated in the application presented in Section III. Objective 3 has a much wider range and complexity; thus, it is necessary to detail the characteristic attributes of this kind of software, and develop an appropriate structure that allows its evaluation.

In order to identify the required software characteristics, the Systemic Quality Specification Model (SQMO+) is applied (Rincon et al., 2005). This model specifies the systemic quality of discrete-event simulation software and is modeled upon the ISO/IEC 9126 standard. Figure 1 shows the levels that comprise the SQMO+ model. As seen in Fig. 1, the model contains 4 main levels: categories, characteristics, criteria and subcriteria.

Figure 1. Diagram of SQMO+ .

Level 1: Categories. For this level, SQMO+ propounds three categories which Mendoza et al. (2002) defined as follows:

Functionality, which refers to the capacity of the software to provide specific functions. This is the most important category in quality evaluation; thus, its inclusion is of compulsory nature.

Usability, which refers to features of the software that provide a user-friendly interface that can be easily understood and learnt by the user under specific conditions.

Efficiency, which refers to the ability of the software to provide a consistent performance in relation to the computational resources available.

Level 2: Characteristics. Each category includes a set of characteristics, as follows:

For the evaluation of the functionality category:

Fit to Purpose (FPU), which assesses the software's capability to provide a proper set of features according to the user-specific tasks and goals.

Interoperability (INT), which evaluates the software's ability to interact with one or more systems.

Security (SEC), which evaluates the software's capability to protect information from access by unauthorized personnel.

For the evaluation of the usability category:

Ease of understanding and learning (EUL), which assesses how easily the software is understood and used. It also evaluates the features that enable the user to learn the application.

Graphical Interface (GIN), which is associated with those software attributes that render it more attractive to the user.

Operability (OPR), which evaluates the software's capability to enable the user to operate and control it.

For the evaluation of the efficiency category:

Execution Performance (EPE), used to assess the software's capability to provide proper responses and processing times under specific conditions.

Resources Utilization (RUT), aimed to evaluate the software's proper use of the computational resources available under specific conditions.

Level 3: Criteria. Each of SQMO+'s characteristics has a set of criteria, totalling 40 for the whole model (Rincon et al., 2005).

Functionality: 22 criteria evaluate the Fit to Purpose characteristic, 3 criteria evaluate Interoperability, and 1 criterion evaluates Security.

Usability: Ease of Understanding and Learning is evaluated by 5 criteria, 2 criteria evaluate Graphical Interface, and Operability is evaluated by 3 criteria.

Efficiency: 2 criteria evaluate the Execution Performance characteristic, and Resources Utilization is evaluated by 2 criteria.

Level 4: Subcriteria. This level comprises 133 subcriteria for the quality evaluation of the 40 previously defined criteria (Rincon et al., 2005). The subcriteria correspond to the attributes used to measure the quality of the software.

It has to be pointed out that the denominations "criteria" and "subcriteria" were chosen arbitrarily for the headings of levels 3 and 4.

Several of the criteria and subcriteria proposed in SQMO+ were based on the work of Nikoukaran et al. (1998) and Hlupic (1999). However, in order to achieve a high level of detail and develop a model applicable to any industry, these criteria were supplemented with information gathered from the open literature and from the technical documentation of the different software packages.

For the development of these criteria, it was necessary to study not only the generic attributes that any software should have, but also the attributes required of a DESS applicable to diverse business types. Table 1 shows some examples of these criteria and subcriteria. This level of detail is what differentiates the proposed SQMO+ from other models available in the literature for evaluating DESS.

Table 1. Criteria and Subcriteria Examples

B. Performance Subcriteria Classification

At this stage, the SQMO+ subcriteria are classified into mandatory or non-mandatory ones, and the importance-weight and evaluation scales of the non-mandatory subcriteria are established.

A mandatory attribute is considered of compulsory compliance to guarantee a successful outcome. Non-mandatory attributes are defined as those that offer a sense of how the alternatives perform relative to each other (Kepner and Tregoe, 1981). The importance-weights grade the relative significance of the metrics.

To classify the subcriteria into mandatory and non-mandatory and to define the importance-weight of the latter, each member of the Expert Team should complete a questionnaire. Subsequently, the Analysis and Selection Team processes and interprets the questionnaires. The importance-weight of the non-mandatory subcriteria is established using a five-point scale widely applied in questionnaires. This scale assigns 1 point to the least important and 5 points to the most important subcriteria.
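As a concrete illustration of this step, the questionnaire responses can be reduced to a single 1-5 importance-weight per subcriterion. The paper does not specify the aggregation rule; the sketch below assumes the median of the expert ratings, and the subcriterion names are invented for the example.

```python
from statistics import median

def aggregate_importance(responses):
    """responses: {subcriterion: list of 1-5 ratings, one per expert}.
    Returns one importance-weight per non-mandatory subcriterion."""
    return {sub: round(median(ratings)) for sub, ratings in responses.items()}

# Hypothetical questionnaire results from a three-member Expert Team.
weights = aggregate_importance({
    "on-line help": [5, 4, 5],
    "animation speed control": [2, 3, 3],
})
```

The median is only one reasonable consensus rule; the collective working meetings described above could equally resolve disagreements case by case.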

Next, the Analysis and Selection Team designs the numerical evaluation scale to assess the software's capacity to satisfy the non-mandatory subcriteria. During the software evaluation, the final values assigned to each subcriterion will depend on the end user's satisfaction level, and are measured through answers to SQMO+ predesigned questions which are converted into this numerical scale. Table 2 presents an example of some evaluation scales developed in this work.

Table 2. Evaluation Scales

C. Screening

In order to save time and money, the Analysis and Selection Team performs a preliminary screening of the software options available in the market. This procedure reduces the number of software options to be evaluated to a manageable amount, the Short List.

First, a list of the different DESS available in the market, known as the Long List (LL), is compiled. Next, those DESS that comply with the first and second general objectives are identified and selected to form the Medium List (ML). Then, the vendors of the DESS in the ML are contacted in order to gather technical information (user's manual and demonstration version, if available).

Based on this information, the presence of the mandatory subcriteria in the software is verified. The lack of just one of these subcriteria causes the rejection of the DESS. Thus, only those DESS that include all the mandatory subcriteria qualify and form the final Short List (SL).
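The mandatory-subcriteria check amounts to simple set inclusion; a minimal sketch, with invented subcriterion and package names:

```python
# Each candidate survives only if it satisfies every mandatory subcriterion.
MANDATORY = {"discrete-event engine", "hybrid modelling", "statistical reports"}

candidates = {
    "DESS-X": {"discrete-event engine", "hybrid modelling", "statistical reports"},
    "DESS-Y": {"discrete-event engine", "statistical reports"},  # lacks hybrid modelling
}

# Set inclusion (<=): all mandatory subcriteria must be present.
short_list = sorted(name for name, features in candidates.items()
                    if MANDATORY <= features)
```

Here DESS-Y is rejected for lacking a single mandatory subcriterion, mirroring the rule stated above.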

D. Evaluation and Data Analysis

The Analysis and Selection Team evaluates the DESS from the SL through their non-mandatory subcriteria using the information gathered from the user's manual. At this point, it is strongly recommended to develop a base case related to the area of the business concerned. This base case has to be simple enough to be modelled with the demonstration versions.

An algorithm is used to quantify the results of the non-mandatory subcriteria evaluation. Steps taken to apply the algorithm are:

• Assignment of a value to each non-mandatory subcriterion, according to the evaluation scale chosen for that subcriterion.

• Multiplication of each of the assigned values by the importance-weight that corresponds to the related subcriterion.

• Addition of the value-importance products to calculate the resulting value for each category.

• Calculation of the percentage parameter Quality Rate (QR) for each category, as the ratio between the resulting value obtained for each category and the highest possible score for this category. The Quality Rate reveals how each DESS performs as compared with the ideal situation (100% compliance).
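The four steps above can be sketched as follows; the subcriterion values, importance-weights and the top of the evaluation scale are invented for illustration:

```python
def quality_rate(values, weights, max_value):
    """Quality Rate (QR) of one category, in percent.
    values:    value assigned to each non-mandatory subcriterion
    weights:   importance-weight of each subcriterion (1-5 scale)
    max_value: highest score on the evaluation scale (100% compliance)"""
    score = sum(v * w for v, w in zip(values, weights))  # value-importance products
    best = max_value * sum(weights)                      # highest possible score
    return 100.0 * score / best

# Three hypothetical subcriteria evaluated on a 0-4 scale.
qr = quality_rate(values=[3, 4, 2], weights=[5, 4, 3], max_value=4)
```

With these invented figures the category scores 37 out of a possible 48 weighted points, i.e. a QR of about 77%.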

When the different software packages exhibit the same behaviour for a given subcriterion, this subcriterion is discarded because it does not contribute to the decision-making process (Belton, 1984).

E. Sensitivity Analysis and Selection

The sensitivity analysis is the key activity in the decision making process because it allows the validation of the evaluation process consistency. It also provides first-rate information to establish the DESS ranking. As a result of this analysis it is possible to identify the strengths and weaknesses of the evaluated DESS, and subsequently, to select the DESS that objectively best adapts to the goals of the organization. This step is handled by the Analysis and Selection Team.

The sensitivity analysis relies on two fundamental strategies related to subjective issues that need further examination:

DESS Ranking Strategy. This strategy is designed to measure the impact on the DESS ranking by the weight variation of the different categories (Functionality, Usability, Efficiency). A Weighted Global Quality Rate (WGQR) is calculated to establish the DESS ranking. The following equation describes the calculation of the WGQR:

WGQR = Σ_i (weight_i × QR_i)

where QR_i is the Quality Rate in category i, weight_i is the weight of category i, and i = Functionality, Usability, Efficiency.

The sum of the weights of the three categories should total 100%.
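The WGQR calculation can be sketched as below; the Quality Rates and category weights are invented figures:

```python
def wgqr(quality_rates, category_weights):
    """Weighted Global Quality Rate: category weights (fractions summing
    to 1, i.e. 100%) applied to the per-category Quality Rates."""
    assert abs(sum(category_weights.values()) - 1.0) < 1e-9
    return sum(quality_rates[c] * w for c, w in category_weights.items())

score = wgqr({"Functionality": 78.0, "Usability": 80.0, "Efficiency": 100.0},
             {"Functionality": 0.5, "Usability": 0.3, "Efficiency": 0.2})
```

Varying the category weights and recomputing the WGQR for each DESS is precisely what the DESS Ranking Strategy does.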

Stability of the Results Strategy. The aim of this strategy is to analyze whether the software ranking maintains the same order upon variations made to the importance-weight assigned to the non-mandatory subcriteria. For this purpose, the importance-weights are modified for three scenarios abbreviated as Imp 5, Imp 5&4, and Imp 3.

Imp 5 scenario: In this case, subcriteria with importance-weights of 5 maintain this value, whereas the remaining subcriteria get values of 1 assigned. This strategy proves whether the software obtains the rating based on the high estimations in the metrics with maximum importance-weights.

Imp 5&4 scenario: In this case, subcriteria with importance-weight values of 5 and 4 keep their values, while the remaining subcriteria get values of 1 assigned. This scenario reveals whether the software obtains the rating based on the high evaluations in the metrics with importance-weight 5 and 4.

Imp 3 scenario: In this case, all subcriteria are assigned an importance-weight of 3, so the DESS ranking depends only on the values assigned to the subcriteria during the software evaluation. If the software ranking order changes under this scenario, it means that the software selection relies on the relevance assigned to the subcriteria.

This comparative strategy requires a reference case. This case is named the expert-case. The expert-case shows the importance-weight for the non-mandatory subcriteria established by the Expert Team.
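The three scenarios amount to simple transformations of the expert-case importance-weights; a sketch with invented subcriterion labels:

```python
def imp_5(expert):
    """Keep weights of 5; set all others to 1."""
    return {k: (5 if v == 5 else 1) for k, v in expert.items()}

def imp_5_and_4(expert):
    """Keep weights of 5 and 4; set all others to 1."""
    return {k: (v if v >= 4 else 1) for k, v in expert.items()}

def imp_3(expert):
    """Set every weight to 3, so only the evaluation values matter."""
    return {k: 3 for k in expert}

# Hypothetical expert-case weights for three subcriteria.
expert_case = {"s1": 5, "s2": 4, "s3": 2}
scenarios = {"Imp 5": imp_5(expert_case),
             "Imp 5&4": imp_5_and_4(expert_case),
             "Imp 3": imp_3(expert_case)}
```

Recomputing each DESS's score under each transformed weight set, and comparing the resulting rankings with the expert-case ranking, is the stability check described above.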

Table 3 summarizes the objectives of the scenarios designed to determine the stability of the DESS ranking results.

Table 3. Stability of the Results Strategy

The DESS Ranking Strategy assesses which category (Functionality, Usability, Efficiency) determines the DESS rating, whereas the Stability of the Results Strategy assesses whether the software ranking depends on the subjective importance-weights or values assigned by the work-teams to the subcriteria. The application of the different strategies will be illustrated in the case study presented in section III.

The methodology presented herein incorporates an additional level in the classic structure of evaluation methodologies available in the literature because it assesses the software's compliance with each characteristic by using criteria and subcriteria instead of only metrics. This extensive list of more than 100 detailed subcriteria could be used by any company to evaluate simulation software.

The proposed methodology provides the tools to select the DESS that best fits the specific needs of final users, and supports the decision using the suggested sensitivity analysis.


The selection of the DESS option that best adapts to the logistics system of the Venezuelan Oil Company (VOC) served as a case study to evaluate the methodology described above. A brief summary first describes the main features of the organization and the characteristics of the area where the DESS is to be applied that are relevant for its selection. Then, the actual application of the proposed methodology is explained step by step.

A. Synopsis

The hydrocarbon handling, transport and distribution systems of the VOC comprise high complexity operations. After extraction from the wells, the produced oil flows through pipelines to storage tank farms. From there, the oil is sent via pipelines, either directly to the refineries for its processing, or to marine terminals for transportation to other refineries or to the international market.

At the receiving refinery's marine terminal, the tankers discharge the oil into storage tanks for future processing. Downstream, the refinery segregates and stores the different streams that are produced and, finally, blends them according to the required end-product specifications and demands, taking into account the inventory of the individual products. Trucks, oil tankers or pipelines finally deliver the end products to the final market and customers. According to Erzinger and Norris (2000), in this type of company the logistics costs comprise between 15 and 25% of the cost of the end product. The value of the assets used in logistics, whether owned or contracted facilities, can exceed 30% of the refinery's total assets. In addition, the authors estimate that these assets are used at well below 50% of their capacity. Falconer and Guy (1998) state that investment projects of chemical and hydrocarbon companies allocate approximately 40-50% of the invested capital to ancillary process facilities and to logistics.

These figures are also valid for the VOC case, and they clearly evidence significant economic benefits from the optimization of logistic operations and facilities, as well as from investment planning in this area. However, the complexity of these processes imposes the use of efficient simulation tools. Inadequate software leads to less efficient models, incapable of simulating the system's details, or to the need to invest considerable time to develop and apply the models. Moreover, these drawbacks remain, and their consequences repeat over time while the software is in use. These arguments agree with Umeda and Jones (1997), who state that the time invested in the development and maintenance of simulation models is of primary importance.

Conscious of these needs, the VOC carried out a study to select an appropriate DESS for the logistics of handling, transport and distribution of hydrocarbons. The evaluation and selection methodology proposed in this paper was applied and tested for the purpose of this study.

B. General Objectives Definition

The definition of the case study's general objectives was based on the general objectives established in the proposed methodology, adapted to the particular application area and organization as follows:

• Area of application and use: The aim of the case study defines this objective by itself; that is, to simulate logistic operations related to the handling, transport and distribution of hydrocarbons. Consequently, DESS not intended for this particular area were discarded.

• Particular aims of the organization: After a study of the company's characteristics and the application area, the time invested in the development and maintenance of the models was rated of prime importance (Umeda and Jones, 1997). As a result of fast response requirements together with the complexity of the application area, the selection of packages and not languages defined the second general objective. Compliance with this general objective would reduce time invested for the development of the simulation models.

In addition, capacity to simulate the behavior of continuous operations such as tank loading and unloading and fluid transportation, as well as discrete events such as delivery or transportation time of either tankers or trucks (Falconer and Guy, 1998) was deemed important. Consequently, the faculty to model hybrid systems defined an additional general objective.

The last general objective established that the DESS must rely on characteristics that guarantee its Functionality, Usability and Efficiency; attributes that would be defined using SQMO+.

C. Performance Subcriteria Classification

Once the Expert Team had filled in the questionnaires, the Analysis and Selection Team processed and analyzed them. Out of 133 subcriteria, this exercise classified 75 (56%) as non-mandatory and 58 (44%) as mandatory.

The Expert Team established the importance-weight of the non-mandatory subcriteria based on the questionnaire responses. This team was composed of one external simulation consultant, two senior domain experts with more than 15 years' experience in the logistics area of the VOC, and one senior simulation researcher from Universidad Simón Bolívar, who also acted as project facilitator.

The Analysis and Selection Team was designated by the interested organization within the VOC and grouped two DESS end-users with experience in refining and simulation, and the project manager, who had experience in similar selection studies.

Work meetings of both teams were held to approve the importance-weights and to determine the evaluation scale that rates each non-mandatory subcriterion.

D. Screening

The survey developed by Swain (1999) was taken as a starting point. This procedure identified forty-four potentially useful software options (Long List, LL). From the LL, only five software options that complied with the general objectives described in Section III.B qualified to comprise the Medium List (ML). Finally, rejection of the ML DESS that did not comply with the mandatory subcriteria rendered a Short List (SL) of 4 options: Extend 5.0 (Imagine That Inc.), Witness 2000 (Lanner Group), AutoMod 9.1 (Brooks Automation) and ProModel 2002 (ProModel Corporation). For practical purposes during the sensitivity analysis, these will be referred to as A, B, C and D, respectively.

E. Evaluation and Data Analysis

The four SL DESS were evaluated through the non-mandatory subcriteria. From the 75 non-mandatory subcriteria, 36 yielded the same value for all 4 DESS. The Quality Rates were computed with the remaining 39 subcriteria.

In the efficiency category, all four software options performed with 100% compliance; consequently, the four SL DESS could be adequately installed on the technological platforms commonly used in the VOC. Since all four DESS guaranteed a proper performance, this category was excluded from any further analysis.

Figure 2 summarizes the Quality Rates (QR) for the remaining two categories. Variable X corresponds to the Usability category, whereas variable Y represents the Functionality category. Overall, the graph evidences that options B and D exhibited high performance for both categories considered.

Figure 2. Functionality-Usability QR : VOC.

DESS B, C and D presented high QR values (70-80%) for the Functionality category. This revealed that these software packages include a great number of the functions demanded from a DESS suited to the application area and company type considered.

In the usability category, DESS B clearly stands out among the group (80% QR). This DESS has favourable attributes related to the ease of use and understanding of its functions, operations and concepts. DESS B also has an efficient on-line help and includes a great number of model examples related to Oil Industry logistics. On the contrary, DESS A rendered a QR below 50% in both categories because it lacks some key features needed to deal efficiently with fundamental elements of Oil Industry logistics.

F. Sensitivity Analysis and Selection

DESS Ranking Strategy. Figure 3 shows the plot of the WGQR values obtained upon variations of the relative usability and functionality category weights, for all four DESS. Since only two categories were considered, their relative weights had to vary in a complementary fashion; that is, they had to sum to 100%.

Figure 3. WGQR as a function of relative category weights: VOC.

As Fig. 3 evidences, DESS B maintained its first place in the ranking over the entire weight variation range. Furthermore, the DESS ranking order remained unchanged for functionality category weight variations from 20 to 90%. This was found acceptable during the working meeting discussions.
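The sweep behind this ranking check can be reproduced with a few lines; the QR pairs below are invented stand-ins for the Fig. 2 values, chosen only to show the mechanics:

```python
# (functionality QR, usability QR) per DESS -- invented illustrative values.
qr = {"A": (45.0, 40.0), "B": (80.0, 80.0), "C": (72.0, 55.0), "D": (76.0, 65.0)}

def leader(w_func):
    """Top-ranked DESS when functionality has weight w_func; usability
    takes the complement, so both weights sum to 100%."""
    return max(qr, key=lambda d: w_func * qr[d][0] + (1 - w_func) * qr[d][1])

# Ranking is stable if the leader never changes over the whole sweep.
stable = all(leader(w / 100) == leader(0.5) for w in range(0, 101))
```

A plot of the WGQR of each DESS against w_func would reproduce the shape of Fig. 3 for whichever QR values are actually measured.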

As a result of the Ranking Strategy, the definite rank order approved by both teams was, from best to worst compliance, DESS B > DESS D > DESS C > DESS A.

Stability of the Results Strategy. The behaviour of the DESS upon modifications of the importance-weights assigned to the non-mandatory subcriteria was analyzed. To compute the resulting WGQR, equal weights (50%) were assigned to both categories (usability and functionality) in the three proposed scenarios and in the expert-case (Fig. 4).

Figure 4. Results of the Stability Strategy: VOC.

Figure 4 shows that the relative DESS ranking remained unchanged for the three scenarios considered and the expert-case. Consequently, DESS B holds the first place above the other software, independently of the relevance assigned to the subcriteria.

Comparison of the different importance scenarios with the expert-case shows a WGQR increase for DESS B for scenarios Imp 5 and Imp 5&4. This increase indicates that DESS B rated highest in those subcriteria with the largest importance-weights (5 and 4). The decrease of DESS B's WGQR value for the Imp 3 scenario confirms that the score of this DESS resulted basically from the attributes with importance-weights 4 and 5. The fact that DESS B maintains the highest WGQR value for the Imp 3 scenario also indicates that this software obtained higher scores than the other software during the non-mandatory subcriteria evaluation.

After completing the technical evaluation, the recommended preference order had to be established in order to support the selection of the DESS that best adapts to the company and to the logistics area. The sensitivity analysis already demonstrated that the technical DESS ranking was stable when faced with variations of both the relative category weights and the importance-weights of the subcriteria. However, before making a final recommendation, the Analysis and Selection Team deemed it important to analyze the financial aspect. This aspect was not included in the initial general objectives of the company because the administrative procedures established that the cost factor must be handled after completing the technical evaluation. Therefore, without compromising these procedures, a search for the approximate costs related to the purchase of the DESS was performed.

This investigation revealed similar costs for the top three DESS (B, D and C). Thus, no reversal of the established preference order due to economic factors is likely, unless a substantial price reduction of one of the options pushes it ahead of the others.

On the other hand, the purchase price of DESS A was lower. When these findings were presented to the two work teams, they agreed by consensus that the lower cost of DESS A could not offset the fact that this alternative consistently showed the poorest performance against the objectives established in this research, and therefore could not increase its competitiveness relative to the other DESS.

After considering the results of the technical evaluation and the economic aspect, the Analysis and Selection Team generated the list of recommended DESS in hierarchical order according to the established objectives. It recommended the purchase of DESS B; the recommendation included a cost framework and the contractual aspects to be considered at the moment of purchase by the company.


This paper provides a methodology for the evaluation and selection of DESS. The proposal allows an objectively founded selection of the software (DESS, in this case) that best suits the needs of a specific company and application area. It comprises six steps, where the key evaluation factor resides in the use of a set of 40 criteria and 133 subcriteria, provided by the Systemic Quality Specification Model (SQMO+). The approach could aid any company to rationally evaluate and select software for a given process.

A case study demonstrated the effectiveness of the proposed methodology. This study was centred on the logistics area (handling, transportation and distribution of hydrocarbons) of the state-owned VOC. It proved to be an exhaustive yet efficient process, in which a preliminary screening procedure reduced the available DESS options and, consequently, the required software evaluation time.

The results of the evaluation process demonstrated that Witness 2000, ProModel 2002, AutoMod 9.1 and Extend 5.0 comply with the minimum technical requirements and specific goals established in the case study. A sensitivity analysis yielded a robust ranking order that firmly supported the final selection of the software. This allowed an objective purchase recommendation: Witness 2000, in this particular case.

This research was financed by the Research Deanship of the Universidad Simón Bolívar through the Tools of Support in the Logistics of the Oil Company (DI-CAI-004-02) project. We also acknowledge the support of the vendors who provided the demonstration versions and documentation of their products.

1. Belton, V., "The Use of a Simple Multiple-Criteria Model to Assist in Selection from a Short List", in French, S. (Ed.), Readings in Decision Analysis, Chapman and Hall, London, 122-138 (1984).
2. Bass, L., P. Clements and R. Kazman, "Software Architecture in Practice", SEI Series in Software Engineering, Addison-Wesley, Reading, MA (1998).
3. Erzinger, F. and M. Norris, "Meeting the e-Chain Challenge: The Catalyst for Large Scale Change", Technical Report CC-00-157, Proc. NPRA 2000 Computer Conference, National Petrochemical & Refiners Association, Washington, D.C., 1-6 (2000).
4. Falconer, D. and B. Guy, "Truly Optimal Offsites", The Chemical Engineer, March 26, 28-33 (1998).
5. Hlupic, V., "Discrete-Event Simulation Software: What the Users Want", Simulation, 73, 362-370 (1999).
6. Kepner, C. and B. Tregoe, "The New Rational Manager", Kepner-Tregoe, Inc., Princeton, N.J., 86-90 (1981).
7. Mendoza, L., M. Pérez, A. Grimán and T. Rojas, "Algoritmo para la Evaluación de la Calidad Sistémica del Software", 2das Jornadas Iberoamericanas de Ingeniería del Software e Ingeniería del Conocimiento, Salvador, Brazil, 1-11 (2002).
8. Nikoukaran, J., V. Hlupic and R. Paul, "Criteria for Simulation Software Evaluation", Winter Simulation Conference, Washington, 399-406 (1998).
9. Rincon, G., M. Alvarez, M. Perez and S. Hernández, "A discrete-event simulation and continuous software evaluation on a systemic quality model: An oil industry case", Information & Management, 42, 1052-1066 (2005).
10. Swain, J., "Simulation Software Survey", INFORMS OR/MS Today, 42-51 (1999).
11. Schaefer, L. and C. Kirwood, "Model Selection for Simulation Design: A Multiobjective Decision Analysis Approach with an Application to Simulation Transport Agents", Simulation, 79, 83-93 (2003).
12. Tewoldeberhan, T., A. Verbraeck and E. Valentin, "An Evaluation and Selection Methodology for Discrete-Event Simulation Software", Winter Simulation Conference, 67-75 (2002).
13. Umeda, S. and A. Jones, "Simulation in Japan: State-of-the-Art Update", Technical Report, National Institute of Standards and Technology (NIST), 4-6 (1997).

Received: November 6, 2006.
Accepted: January 18, 2008.
Recommended by Subject Editor: José Pinto.

All content of this journal, except where otherwise identified, is licensed under a Creative Commons License.