This paper and the corresponding panel session focus on teaching undergraduate industrial engineering/operations research-related simulation courses. The format brings together four experienced instructors to discuss four questions involving the structure and topic outlines of courses, print and software teaching materials used, and general teaching methods/philosophy. The hope is to provide some experience-based teaching information for new and soon-to-be instructors and to generate discussion within the simulation education community.
The idea of this paper and the corresponding panel session is to discuss the teaching of undergraduate industrial engineering/operations research-related simulation courses. The mechanism that we’ve chosen is for four experienced instructors to answer four common questions about their courses, materials, and teaching methods/philosophies. Mixed in with the individual answers, we include brief discussions of our individual experiences (both good and bad) – experience is the best teacher, after all. While this is not possible in the paper, we also hope that the questions will spur audience participation during the panel session. The paper is organized such that each of the four questions is a major section (Sections 2-5) and each participant’s “answer” is a subsection within that section. We end with some summary remarks.
Our course (Simulation) is based on 2 lecture hours + 3 lab hours per week over a 16-week semester. Over the last few years, we have begun to introduce video segments to supplement the lectures and labs. The major topics that we cover include:
All simulation courses at Georgia Tech (at the undergraduate and Masters levels) are based solely on lectures. During the last three years, the MS course has been offered jointly with an advanced undergraduate course. Until the mid-1990s, the undergraduate class consisted of two one-hour lectures and a single three-hour lab per week. This approach worked better than the current format of three instructional hours per week, though the professor was also responsible for teaching the lab without receiving any additional teaching credit. The change occurred during the conversion from the quarter system to semesters.
The undergraduate and Masters courses cover all theoretical topics from the classical texts of Banks et al. (2009) and Law (2015), with the exception of design of experiments and variance reduction techniques. The skeleton of both courses consists of the items in Jeff’s list above, but the coverage of each topic depends on the class. At this juncture, I wish to point out that during the instruction of input data analysis, I spend 60–90 minutes on an introduction to kernel density estimation (KDE) for independent data sets. This topic has become more important with the emergence of “big data” sets exhibiting multiple modes. Fitting of KDEs can be handled very easily via a plethora of functions in Matlab, Python, and R, while sampling from the fitted densities can also be done effectively, e.g., via the algorithms in Section 3 of Hörmann and Leydold (2000). Ranking and selection methods are covered using a small set of slides and the Arena and Simio “add-ins” by Kim and Nelson, while simulation-based optimization is covered “on the fly” using the OptQuest add-in. The undergraduate class covers a wide range of modeling topics, including simulation of vehicle systems and conveyors. The emphasis on modeling is very important because of the rigor of our senior design class, whose projects often involve a combination of heavy simulation modeling and analysis.
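To make the KDE step concrete, here is a minimal Python sketch (not course material) of fitting a KDE to a bimodal sample and drawing variates from the fitted density via scipy.stats.gaussian_kde. The synthetic data and all parameter values are invented for illustration, and note that resample draws directly from the fitted Gaussian-kernel mixture rather than implementing the Hörmann and Leydold algorithms.

```python
# Minimal sketch: fit a kernel density estimate (KDE) to an i.i.d. sample
# and draw variates from the fitted density. The bimodal data below are
# synthetic, purely for illustration.
import numpy as np
from scipy.stats import gaussian_kde

np.random.seed(42)  # for reproducibility

# Synthetic "big data" sample with two modes (e.g., two service-time populations).
data = np.concatenate([np.random.normal(5.0, 1.0, 5000),
                       np.random.normal(12.0, 2.0, 5000)])

kde = gaussian_kde(data)  # Gaussian kernels; bandwidth via Scott's rule

# Evaluate the fitted density on a grid (e.g., to overlay on a histogram).
xs = np.linspace(data.min(), data.max(), 200)
density = kde(xs)

# Draw new variates from the fitted density for use as simulation input.
sample = kde.resample(1000).ravel()
print(sample[:5])
```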
ORIE 4580 Simulation Modeling and Analysis at Cornell University is an introduction to Monte Carlo and discrete-event simulation, with sophomore-level probability/statistics and a second course in programming as prerequisites. The class is large, consisting of approximately 200 students, of whom 110 are undergraduates and 90 are Masters students. This semester-long course entails two 75-minute lectures and a 2-hour recitation (lab) each week. Five of the lectures, which cover learning to use a particular discrete-event simulation package, are “flipped.” The course can be viewed as two halves, with the first half covering the essentials of Monte Carlo simulation and the second half continuing on with discrete-event simulation. The first half begins with a review of probability concepts; proceeds through output analysis for terminating simulations, random-number generation, random-variate generation, stochastic-process generation, and input analysis; and concludes with a taste of variance reduction (antithetics). The second half begins with a review of queueing theory, proceeds through a hand simulation and modeling with a simulation package (where the lectures are flipped), spends some time on comparing alternative systems, and concludes with some discussion of verification, validation, accreditation, and project management.
The undergraduate simulation classes I have taught at Cornell and Berkeley have had the same format: two to three weekly lectures and a lab/recitation with exercises, two exams, and a term project. Both schools have semester-long courses. I taught a quarter-length course only once (at Florida), and it didn’t go well. My overarching goal is to challenge my students to recognize and ask well-posed questions and to identify the assumptions behind the answers. They also learn to assess whether each assumption tends to be optimistic or pessimistic, and to find out, through sensitivity analysis, how much it matters.
Structurally, my courses have been a mix of Art, Science, and Technology:
I use our textbook (Smith et al., 2017) as the primary text for our undergraduate and introductory graduate simulation courses. I also use a significant amount of supplementary material from Law (2015), Banks et al. (2009), and several WSC papers and related resources that I’ve collected during my teaching career. Over the last 5 years, I have started using quite a lot of video modules to supplement our lecture and lab sessions. These seem to have made a significant impact – especially as our typical undergraduate class size has grown to 90-110 students. With this many students, I have found that doing any significant modeling in a lecture session is not effective. I’ve found this same lack of effectiveness regardless of whether we’re modeling in Simio, Arena, Excel/@Risk, Python, or Matlab (the programming languages our students learn) – so I’m convinced that this phenomenon is not specific to a language/tool.
In response, my goal has been to move much of the “mechanical” aspects of modeling to the video modules. This lets the students consume (and hopefully learn) the material at their own pace. Ideally, this frees up lecture time to focus on concepts, philosophy, and more general discussion about the topics. My limited experience shows that this strategy works very well if the students use the video modules before exam/assignment time.
I am generally satisfied with the content of the available material, but it would be nice to have a single book covering all of it. As an instructor, I’m looking for that single book with sufficient coverage of both the theoretical aspects and the practical/tool-based content that I use. As a co-author of a textbook, I thought that this would be easy to create, but I was totally wrong – hence I still use the mix of materials.
We have used the aforementioned text of Banks et al. (2009) for over 30 years, but the book has aged, with no upcoming revisions in sight. There are texts that combine basic coverage of theoretical concepts with languages/packages such as Arena and Simio, but in my humble opinion those texts fall a bit short on the coverage of theoretical topics. Currently, in the undergraduate course I use the text by Smith et al. (2017) and a plethora of slide presentations that are available from a Georgia Tech portal. The Master’s course is based on the text of Law (2015) and the Simio text above. In the last few years, I have also made extensive use of the workbook by Joines and Roberts (2015).
An issue with texts using Simio as the modeling tool has been the frequent software updates with enhanced object definitions and properties. I should point out that this frequency has decreased during the last year as Simio has matured substantially. Despite these changes, both Simio-based texts are extremely useful and offer proper coverage of the main modeling concepts and software components. The availability of the instructional videos by Jeff Smith really helps. In the past, my life as an instructor was more stable, as I taught GPSS, Arena, and AutoMod, all of which were very mature.
I primarily use a coursepack that I wrote, with some material adapted from notes shared by various luminaries, particularly Peter Glynn and Barry Nelson. I also make Law’s “Simulation Modeling and Analysis” recommended (but not required) reading, and I place many other standard books on reserve at the library; these are, by the way, rarely used. I also use a set of labs that have been developed over time. They are a mix of written classroom problems and computing problems.
Students are expected to enter this class having taken sophomore-level probability and statistics and a second course in programming. For students taking the class as undergraduate Operations Research majors at Cornell, this background is somewhat recent. For the other students, including Master of Engineering students with undergraduate degrees from other universities, the background varies widely. Those with weaker probability/statistics backgrounds struggle in the first month of the course. It would be extremely helpful to have a set of, say, 100 multiple-choice questions that students could take to self-assess their preparation for the course, and, for those who lack that background, a tutorial of some form to help them attain it. This tutorial would be closely related to undergraduate textbooks in probability and statistics, although it would likely be fairly selective. At present, I simply point students to a book and suggest a set of chapters to absorb. This solution does not seem ideal.
The textbook by Choi and Kang (2013) has been a godsend for my undergraduate course. I cover only the first section, since this book is detailed and dense, but it is very clearly written at an accessible mathematical level. I can assign reading and exercises from this book without teaching “from it.” With this text, it is as if my students have already taken a course in simulation. Their questions and discussions are at a qualitatively deeper level. I also reference sections in Law’s classic text (Law, 2015), which I use in my graduate course. Again, I don’t have to lecture from Law’s text, and can teach as if my grad students have already had a simulation course as a prerequisite. I have never “followed” a textbook, but with a good one I can almost squeeze two courses’ worth of material into one.
I use Simio for the discrete-event simulation portion of the course and Excel, Python, and Matlab for the Monte Carlo portion. Our undergraduate students take Matlab- and Python-based programming courses prior to taking simulation. We teach the Python course in the ISE department and actually cover the programmatic aspects of Monte Carlo in that class – this lets us hit the ground running when they get to simulation. This Fall, I’m planning to incorporate @Risk as part of the Monte Carlo and Input Analysis portions of the class.
Several years ago we also did event-oriented simulation using C and/or Java based on the exposition in Law (2015), but I gave that exercise up as class sizes grew. I continue to think that it was a great learning tool that enhanced students’ understanding of the discrete-event simulation mechanism, but there was simply too much overhead to do the programming parts with large classes.
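For readers unfamiliar with the exercise, the following is a minimal Python sketch (not the C/Java course code) of the event-scheduling mechanism for a single-server queue; the arrival and service rates and the run length are arbitrary.

```python
# Minimal event-scheduling sketch of an M/M/1 queue, in the spirit of the
# C/Java exercises described above; not the course code itself.
import heapq, random

random.seed(1)
ARRIVAL_RATE, SERVICE_RATE, HORIZON = 0.9, 1.0, 1000.0

clock, queue_len, server_busy, completed = 0.0, 0, False, 0
fel = []  # future event list: (event_time, event_type)
heapq.heappush(fel, (random.expovariate(ARRIVAL_RATE), "arrival"))

while fel:
    clock, etype = heapq.heappop(fel)  # advance clock to next event
    if clock > HORIZON:
        break
    if etype == "arrival":
        # Schedule the next arrival (stationary Poisson arrivals).
        heapq.heappush(fel, (clock + random.expovariate(ARRIVAL_RATE), "arrival"))
        if server_busy:
            queue_len += 1
        else:
            server_busy = True
            heapq.heappush(fel, (clock + random.expovariate(SERVICE_RATE), "departure"))
    else:  # departure
        completed += 1
        if queue_len > 0:
            queue_len -= 1
            heapq.heappush(fel, (clock + random.expovariate(SERVICE_RATE), "departure"))
        else:
            server_busy = False

print(f"completed {completed} customers by time {clock:.1f}")
```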
I have been using Simio for the last 6+ years. I have also used ExpertFit by Averill Law and Associates for input data analysis during the past 20+ years. We do require knowledge of a scripting language, such as Matlab or Python. Although my students take required courses for these languages, I often discover that their training is inadequate.
During the last two years, I have also adopted the SIPmath Modeler Tools for Excel from www.probabilitymanagement.org to teach estimation issues related to risk and error in simulation experiments. I use these tools at the outset of all my simulation courses, and have found them to be very useful for spreadsheet-based Monte Carlo simulation experiments. Although this fundamental topic is addressed in Simio with the (S)MORE plot proposed by Barry Nelson, the global footprint of Excel and the ability to conduct experiments and observe the impact of randomness on the estimates of risk and error make these tools invaluable. For systems that can be modeled using spreadsheets, these tools are also useful for teaching estimation of steady-state means and quantiles based on independent replications.
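As a rough illustration of the kind of experiment these tools enable (one replication per spreadsheet row, with risk and error reported together), here is a hypothetical Python analogue; the profit model and all numbers are invented for illustration.

```python
# Python analogue of a "fill-down" spreadsheet Monte Carlo experiment:
# each replication occupies one "row," and we report both risk (a
# probability) and error (a confidence-interval half-width). The profit
# model and all numbers are invented.
import numpy as np

rng = np.random.default_rng(7)
N_REPS = 1000

# One replication per row: profit under uncertain demand.
demand = rng.normal(100, 20, N_REPS)           # hypothetical demand
profit = 8.0 * np.minimum(demand, 110) - 500   # sell up to 110 units at $8

mean = profit.mean()
half_width = 1.96 * profit.std(ddof=1) / np.sqrt(N_REPS)  # ~95% CI on the mean
risk = (profit < 0).mean()                      # estimated P(loss)

print(f"mean profit: {mean:.1f} +/- {half_width:.1f} (95% CI)")
print(f"estimated risk of loss: {risk:.3f}")
```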
As noted above, students are expected to have had a second course in programming. Such a background is more than is really needed, because any programming is restricted to very small homework problems that can usually be completed in a few lines. One of the reasons we require a second course in programming is simply to be able to rely on an algorithmic way of thinking that is essentially a second pillar to support learning and understanding simulation.
Students learn the basics of Monte Carlo (model logic, multiple replications, confidence intervals) through “raw” spreadsheet modeling, placing a single replication in a single row and “filling down” to obtain multiple replications. Only after those ideas are mastered do we turn to the @Risk spreadsheet plug-in. In the homework, I may assign a simple programming task, such as coding the thinning algorithm for generating nonhomogeneous Poisson processes on the line (see the sketch below). In the second half of the course, on discrete-event simulation, we use Simio. The course includes 5 lectures in which we learn the basics of Simio modeling. Those lectures are flipped and held in a computer lab rather than a lecture hall. For each lecture, I require students to come to class having already watched a module from Jeff Smith’s excellent series, and I then give them a modeling exercise to work on. Through this period we also continue the recitations (labs), so the students are essentially getting a “double dose” of labs in place of the usual lecture-plus-lab format.
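For reference, here is a minimal Python sketch of the thinning algorithm mentioned above; the rate function and its bound are hypothetical examples, not an assignment solution.

```python
# Minimal sketch of the thinning algorithm for generating a nonhomogeneous
# Poisson process (NHPP) on [0, T]; the rate function lam(t) is made up.
import math, random

def lam(t):
    # Hypothetical time-varying arrival rate with a midday peak.
    return 2.0 + 1.5 * math.sin(math.pi * t / 12.0)

def nhpp_thinning(T, lam, lam_max):
    """Return arrival times of an NHPP with rate lam(t) on [0, T]."""
    times, t = [], 0.0
    while True:
        t += random.expovariate(lam_max)        # candidate from rate-lam_max HPP
        if t > T:
            return times
        if random.random() <= lam(t) / lam_max:  # accept with prob lam(t)/lam_max
            times.append(t)

random.seed(1)
arrivals = nhpp_thinning(T=12.0, lam=lam, lam_max=3.5)  # 3.5 bounds lam(t) here
print(len(arrivals), arrivals[:3])
```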
As mentioned earlier, I show my students 3 tools from the perspectives of 3 different worldviews so they can see similarities, not just differences. The software I use has changed over the years. I always start with Sigma, which is minimalist, robust, and stable (I started developing it 3 decades ago). It is designed for teaching the fundamentals of all three worldviews, and students can learn everything about event graphs in less than 30 seconds, which they like. I am very proud of RTMS for very large and complex activity-interaction models (I co-founded Bio-G a decade ago, but did none of the development). I have moved among the many commercial transient-entity flow languages over the years, but lately have been happy using Simio. However, I recommend switching periodically just to keep current – or whenever there is WSC buzz about some radical innovation (which is rarely true). These languages are all pretty similar for teaching purposes, and the advanced details are perishable knowledge – they will change before students are in a position to use them professionally. I recommend that everyone attend the vendor short courses before using any commercial software on the job, even if I have just finished teaching it.
Simulation requires skills in the general areas of (1) probability and statistics; (2) programming; and (3) modeling. Early in my teaching career, I assumed that students gained the requisite knowledge in each of these areas through the prerequisite courses (two semesters of probability and statistics, two semesters of programming, and two semesters of OR, for our students) and that I would just tie these things together under the umbrella of simulation. What became apparent to me over the first few years (most unfortunate that it took me this long!) was that students required significant reinforcement/review of “prerequisite material” before they could productively apply these general topics/tools to simulation. As a result, my classes now cover approximately 1/3 less “new” simulation-related material than they did early in my career. From a teaching philosophy perspective, this simply means that I spend much time and effort on the integration of the three basic skills, and this often requires going back to basic material that students have already seen in prerequisite courses so that we can view these as an integrated “whole” in the form of simulation.
As mentioned previously, I have not found it especially effective to lead students through modeling/programming exercises in a “follow-me” fashion in a computer lab. Students’ skills and comfort levels with computer applications/programming vary drastically (and often bear little relation to their GPAs). As such, regardless of the pace I set, one third to one half of the class will be either bored or lost in this environment. Instead, I have found it more effective to combine focused lectures with video modules on the same topics. As an example, I use this method for the following three fairly unrelated topics:
For each of these, I present the material in a lecture and tell the students: “put your pencils down, shut your notebook computers, and just watch – there is a video module of me doing the exact same development that you can view and re-view at your own pace later.” The idea is that they can focus on the concepts and the big picture first, knowing that they won’t lose access to the details they will need for exams and/or assignments. I currently have 8-10 similar lecture+video combinations that I use through the semester (in addition to several video-based lab assignments). While I have not done a controlled experiment, anecdotal evidence suggests that my students perform significantly better when I use this method than when they try to take detailed notes and/or follow me on the computer. I continue to develop these modules, and all are freely available at http://jsmith.co/node/26 (note that I was surprised, and continue to be a little embarrassed, that some of these modules are prominently mentioned by my fellow panelists in this paper – that was definitely not my intent when I initiated the panel!).
Finally, I have found that semester projects are very important and can definitely crystallize all of the simulation-related topics that the students learn over the semester. I have tried several approaches, from using the annual Arena and Simio student competitions, to using case studies developed from consulting projects, to letting students choose their own project topics, and all have proven valuable (although students often find choosing their own projects more difficult than they anticipate). As our class sizes have grown, I’ve switched from live “final presentations” to video presentations uploaded to YouTube. While this is a different experience than presenting in front of a live audience of their peers, I have not found a practical way to have live, in-class presentations once there are more than 10-15 project groups in a given class.
To explain my philosophy, I have to discuss my background and training in simulation. I took a graduate course from George Fishman in 1984. The class required building discrete-event models using Simscript II.5, and the only model we built was the famous African Port model that still appears in the text of Law (2015). We built two variants: the first was based on the event-scheduling mechanism, and the second used the process interaction paradigm. To understand the differences, we had to report compilation and linkage times on the mainframe computer. The fact that we used the same language made us appreciate the fundamental differences between the two modeling paradigms. We analyzed the simulation output using subroutines from Fishman’s (1978) text: the first used the regenerative approach, while the second used a batch-means algorithm. George graded the handwritten reports very carefully and took points off for failure to prioritize the list of findings and recommendations (of course, he could do this because the class consisted of only 12 students). George emphasized his preference for a long run over multiple short replications when estimating steady-state means and quantiles. Overall, his teaching and academic advice shaped my philosophy.
When I arrived at Georgia Tech, I found that Jerry Banks, Dave Goldsman, and Jim Swain were using GPSS/PC for undergraduate instruction and the Simlib package from the text of Law and Kelton (1982) for graduate instruction. I recall studying Thom Schriber’s (1974) “red book” and that GPSS/PC allowed “animation” by highlighting active GPSS blocks; this could be done using portable PCs and three-beam projectors, and I can attest that even such cursory animation allowed the students to appreciate entity movement. On the other hand, the Simlib package was based on the event-scheduling mechanism and was written in Fortran (actually, one of my students rewrote the package in C before the C code appeared in the third edition of the text). We switched to GPSS/H in the early 1990s (without using Proof Animation) and used it until the late 1990s, when we adopted Arena. I used Arena in both undergraduate and MS courses until about 2008, when I switched to AutoMod for about two years. I found AutoMod to be very appealing because of the embedded simulation language, its excellent graphical interface, its ability to model warehousing systems with an appropriate amount of detail, and the existence of an introductory text by Jerry Banks, which is still included in the installation. In January 2011, I made my last switch, to Simio, and I have remained “loyal” to it ever since.
My instructional mix between modeling and theory has remained quite steady throughout my career, with two changes in the last three years: (a) early use of spreadsheets to illustrate the concepts of error and risk. I have uploaded Barry Nelson’s paper from the 2008 WSC Proceedings to my portal site, and I repeatedly test students on their ability to interpret histograms, empirical distribution functions, and confidence intervals for means and quantiles. (b) Increased emphasis on input data analysis and distribution fitting, and the effects of “misfitting” on the quality of the simulation output. I find that striking the perfect balance between simulation modeling and analysis is a real challenge! The ability to conduct a solid statistical analysis of simulation data is a powerful tool for operations research analysts and industrial engineers; in my humble opinion, this is what separates them from computer scientists, who can deliver excellent codes/models but have rather limited statistical capabilities.
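As an illustration of the estimation concepts being tested (not Nelson’s plots or my course spreadsheets), the following sketch computes point estimates and confidence intervals for a mean and a 0.9-quantile from independent replications; the synthetic lognormal output data are invented.

```python
# Illustration: point estimates and confidence intervals for a mean and a
# 0.9-quantile from independent replications, using the usual t-interval
# for the mean and a distribution-free order-statistic interval for the
# quantile. The replication outputs are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = rng.lognormal(mean=1.0, sigma=0.5, size=200)  # synthetic replication outputs
n, p, alpha = len(y), 0.9, 0.05

# t confidence interval for the mean.
t = stats.t.ppf(1 - alpha / 2, n - 1)
hw = t * y.std(ddof=1) / np.sqrt(n)
print(f"mean: {y.mean():.3f} +/- {hw:.3f}")

# Distribution-free CI for the p-quantile via order statistics
# (normal approximation to the binomial for the index bounds).
y_sorted = np.sort(y)
z = stats.norm.ppf(1 - alpha / 2)
lo = int(np.floor(n * p - z * np.sqrt(n * p * (1 - p)))) - 1  # 0-based index
hi = int(np.ceil(n * p + z * np.sqrt(n * p * (1 - p))))       # 0-based index
print(f"{p}-quantile: {np.quantile(y, p):.3f}, "
      f"CI approx [{y_sorted[max(lo, 0)]:.3f}, {y_sorted[min(hi, n - 1)]:.3f}]")
```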
I find teaching software without a lab session a great challenge. To overcome the time constraints of a single class session, I usually upload an intermediate model to the class portal, ask the students to study the relevant portion of the text (preferably ahead of class), and then proceed with building the remainder of the full model in class. The aforementioned challenge is partially due to the extensive ownership of Apple laptops by students; this typically necessitates logging in to a virtual desktop and using an Apple keyboard. I understand that this is not equivalent to lab instruction, but it is the best I can do with our large class sizes, ranging roughly from 40 to 80 students per section.
I would like to finish with the students’ recent habit of studying only electronic material. I urge them to purchase hard copies of the texts; most not only ignore my advice, but don’t even save the electronic files locally. I guess things are not going to get better in the foreseeable future, and we all have to adapt.
These thoughts are perhaps best presented as bullets:
I used to teach that a “model” was a noun, a tool you build and use for analysis. Mainly through learning-by-doing in my consulting work, I now teach “to model” as a verb, an active verb with a direct object. We model every aspect of our lives to structure our thinking and communicate ideas. Therefore, my course outline is no longer linear, following the textbook “steps in a simulation study.” Now my course outline, like my professional work, is composed of the concurrently interacting cycles illustrated in Fig. 1. (Note that there is a “Start” but no “Finish.”) I find this works well in consulting. It is a joy to watch my students mature from simple linear thinkers into creative artists.
Most Industrial Engineering and many quantitative business programs offer courses in simulation, and teaching methods for these courses are varied. This paper (and the conference panel) focused on the experiences, methods, and philosophies of four experienced instructors. While we don’t claim to have perfect answers to all questions or optimal methodologies, we hope that current and future instructors will find some of the information useful.