INTRODUCTION TO OPERATIONS RESEARCH
INTRODUCTION
Although it is a distinct discipline in its own right, Operations Research (O.R.) has also become an integral part of the Industrial Engineering (I.E.) profession. This is hardly surprising when one considers that the two share many of the same objectives, techniques and application areas. O.R. as a formal subject is about fifty years old, and its origins may be traced to the latter half of World War II. Most of the O.R. techniques that are commonly used today were developed over approximately the first twenty years following its inception. Over the next thirty or so years the pace of development of fundamentally new O.R. methodologies slowed somewhat. However, there has been a rapid expansion in (1) the breadth of problem areas to which O.R. has been applied, and (2) the magnitudes of the problems that can be addressed using O.R. methodologies. Today, operations research is a mature, well-developed field with a sophisticated array of techniques that are used routinely to solve problems in a wide range of application areas.
This chapter provides an overview of O.R. from the perspective of an Industrial Engineer. A brief review of its historical origins is given first, followed by a detailed discussion of the basic philosophy behind O.R. and the so-called “O.R. approach.” The chapter concludes with several examples of successful applications to typical problems that might be faced by an Industrial Engineer. Broadly speaking, an O.R. project comprises three steps: (1) building a model, (2) solving it, and (3) implementing the results. The emphasis of this chapter is on the first and third steps. The second step typically involves specific methodologies or techniques, which can be quite sophisticated and require significant mathematical development; several important methods are overviewed elsewhere in this handbook. The reader interested in learning more about these topics is referred to the many excellent texts on O.R. available today, listed under "Further Reading" at the end of this chapter, e.g., Hillier and Lieberman (1995), Taha (1997) or Winston (1994).
A HISTORICAL PERSPECTIVE
While there is no clear date that marks the birth of O.R., it is generally accepted that the field originated in England during World War II. The impetus for its origin was the development of radar defense systems for the Royal Air Force, and the first recorded use of the term Operations Research is attributed to a British Air Ministry official named A. P. Rowe, who constituted teams to do “operational researches” on the communication system and the control room at a British radar station. The studies had to do with improving the operational efficiency of systems (an objective that is still one of the cornerstones of modern O.R.). This new approach of picking an “operational” system and conducting “research” on how to make it run more efficiently soon started to expand into other arenas of the war. Perhaps the most famous of the groups involved in this effort was the one led by the physicist P. M. S. Blackett, which included physiologists, mathematicians, astrophysicists, and even a surveyor. This multifunctional team focus of an operations research project group is one that has carried forward to this day. Blackett’s biggest contribution was in convincing the authorities of the need for a scientific approach to the management of complex operations, and indeed he is regarded in many circles as the original operations research analyst.
O.R. made its way to the United States a few years after it originated in England. Its first presence in the U.S. was through the U.S. Navy’s Mine Warfare Operations Research Group; this eventually expanded into the Antisubmarine Warfare Operations Research Group led by Philip Morse, which later became known simply as the Operations Research Group. Like Blackett in Britain, Morse is widely regarded as the “father” of O.R. in the United States, and many of the distinguished scientists and mathematicians that he led went on after the end of the war to become the pioneers of O.R. in the United States.
In the years immediately following the end of World War II, O.R. grew rapidly as many scientists realized that the principles they had applied to solve problems for the military were equally applicable to many problems in the civilian sector. These ranged from short-term problems such as scheduling and inventory control to long-term problems such as strategic planning and resource allocation. George Dantzig, who in 1947 developed the simplex algorithm for Linear Programming (LP), provided the single most important impetus for this growth. To this day, LP remains one of the most widely used of all O.R. techniques, and despite the relatively recent development of interior point methods as an alternative approach, the simplex algorithm (with numerous computational refinements) continues to be widely used. The second major impetus for the growth of O.R. was the rapid development of digital computers over the next three decades. The simplex method was implemented on a computer for the first time in 1950, and by 1960 such implementations could solve problems with about 1000 constraints. Today, implementations on powerful workstations can routinely solve problems with hundreds of thousands of variables and constraints. Moreover, the large volumes of data required for such problems can be stored and manipulated very efficiently.
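To make the scale of these computations concrete, the following minimal sketch solves a tiny linear program with SciPy's linprog routine. This is an illustrative example only: the objective and constraint coefficients are invented, and modern solvers apply the same ideas to problems many orders of magnitude larger.

# A minimal linear-programming sketch using scipy.optimize.linprog.
# All coefficients are invented for illustration.
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
c = [-3.0, -2.0]
A_ub = [[1.0, 1.0],
        [1.0, 3.0]]
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)  # optimal point (4, 0) and objective value 12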
Once the simplex method had been invented and used, the development of other methods followed at a rapid pace. The next twenty years witnessed the development of most of the O.R. techniques that are in use today, including nonlinear, integer and dynamic programming, computer simulation, PERT/CPM, queuing theory, inventory models, game theory, and sequencing and scheduling algorithms. The scientists who developed these methods came from many fields, most notably mathematics, engineering and economics. It is interesting that the theoretical bases for many of these techniques had been known for years; e.g., the EOQ formula used with many inventory models was developed in 1915 by Harris, and many of the queuing formulae were developed by Erlang in 1917. However, the period from 1950 to 1970 was when these were formally unified into what is considered the standard toolkit for an operations research analyst and successfully applied to problems of industrial significance. The following section describes the approach taken by operations research in order to solve problems and explores how all of these methodologies fit into the O.R. framework.
What is Econometrics
Introduction
1.1   What is Econometrics?
The term “econometrics” is believed to have been crafted by Ragnar Frisch (1895-1973) of Norway, one of the three principal founders of the Econometric Society, first editor of the journal Econometrica, and co-winner of the first Nobel Memorial Prize in Economic Sciences in 1969. It is therefore fitting that we turn to Frisch’s own words in the introduction to the first issue of Econometrica for an explanation of the discipline.
A word of explanation regarding the term econometrics may be in order. Its definition is implied in the statement of the scope of the [Econometric] Society, in Section I of the Constitution, which reads: “The Econometric Society is an international society for the advancement of economic theory in its relation to statistics and mathematics.... Its main object shall be to promote studies that aim at a unification of the theoretical-quantitative and the empirical-quantitative approach to economic problems....”

But there are several aspects of the quantitative approach to economics, and no single one of these aspects, taken by itself, should be confounded with econometrics. Thus, econometrics is by no means the same as economic statistics. Nor is it identical with what we call general economic theory, although a considerable portion of this theory has a definitely quantitative character. Nor should econometrics be taken as synonymous with the application of mathematics to economics. Experience has shown that each of these three view-points, that of statistics, economic theory, and mathematics, is a necessary, but not by itself a sufficient, condition for a real understanding of the quantitative relations in modern economic life. It is the unification of all three that is powerful. And it is this unification that constitutes econometrics.

Ragnar Frisch, Econometrica, (1933), 1, pp. 1-2.
This definition remains valid today, although some terms have evolved somewhat in their usage. Today, we would say that econometrics is the unified study of economic models, mathematical statistics, and economic data.
Within the field of econometrics there are sub-divisions and specializations. Econometric theory concerns the development of tools and methods, and the study of the properties of econometric methods. Applied econometrics is a term describing the development of quantitative economic models and the application of econometric methods to these models using economic data.
1.2   The Probability Approach to Econometrics
The unifying methodology of modern econometrics was articulated by Trygve Haavelmo (1911-1999) of Norway, winner of the 1989 Nobel Memorial Prize in Economic Sciences, in his seminal paper “The probability approach in econometrics,” Econometrica (1944). Haavelmo argued that quantitative economic models must necessarily be probability models (by which today we would mean stochastic). Deterministic models are blatantly inconsistent with observed economic quantities, and it is incoherent to apply deterministic models to non-deterministic data. Economic models should be explicitly designed to incorporate randomness; stochastic errors should not be simply added to deterministic models to make them random. Once we acknowledge that an economic model is a probability model, it follows naturally that an appropriate tool to quantify, estimate, and conduct inferences about the economy is the powerful theory of mathematical statistics. The appropriate method for a quantitative economic analysis follows from the probabilistic construction of the economic model.
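As a hedged illustration of this point (constructed here, not taken from Haavelmo's paper; all parameter values are invented), the sketch below generates data from an explicitly stochastic model, $y_i = \beta x_i + e_i$ with a random error $e_i$, rather than from a deterministic relation.

# A hedged sketch of a probability model: the randomness is built into
# the model itself rather than tacked onto a deterministic relation.
# All parameter values are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100, 1.5
x = rng.uniform(0.0, 10.0, size=n)   # explanatory variable
e = rng.normal(0.0, 1.0, size=n)     # stochastic error, part of the model
y = beta * x + e                     # observed outcomes are random draws

# A deterministic model (e identically 0) could never reproduce the
# scatter of y around beta * x seen in real economic data.
print(np.var(y - beta * x))          # approximately the error variance, 1.0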
Haavelmo’s probability approach was quickly embraced by the economics profession. Today no quantitative work in economics shuns its fundamental vision.
While all economists embrace the probability approach, there has been some evolution in its implementation. The structural approach is the closest to Haavelmo’s original idea. A probabilistic economic model is specified, and the quantitative analysis is performed under the assumption that the economic model is correctly specified. Researchers often describe this as “taking their model seriously.” The structural approach typically leads to likelihood-based analysis, including maximum likelihood and Bayesian estimation.
A criticism of the structural approach is that it is misleading to treat an economic model as correctly specified. Rather, it is more accurate to view a model as a useful abstraction or approximation. In this case, how should we interpret structural econometric analysis? The quasi-structural approach to inference views a structural economic model as an approximation rather than the truth. This theory has led to the concepts of the pseudo-true value (the parameter value defined by the estimation problem), the quasi-likelihood function, quasi-MLE, and quasi-likelihood inference.
Closely related is the semiparametric approach. A probabilistic economic model is partially specified but some features are left unspecified. This approach typically leads to estimation methods such as least-squares and the Generalized Method of Moments. The semiparametric approach dominates contemporary econometrics, and is the main focus of this textbook.
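For instance, least-squares estimation requires only that the conditional mean be specified as linear; the distribution of the error is left unspecified. The sketch below, with simulated data and invented parameter values, is one hedged illustration of this.

# A hedged least-squares sketch: only the linear conditional mean is
# specified; the error distribution is left unspecified (here it is
# deliberately non-normal). All parameter values are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=(n, 2))                        # two regressors
X = np.column_stack([np.ones(n), x])               # add an intercept
beta_true = np.array([0.5, 1.0, -2.0])             # invented coefficients
y = X @ beta_true + rng.standard_t(df=5, size=n)   # heavy-tailed errors

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares estimate
print(beta_hat)                                    # close to beta_true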
Another branch of quantitative structural economics is the calibration approach. Similar to the quasi-structural approach, the calibration approach interprets structural models as approximations and hence inherently false. The difference is that the calibrationist literature rejects mathematical statistics as inappropriate for approximate models, and instead selects parameters by matching model and data moments using non-statistical ad hoc¹ methods.

¹ Ad hoc means “for this purpose,” i.e., a method designed for a specific problem and not based on a generalizable principle.
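As a hedged sketch of what moment matching can look like in the simplest case (all numbers invented), one might choose the parameter so that a model-implied moment equals the corresponding sample moment:

# A hedged moment-matching sketch: pick the parameter so the model-implied
# mean equals the sample mean. All numbers are invented for illustration.
import numpy as np

data = np.array([2.1, 2.9, 3.4, 2.6, 3.0])  # observed data (invented)
sample_mean = data.mean()

# Suppose the model implies E[y] = 2 * theta; matching moments gives:
theta_calibrated = sample_mean / 2.0
print(theta_calibrated)  # no standard error is attached: calibration
                         # deliberately forgoes statistical inference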
1.3   Econometric Terms and Notation
In a typical application, an econometrician has a set of repeated measurements on a set of variables. For example, in a labor application the variables could include weekly earnings, educational attainment, age, and other descriptive characteristics. We call this information the data, dataset, or sample.

We use the term observations to refer to the distinct repeated measurements on the variables. An individual observation often corresponds to a specific economic unit, such as a person, household, corporation, firm, organization, country, state, city or other geographical region. An individual observation could also be a measurement at a point in time, such as quarterly GDP or a daily interest rate.

Economists typically denote variables by the italicized Roman characters y, x, and/or z. The convention in econometrics is to use the character y to denote the variable to be explained, while the characters x and z are used to denote the conditioning (explaining) variables.
Following mathematical convention, real numbers (elements of the real line $\mathbb{R}$) are written using lower case italics such as $y$, and vectors (elements of $\mathbb{R}^k$) by lower case bold italics such as $\boldsymbol{x}$, e.g.

$$\boldsymbol{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}.$$

Upper case bold italics such as $\boldsymbol{X}$ are used for matrices.
We typically denote the number of observations by the natural number $n$, and subscript the variables by the index $i$ to denote the individual observation, e.g. $y_i$, $x_i$ and $z_i$. In some contexts we use indices other than $i$, such as in time-series applications where the index $t$ is common, and in panel studies we typically use the double index $it$ to refer to individual $i$ at time period $t$. The $i$’th observation is the set $(y_i, x_i, z_i)$.
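To see how these indexing conventions map onto data in practice, here is a brief hedged sketch (all numbers invented) storing a small dataset as arrays indexed by observation:

# A hedged sketch of the indexing convention: arrays subscripted by the
# observation index i. All numbers are invented for illustration.
import numpy as np

y = np.array([520.0, 610.0, 480.0])           # y_i: weekly earnings
x = np.array([[12, 34], [16, 41], [12, 28]])  # x_i: (education, age)

i = 1                 # the i'th observation is the set (y_i, x_i)
print(y[i], x[i])     # 610.0 [16 41]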
It is proper mathematical practice to use upper case $X$ for random variables and lower case $x$ for realizations or specific values. This practice is not commonly followed in econometrics because we instead use upper case to denote matrices. Thus the notation $y_i$ will in some places refer to a random variable, and in other places a specific realization. Hopefully there will be no confusion, as the use should be evident from the context.
We typically use Greek letters such as $\beta$, $\theta$ and $\sigma^2$ to denote unknown parameters of an econometric model, and will use boldface, e.g. $\boldsymbol{\beta}$ or $\boldsymbol{\theta}$, when these are vector-valued. Estimates are typically denoted by putting a hat “^”, tilde “~” or bar “-” over the corresponding letter, e.g. $\hat{\beta}$ and $\tilde{\beta}$ are estimates of $\beta$.
The covariance matrix of an econometric estimator will typically be written using the capital boldface $\boldsymbol{V}$, often with a subscript to denote the estimator, e.g. $\boldsymbol{V}_{\hat{\beta}} = \operatorname{var}\left(\sqrt{n}\left(\hat{\beta} - \beta\right)\right)$ as the covariance matrix for $\sqrt{n}\left(\hat{\beta} - \beta\right)$. Hopefully without causing confusion, we will use the notation $\boldsymbol{V}_{\beta} = \operatorname{avar}(\hat{\beta})$ to denote the asymptotic covariance matrix of $\sqrt{n}\left(\hat{\beta} - \beta\right)$ (the variance of the asymptotic distribution). Estimates will be denoted by appending hats or tildes, e.g. $\hat{\boldsymbol{V}}_{\beta}$ is an estimate of $\boldsymbol{V}_{\beta}$.
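As a concrete instance of this notation (a hedged example constructed here, not taken from the text), consider the sample mean $\hat{\mu} = \bar{y}$ as an estimator of $\mu = \mathbb{E}[y_i]$. The central limit theorem gives

$$\sqrt{n}\left(\hat{\mu} - \mu\right) \xrightarrow{d} \mathrm{N}(0, V_{\mu}), \qquad V_{\mu} = \operatorname{var}(y_i),$$

so $V_{\mu} = \operatorname{avar}(\hat{\mu})$, and a natural estimate is $\hat{V}_{\mu} = \frac{1}{n-1} \sum_{i=1}^{n} \left(y_i - \bar{y}\right)^2$.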