E. Woodrow Eckard and Marlene A. Smith
Managerial and Decision Economics, Vol. 33, Issue 7–8, October 2012, pp. 463–473
We provide empirical estimates of the revenue benefits of multi-tier pricing at a major US pop music venue. Our unique sample includes data on the number of tickets sold at every price. Mean revenue gain from multi-tier pricing is estimated to be about $20,000 per show, a 4.2% increase over uniform pricing, although the gains were as high as 21.2% for one performer. We also provide evidence that customer segmentation by income is a likely motive of multi-tier pricing and, for the first time, that the standard assumption of zero marginal cost of additional venue attendees is valid.
Chenlei Leng and Cheng Yong Tang
Journal of the American Statistical Association, Vol. 107, pp. 1187–1200
Matrix-variate observations are frequently encountered in many contemporary statistical problems due to a rising need to organize and analyze data with structured information. In this article, we propose a novel sparse matrix graphical model for these types of statistical problems. By penalizing, respectively, two precision matrices corresponding to the rows and columns, our method yields a sparse matrix graphical model that synthetically characterizes the underlying conditional independence structure. Our model is more parsimonious and practically more interpretable than conventional sparse vector-variate graphical models. Asymptotic analysis shows that our penalized likelihood estimates enjoy better convergence rates than those of the vector-variate graphical model. The finite sample performance of the proposed method is illustrated via extensive simulation studies and analyses of several real datasets.
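The row-and-column precision structure can be illustrated with a small numerical check: for a matrix-variate normal, the covariance of vec(X) is the Kronecker product of a column covariance and a row covariance, so the full pq x pq precision matrix factors into two small precision matrices, which is what makes penalizing them separately so parsimonious. A minimal sketch with arbitrary positive definite matrices (an illustration of the model structure, not of the paper's estimator):

```python
import numpy as np

# For a matrix-variate normal, cov(vec(X)) = Sigma_c (x) Sigma_r, so the
# pq x pq precision matrix of vec(X) factors as Omega_c (x) Omega_r and
# only two small matrices need to be estimated.
rng = np.random.default_rng(0)
p, q = 3, 4
A = rng.normal(size=(p, p))
Sigma_r = A @ A.T + p * np.eye(p)   # row covariance (positive definite)
B = rng.normal(size=(q, q))
Sigma_c = B @ B.T + q * np.eye(q)   # column covariance

Omega = np.linalg.inv(np.kron(Sigma_c, Sigma_r))          # full 12 x 12 precision
Omega_factored = np.kron(np.linalg.inv(Sigma_c), np.linalg.inv(Sigma_r))
print(np.allclose(Omega, Omega_factored))   # → True
```

The identity (A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹ is what lets sparsity in the two small precision matrices describe the conditional independence structure of the whole matrix observation.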
Cheng Yong Tang and Yongsong Qin
Biometrika (2012), pp. 1–7
We explore the use of estimating equations for efficient statistical inference in the presence of missing data. We propose a semi-parametric efficient empirical likelihood approach, and show that the empirical likelihood ratio statistic and its profile counterpart asymptotically follow central chi-square distributions when evaluated at the true parameter. The theoretical properties and practical performance of our approach are demonstrated through numerical simulations and data analysis.
Yingying Fan and Cheng Yong Tang
Journal of the Royal Statistical Society, In Press
Selecting an appropriate tuning parameter is essential in penalized likelihood methods for high-dimensional data analysis. We examine this problem in the setting of penalized likelihood methods for generalized linear models, where the dimensionality of covariates p is allowed to increase exponentially with the sample size n. We propose to select the tuning parameter by optimizing the generalized information criterion (GIC) with an appropriate model complexity penalty. To ensure that we consistently identify the true model, a range for the model complexity penalty is identified in GIC. We find that this model complexity penalty should diverge at the rate of some power of log p depending on the tail probability behavior of the response variables. This reveals that using the AIC or BIC to select the tuning parameter may not be adequate for consistently identifying the true model. Based on our theoretical study, we propose a uniform choice of the model complexity penalty and show that the proposed approach consistently identifies the true model among candidate models with asymptotic probability one. We justify the performance of the proposed procedure by numerical simulations and a gene expression data analysis.
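Mechanically, the selection rule scores each candidate tuning parameter by adding a complexity penalty, proportional to model size, to the fit, with the penalty factor growing with log p rather than staying constant as in AIC or BIC. The sketch below uses a hypothetical penalty constant and made-up fit statistics; it shows only the scoring-and-minimizing mechanics, not the paper's theory:

```python
import math

def gic(neg2_loglik, df, n, p, c=1.0):
    # Complexity penalty growing with log p; the constant c and the
    # log(log n) factor are one concrete (hypothetical) choice, used
    # here only to illustrate a penalty that diverges with log p.
    a_n = c * math.log(math.log(n)) * math.log(p)
    return neg2_loglik + a_n * df

# Made-up fits along a penalization path: (deviance, model size).
n, p = 200, 5000
candidates = {
    0.10: (450.0, 3),   # heavy penalty: small model, poor fit
    0.05: (400.0, 6),
    0.01: (396.0, 40),  # light penalty: large model, overfits
}
best = min(candidates, key=lambda lam: gic(*candidates[lam], n, p))
print(best)   # → 0.05
```

With a fixed-constant penalty such as AIC's, the 40-variable model's tiny fit advantage could win; the log p scaling is what rules out such overfitted models as p grows.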
Haibo Wang, Gary Kochenberger and Fred Glover
Computers & Operations Research, Vol. 39, Issue 1, pp. 3–11
The quadratic knapsack problem (QKP) has been the subject of considerable research in recent years. Despite notable advances in special purpose solution methodologies for QKP, this problem class remains very difficult to solve. With the exception of special cases, the state-of-the-art is limited to addressing problems of a few hundred variables and a single knapsack constraint.
In this paper we provide a comparison of quadratic and linear representations of QKP based on test problems with multiple knapsack constraints and up to eight hundred variables. For the linear representations, three standard linearizations are investigated. Both the quadratic and linear models are solved by standard branch-and-cut optimizers available via CPLEX. Our results show that the linear models perform well on small problem instances, but for larger problems the quadratic model outperforms the tested linear models by a wide margin in both solution quality and solution time. Moreover, our results demonstrate that QKP instances larger than those previously addressed in the literature, as well as instances with multiple constraints, can be successfully and efficiently solved by branch-and-cut methodologies.
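For readers unfamiliar with the model being compared: a QKP maximizes a quadratic objective over binary variables subject to knapsack constraints. A toy instance with made-up coefficients, solved by enumeration rather than branch-and-cut, shows the structure:

```python
from itertools import product

# Toy QKP: maximize x'Qx subject to w.x <= W over binary x.
Q = [[4, 2, 0],
     [2, 3, 1],
     [0, 1, 5]]   # symmetric profits; diagonal entries are linear profits
w = [3, 4, 2]     # item weights
W = 6             # knapsack capacity

def value(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(3) for j in range(3))

feasible = (x for x in product((0, 1), repeat=3)
            if sum(wi * xi for wi, xi in zip(w, x)) <= W)
best = max(feasible, key=value)
print(best, value(best))   # → (0, 1, 1) 10
```

The off-diagonal profits (here, the pairwise bonus between items 2 and 3) are what the three standard linearizations must encode with auxiliary variables, and what the quadratic model handles directly.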
Fred Glover, Jin-Kao Hao, and Gary Kochenberger
International Journal of Metaheuristics, Vol. 1, No. 4, pp. 317–354
The class of problems known as quadratic zero-one (binary) unconstrained optimisation has provided access to a vast array of combinatorial optimisation problems, allowing them to be expressed within the setting of a single unifying model. A gap exists, however, in addressing polynomial problems of degree greater than 2. To bridge this gap, we provide methods for efficiently executing core search processes for the general polynomial unconstrained binary (PUB) optimisation problem. A variety of search algorithms for quadratic optimisation can take advantage of our methods to be transformed directly into algorithms for problems where the objective functions involve arbitrary polynomials. Part 1 of this paper (Glover et al., 2011) provided fundamental results for carrying out the transformations and described coding and decoding procedures relevant for efficiently handling sparse problems, where many coefficients are 0, as typically arise in practical applications. In the present part 2 paper, we provide special algorithms and data structures for taking advantage of the basic results of part 1. We also disclose how our designs can be used to enhance existing quadratic optimisation algorithms.
Marlene A Smith
The American Statistician, Vol. 65, Issue 3, pp. 190–197
This article describes common yet subtle errors that students make in self-designed multiple regression projects, based on experiences in a graduate business statistics course. Examples of common errors include estimating algebraic identities, overlooking suppression, and misinterpreting regression coefficients. Advice is given to instructors about helping students anticipate and avoid these common errors; recommended tactics include extensive written guidelines supplemented with in-class active-learning exercises. …
Fred Glover, Jin-Kao Hao, and Gary Kochenberger
International Journal of Metaheuristics, Vol. 1, No. 3, pp. 232–256
The class of problems known as quadratic zero-one (binary) unconstrained optimisation has provided access to a vast array of combinatorial optimisation problems, allowing them to be expressed within the setting of a single unifying model. A gap exists, however, in addressing polynomial problems of degree greater than 2. To bridge this gap, we provide methods for efficiently executing core search processes for optimisation problems in the general polynomial unconstrained binary (PUB) domain. A variety of search algorithms for quadratic optimisation can take advantage of our methods to be transformed directly into algorithms for problems where the objective functions involve arbitrary polynomials. In this Part 1 paper, we give fundamental results for carrying out the transformations. We also describe coding and decoding procedures that are relevant for efficiently handling sparse problems, where many coefficients are 0, as typically arise in practical applications. In a sequel to this paper, Part 2, we provide special algorithms and data structures for taking advantage of the basic results of Part 1. We also disclose how our designs can be used to enhance existing quadratic optimisation algorithms.
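To make the PUB setting concrete: a sparse polynomial over binary variables can be stored as a list of (coefficient, variable-index) terms, and the effect of flipping one variable follows from only the terms containing it, which is the kind of incremental evaluation that efficient search processes rely on. A minimal sketch with made-up terms (an illustration of the problem class, not of the paper's transformations):

```python
# Sparse PUB objective: a sum of terms c * x_j1 * ... * x_jk, stored as
# (coefficient, variable-index tuple) pairs; the cubic term makes the
# polynomial degree greater than 2.
terms = [(3, (0,)), (-2, (0, 1)), (4, (1, 2, 3)), (1, (2,))]

def evaluate(x):
    # A product of binary variables is 1 exactly when all of them are 1.
    return sum(c for c, t in terms if all(x[j] == 1 for j in t))

def flip_delta(x, i):
    # Change from flipping x[i]: only terms containing i matter, and each
    # contributes iff its remaining variables are all 1.
    d = sum(c for c, t in terms
            if i in t and all(x[j] == 1 for j in t if j != i))
    return (1 - 2 * x[i]) * d

x = [1, 1, 0, 1]
y = x[:]
y[2] = 1 - y[2]
print(flip_delta(x, 2), evaluate(y) - evaluate(x))   # → 5 5
```

Because most coefficients are 0 in practical instances, keeping only the nonzero terms, and, as in the paper, indexing which terms each variable appears in, is what makes such move evaluations fast.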
Sarah Kovoor-Misra and Marlene A. Smith
Leadership & Organization Development Journal, Vol. 32, Issue 6, pp. 584–604
Purpose – This paper aims to investigate the extent to which individuals’ identification with a changed organizational artifact is associated with their cognitive, behavioral, and affective support for change in the later stages of a change effort, and the role of contextual variables in mediating these relationships.
Design/methodology/approach – Primarily quantitative with some qualitative data from an online organization that had acquired the non-personnel assets of its competitor.
Findings – The paper finds that: artifacts can be an important part of employees’ perceptions of their organizations; artifact identification is associated with cognitive and behavioral support in the later stages of a change effort; a positive perception of the change mediates between identification and cognitive and behavioral support, and also facilitates affective support; emotional exhaustion is a marginal mediator; and trust towards top managers does not play a mediating role.
Research limitations/implications – Future research could study the factors that influence artifact identification. Studies of support for change must address its various dimensions to more accurately assess support.
Practical implications – During the later stages of change, managers can foster artifact identification, highlight the positives, and reduce emotional exhaustion to ensure support.
Originality/value – This study is one of the first to examine the relationship between artifact identification and support for change in the later stages of a change effort, and the mediating role of contextual factors. In addition, it investigates the multi-dimensional aspects of support for change, an area that has received limited empirical research attention.
Mark Lewis and Gary Kochenberger
International Journal of Mathematical Modelling and Numerical Optimisation, Vol. 1, Issue 4, pp. 259–273
This paper presents a simple, two-stage method, implemented in the commonly available Excel spreadsheet program, for quickly finding high quality solutions to larger, more difficult instances of the capacitated task allocation problem (CTAP) than have been previously reported. In Stage 1, an innovative modification of the approximation method of Griffith and Stewart is used to reformulate CTAP and find a near-integer solution. Based on the partial solution from Stage 1, the remaining tasks are allocated in a greatly reduced quadratic CTAP solved in Stage 2. Our results show that this approach yields very good solutions relatively quickly, even for very large problems (1,000 binary variables and 49,500 quadratic terms in the objective function). Ours is the first paper to modify the continuous approximation method of Griffith and Stewart to solve 0/1 problems and the first to demonstrate the successful use of an Excel-based approach for solving very large CTAP instances.
Haibo Wang, Gary Kochenberger, and Yaquan Xu
International Journal of Mathematical Modelling and Numerical Optimisation, Vol. 1, No. 4, pp. 344–351
In this note we report our success in applying CPLEX’s mixed integer quadratic programming (MIQP) solver to a set of standard quadratic knapsack test problems. The results we give show that this general purpose, commercial code outperformed a leading special purpose method reported in the literature by a wide margin.
Steven Walczak, Deborah Kellogg and Dawn Gregg
International Journal of Information Systems in the Service Sector, Vol. 2, Issue 4, October 2010, pp. 39–56
Today’s purchase processes often require complex decision-making and consumers frequently use Web information sources to support these decisions. Increasing amounts of information, however, can make finding appropriate information problematic. This information overload, coupled with decision complexity, can increase the time to make a decision and reduce decision quality. This creates a need for tools that support these decision-making processes. Online tools that bring together data and partial solutions are one option to improve decision-making in complex, multi-criteria environments. An experiment using a prototype mashup application indicates that these types of applications may significantly decrease the time spent and improve the overall quality of complex retail decisions.
Siddhartha S. Syam and Murray J. Côté
Omega, Vol. 38, Issue 3/4, pp. 157–166
In this paper, we develop and solve a model for the location and allocation of specialized health care services such as traumatic brain injury (TBI) treatment. The model is based on and applied to one of the Department of Veterans Affairs' integrated service networks. A cost minimization model with service proportion requirements is solved using simulated annealing. Large instances of the model with 100 candidate medical center locations and 15 open treatment units are solved in about 1,000 seconds. In order to test the real-world applicability of our model, an extensive managerial experiment is conducted using data derived from our health care setting. In this experiment, we investigate the effects of three critical factors, (1) the degree of centralization of services, (2) the role of patient retention as a function of distance to a treatment unit, and (3) the geographic density of the patient population, with respect to the important trade-off between the cost of providing service and the need to provide it. Our analysis shows that all three factors of the experiment are both relevant and useful to decision-makers when selecting locations for their services.
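The simulated annealing approach named above follows a standard accept-or-reject template: improving moves are always taken, worsening moves are taken with a probability that shrinks as a temperature parameter cools. The skeleton below is generic, with a toy stand-in objective; the paper's model-specific moves, service proportion requirements, and cost structure are not reproduced here:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, T0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    # Generic SA skeleton: accept improving moves always, worsening moves
    # with probability exp(-delta/T), and cool T geometrically.
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    T = T0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy < c or rng.random() < math.exp(-(cy - c) / T):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        T *= cooling
    return best, best_c

# Toy stand-in: choose which of 5 candidate sites to open; the cost is
# the mismatch with a known best configuration (a placeholder for the
# real fixed-cost plus assignment objective).
target = [0, 1, 0, 1, 0]
def cost(x):
    return sum(a != b for a, b in zip(x, target))

def flip_one(x, rng):
    k = rng.randrange(len(x))
    y = x[:]
    y[k] = 1 - y[k]
    return y

best, best_c = simulated_annealing(cost, flip_one, [0, 0, 0, 0, 0])
print(best_c)   # → 0
```

Tracking the best-so-far solution separately from the current one matters because the chain is allowed to move uphill; the returned solution is the minimum ever visited, not the final state.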
Bahram Alidaee, Gary Kochenberger, and Haibo Wang
International Journal of Applied Metaheuristic Computing, Vol. 1 Issue 1, Pages 93-109
Modern metaheuristic methodologies rely on well-defined neighborhood structures and efficient means for evaluating potential moves within these structures. Move mechanisms range in complexity from simple 1-flip procedures where binary variables are "flipped" one at a time, to more expensive, but more powerful, r-flip approaches where "r" variables are simultaneously flipped. These multi-exchange neighborhood search strategies have proven to be effective approaches for solving a variety of combinatorial optimization problems. In this article, we present a series of theorems based on partial derivatives that can be readily adopted to form the essential part of r-flip heuristic search methods for pseudo-Boolean optimization. To illustrate the use of these results, we present preliminary results obtained from four simple heuristics designed to solve a set of Max 3-SAT problems.
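The 1-flip case of such derivative-based move evaluation can be sketched for a quadratic pseudo-Boolean function: the gain from flipping one bit comes from a partial derivative touching only one row and column of the coefficient matrix, an O(n) computation rather than a full re-evaluation. A toy sketch (illustrative matrix, not the paper's r-flip theorems):

```python
def flip_gain(Q, x, i):
    # Change in x'Qx from flipping bit i, via the partial derivative:
    # only row and column i of Q are touched, so the cost is O(n).
    n = len(x)
    partial = Q[i][i] + sum((Q[i][j] + Q[j][i]) * x[j]
                            for j in range(n) if j != i)
    return (1 - 2 * x[i]) * partial

Q = [[1, -2, 0],
     [0,  3, 1],
     [0,  0, -1]]   # toy upper-triangular coefficient matrix
x = [1, 0, 1]

def f(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(3) for j in range(3))

y = x[:]
y[1] = 1 - y[1]
print(flip_gain(Q, x, 1), f(y) - f(x))   # → 2 2
```

An r-flip evaluation must additionally account for interactions among the r flipped bits, which is where the article's theorems come in; the 1-flip gain above is the base case.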
Marlene A. Smith and Peter G. Bryant
The American Statistician, Vol. 63, Issue 4, pp. 348–355
Case discussions have become an integral component of our business statistics courses. We have discovered that case discussion adds enormous benefits to the classroom and learning experience of our students even in a quantitatively based course like statistics. As we read about discussion-based methods, we discovered that the literature is mostly silent about the specific challenges of case teaching in statistics courses. This article is an attempt to fill that void. It provides a “how-to” starter’s guide for those interested in incorporating case discussions in statistics courses. It includes resources for background reading, tips on setting up a statistics case discussion course, and examples of four specific case discussions involving statistics topics. An illustrative case and instructor’s notes that can be used on the first day of class are provided as well. Because we have had mixed reactions to conducting case discussions online, we believe that the use of case discussion in distance education statistics courses is a fruitful area for experimentation and research. Although our experience is in the business statistics classroom, this article is also applicable to statistics courses in other disciplines.
Deborah L. Kellogg and Marlene A. Smith
Decision Sciences Journal of Innovative Education, Vol. 7, Issue 2, pp. 433–456
Faculty teaching in online environments are universally encouraged to incorporate a variety of student-to-student learning activities into their courses. Although there is a body of both theoretical and empirical work supporting this, adult professional students participating in an online MBA program at an urban business school reported being at best indifferent and often negative regarding these learning activities. A case study was performed to explore how pervasive this attitude was and the possible reasons for it. Through various sources of data and exploration, we discovered that common interactive modalities are not associated with either perceived learning or satisfaction. A content analysis of a data analysis course revealed that 64.5% of responses recalled student-to-student interactivities when responding to a “learned least from” query. We identified three possible reasons for these negative responses: time inefficiency, interaction dysfunction, and flexibility intrusion. We conclude that, although some working professional students probably do learn from student-to-student interactivity, the costs incurred may be too great. If working adult students present a different profile than those students typically represented in academic research and thus have different needs and expectations, we may need to rethink the design of online education delivered to them.
B. Alidaee, G.A. Kochenberger, K. Lewis, M. Lewis, and H. Wang
International Journal of Mathematics in Operational Research, Vol. 1, Issue 1–2, pp. 9–19
A common approach to many combinatorial problems is to model them as 0/1 linear programs. This approach enables the use of standard linear program-based optimisation methodologies that are widely employed by the operations research community. While this methodology has worked well for many problems, it can become problematic in cases where the linear programs generated become excessively large. In such cases, linear models can lose their computational viability. In recent years, several articles have explored the computational attractiveness of non-linear alternatives to the standard linear models typically adopted to represent such problems. In many cases, comparative computational testing yields results favouring the non-linear models by a wide margin. In this article, we summarise some of these successes in an effort to encourage a broader view of model construction than the conventional wisdom, i.e. linear modelling, typically affords.
Marco Better, Fred Glover, Gary Kochenberger, and Haibo Wang
International Journal of Information Technology & Decision Making, Vol. 7, Issue 4, pp. 571–587
Simulation optimization is providing solutions to important practical problems previously beyond reach. This paper explores how new approaches are significantly expanding the power of simulation optimization for managing risk. Recent advances in simulation optimization technology are leading to new opportunities to solve problems more effectively. Specifically, in applications involving risk and uncertainty, simulation optimization surpasses the capabilities of other optimization methods not only in the quality of solutions but also in their interpretability and practicality. In this paper, we demonstrate the advantages of using a simulation optimization approach to tackle risky decisions, by showcasing the methodology on two popular applications from the areas of finance and business process design.
Sarah Kovoor-Misra and Marlene A. Smith
The Journal of Applied Behavioral Science, Vol. 44, Issue 4, pp. 422–444
This study examines how the perceived organizational identities (POIs) of members of an online retail organization were affected after an acquisition. The authors find that (a) POI is more complex than previously understood, and continuity, change, and confusion in POI can coexist; (b) the organizational change reactivated previously unresolved POI issues; (c) the structure of POI includes cognitive, affective, and behavioral dimensions, and changes occurred in these dimensions; (d) top managers and employees who have more interactions with outsiders in their jobs tend to be more confused and make less POI change than employees who primarily deal with internal operations; and (e) the image of the acquired organization and the change strategies used are triggers of POI confusion and/or change in the acquiring organization. This article highlights the experience of individuals in the acquiring organization and suggests that POI is an important lens for understanding and managing organizational changes.
Bahram Alidaee, Gary Kochenberger and Haibo Wang
Journal of Heuristics, Vol. 14, Issue 6, pp. 571–585
In a recent paper, Glover (J. Heuristics 9:175–227, 2003) discussed a variety of surrogate constraint-based heuristics for solving optimization problems in graphs. The key ideas put forth in that paper were illustrated by giving specializations designed for certain covering and coloring problems. In particular, a family of methods designed for the maximum cardinality independent set problem was presented. In this paper we report on the efficiency and effectiveness of these methods based on considerable computational testing carried out on test problems from the literature as well as some new test problems.