[{"body":"Upcoming GAMS courses Below you find a list of upcoming GAMS courses. All of these are offered by our established partners, who have many years of GAMS modeling expertise.\n","excerpt":"\u003ch1 id=\"upcoming-gams-courses\"\u003eUpcoming GAMS courses\u003c/h1\u003e\n\u003cp\u003eBelow you find a list of upcoming GAMS courses. All of these are offered by our established partners, who have many years of GAMS modeling expertise.\u003c/p\u003e","ref":"/courses/","title":"Courses"},{"body":"General equilibrium theory and modeling are highly useful for understanding economic interactions and changes in the context of static and dynamic complex modern economies. Applied GEMs are widely used in the field of Macroeconomics to address a wide range of theoretical research questions, analyze, forecast, and simulate empirical/policy issues in response to exogenous shocks.\nM\u0026amp;S Research Hub provides full and comprehensive structured training for researchers, data analysts, and policy experts to acquire detailed knowledge and become fully capable of using these models in their research and policy-related analysis.\nModule Two: Computable General Equilibrium (CGE) provides full Training on the Theoretical Foundations of CGE Modeling, including Practical Labs Using GAMS.\nTraining content is systematic and structured as follows:\nModeling a Simple Economy\nModeling an Economy with Two Sectors\nModeling the Government\nModeling the Rest of the World\nIntroduction to Dynamic Models\nModeling a Simple Dynamic Model\nProvided by the M\u0026amp;S Research Hub Academic Council\nFor more information about the course timing and registration visit https://ms-researchhub.com/home/training/gem-training.html ","excerpt":"Provided by the M\u0026amp;S Research Hub Academic Council","ref":"/courses/2020_10_msresearchhub/","title":"Full Training on the Theoretical Foundations of CGE Modeling, including Practical Labs Using GAMS"},{"body":"Transform Your Optimization Career with the Power of GAMSPy Stop wrestling with 1980s syntax while your data science colleagues effortlessly build ML models in Python. You know optimization is elegant and powerful — but why does it feel so disconnected from the modern data ecosystem?\nGAMSPy changes everything. Finally, the proven GAMS engine (trusted for 40+ years) meets the Python syntax you already love. Build optimization models that seamlessly integrate with popular libraries — without sacrificing the raw performance that makes GAMS the gold standard.\nMaster 4 Complete Modules: Algebraic modeling basics to advanced\nAdvanced features incl. dynamic sets \u0026amp; model vectorization\nData Processing and ML integration\nMulti-objective optimization\nDeploying optimization solutions\nUnlike courses taught on rigid schedules by retirement-age instructors, who don\u0026rsquo;t want to bother with Python, this on-demand course lets you learn at your pace with Dr. Tim Varelmann — a digitally-native instructor who understands both modern Python workflows AND high-performance optimization.\nYour breakthrough moment awaits. Join a modern world where elegant mathematical modeling meets effortless Python development.\nBluebird Briefings: gamspy.bluebirdoptimization.com\nAcademic Discount Links:\nhttps://academic.gams.com https://license.gams.com/static/academic-verification-token.html https://academics.bluebirdoptimization.com ","excerpt":"Stop wrestling with 1980s syntax while your data science colleagues effortlessly build ML models in Python. 
You know optimization is elegant and powerful — but why does it feel so disconnected from the modern data ecosystem? GAMSPy changes everything.","ref":"/courses/2025_10_varelmann-gamspy-course/","title":"Effortless Modeling in Python with GAMSPy"},{"body":"In Energy and Power System Optimization in GAMS course you will learn:\nHow to formulate your problem and implement it in GAMS and make optimal decisions in your real-life problems\nHow to code efficiently, get familiarised with the techniques that will make your code scalable for large problems\nHow to design an action block with a clearly defined conversion goal\nHow to run sensitivity analysis in GAMS to predict the outcome of a decision if a situation turns out to be different compared to the key predictions.\nFor your convenience the course is broken into two sections :\nGeneral GAMS coding (Pure GAMS, elements, loops, multi-objectives, conditional statements, Examples)\nPower system GAMS coding (Static/dynamic economic/environmental dispatch, AC/DC Optimal Power Flow (OPF), Storage modeling, demand response, Power system observability, \u0026hellip;)\nYou will be walked through every step of GAMS coding with real-life case studies, actual experiments, and multiple examples from around different disciplines.\nBy the end of this course, you\u0026rsquo;ll be able to Code your own optimization problem in GAMS.\n(with Dr. Alireza Soroudi )\n","excerpt":"\u003cp\u003eIn \u003cstrong\u003eEnergy and Power System Optimization in GAMS\u003c/strong\u003e course you will learn:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\n\u003cp\u003e\u003cem\u003e\u003cstrong\u003eHow to formulate your problem and implement it in GAMS\u003c/strong\u003e\u003c/em\u003e and make optimal decisions in your real-life problems\u003c/p\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003cp\u003eHow to code efficiently, get familiarised with the techniques that will \u003cem\u003e\u003cstrong\u003emake your code scalable for large problems\u003c/strong\u003e\u003c/em\u003e\u003c/p\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003cp\u003eHow to design an \u003cem\u003e\u003cstrong\u003eaction block with a clearly defined conversion goal\u003c/strong\u003e\u003c/em\u003e\u003c/p\u003e","ref":"/courses/2020_0x_energy_and_power_system_optimization_soroudi_t/","title":"Energy and Power System Optimization in GAMS"},{"body":"with Florence Jacquet (ModelEco)\nLanguage: French\n","excerpt":"with Florence Jacquet (ModelEco)","ref":"/courses/2020_0x_modeleco_jacquet_t/","title":"Modelisation in mathematical programming for the economic analysis of agriculture"},{"body":"The art of decision making and finding the optimal solution to a problem is getting more and more attention in recent years. In this course, you will learn how to deal with various types of mathematical optimization problems as below:\nLinear Programming (LP)\nMixed Integer Linear Programming (MILP)\nNon-Linear Programming\nMixed Integer Non-Linear Programming\nMulti-Objective Optimization\nWe start from the beginning that you need to formulate a problem. Therefore, after finishing this course, you will be able to find and formulate decision variables, objective function, constraints and define your parameters. Moreover, you will learn how to develop the model that you formulated in the GAMS environment. 
Using GAMS, you will learn how to:\nDefine Sets, Parameters, Scalars, Objective Function \u0026amp; Constraints\nImport and read data from an external source (Excel file)\nSolve the optimization problem using various solvers such as CPLEX, IPOPT, COUENNE, BONMIN, \u0026hellip;\nCreate a report from your result in GAMS results\nExport your results into an external source (Excel file)\nDeal with multi-objective problems and solve them using GAMS solvers\nIn this course, we solve simple to complex optimization examples from engineering, production management, scheduling, transportation, supply chain, and \u0026hellip; areas.\nThis course is structured based on 3 examples for each of the main mathematical programming sections. In the first two examples, you will learn how to deal with that type of specific problem. Then you will be asked to challenge yourself by developing the challenge problem into GAMS. However, even the challenge problem will be explained and solved with details.\nWho this course is for:\nStudents in all levels (Undergrad, Grad and PhD) Professionals in Various disciplines such as Engineering, Management and Operation Research Companies Who Wants to Use Optimization in Their Businesses Anyone Who is Interested to Learn Optimization! There is no prerequisites since this course is designed for complete beginners to mathematical optimization and I start from downloading and installing GAMS and prepare students for the course.\nwith your instructor Navid Shirzadi, Ph.D. / Data Analyst - Optimization Expert\n","excerpt":"Learn Mathematical Optimization and Operation Research, Linear \u0026amp; Non-Linear Programming, Multi-objective Optimization\u0026hellip;","ref":"/courses/2023_0x_optimization-with-gams_on-demand/","title":"Optimization with GAMS - Operations Research Bootcamp A-Z"},{"body":"GAMS Anytime Courses \u0026amp; Workshops with Josef Kallrath (Scientific Consultant, Weisenheim am Berg, Germany \u0026amp; University of Florida, Gainesville, FL)\nWith this new concept we allow you to freely select content, duration \u0026amp; time, location, mode, and type:\nContent:\nYou can specify blocks of content from the standard physical courses (find Examples on the Registrationform), but you can express wishes for extra blocks and topics, among them, for instance, modeling and optimization under uncertainty, modeling suppy network problems, optimization in metals, paper or process industry. Free content also allows you weighting the focus of GAMS language \u0026amp; system, modeling and optimization algorithms and solver. Within GAMS language \u0026amp; system, the focus could be on developing stable industrial applications, modularity within GAMS, GAMS Miro, or interfacing to other packages.\nDuration and time:\nDuration varies between 2 and 5 days. Time to be agreed on.\nLocation:\nIt can be\nat your company or institutional location (you organize), any reasonable meeting hotel (you organize), or Hotel Speeter in Weisenheim am Berg, Germany, only one hour away from Frankfurt airport. (http://www.hotel-speeter.de ). Mode:\nPhysical events and hybrid (buy the event material and contract a specified number of (individual) online support hours. We do not offer full online courses.\nType:\nCourse, a workshop, or a mixture of both. 
Courses have a rather strict course structure (as my standard courses), while a workshop offers more flexibility for instance, to discuss and analyze your model or optimization problem at hand, to screen your model or to tune your model formulation to show better performance.\nFor details (duration \u0026amp; time) and initializing an event contact me via JosefKallrath.SC@sci-con.de Homepage: https://josefkallrath.github.io More information and registration ","excerpt":"Josef Kallrath (Scientific Consultant, Weisenheim am Berg, Germany \u0026amp; University of Florida, Gainesville, FL) now offers a new and additional course which allows you to freely select content, duration \u0026amp; time, location, mode, and type for your personalized GAMS Course.","ref":"/courses/2023_11_modeling-with-gams_ondemand_kallrath/","title":"Modeling and Optimization with GAMS"},{"body":"Do you want to develop and implement your own mathematical models in-house to strengthen your competitiveness? Turn Math into Profit with GAMS Modeling For organizations that want to build or extend their expertise in optimization modeling with GAMS by a dedicated course or learning on the fly. Did you know that you can save up to 30% of cost and additional time by deploying optimization models, apart from gaining unique insight into your organization\u0026rsquo;s data?\nWe offer GAMS courses with practical relevance. Request a course tailored to your company, which can be conducted in-house, online, or on the fly as training-on-the-job.\nThese courses are aimed at organizations that want to\nreduce costs get more efficient solutions gain more clarity about their processes and widen their in-house expertise Let us work together to deepen the skills of your team members in the field of mathematical modeling and optimization.\nClick here to get more details on our training offering. Please feel free to contact us directly via e-mail at info@sdb.ltd .\nWe are happy to hear from you!\nWith kind regards and a day as you visualize it,\nDr. Thomas Maindl\nSDB Science-driven Business Ltd\n","excerpt":"In this Advanced Analytics - Modeling and Optimization with GAMS courses we build or extend your expertise in optimization modeling with GAMS by dedicated courses or learning on the fly.","ref":"/courses/2022-06_sdb_online/","title":"Turn Math into Profit with GAMS Modeling"},{"body":"Stay tuned for Updates!\n","excerpt":"The 2026 INFORMS Annual Meeting will be held in San Francisco, California. Stay tuned for updates.","ref":"/conferences/2026-10-informs_am/","title":"2026 Informs Annual Meeting"},{"body":"Announcement Due to the corona virus situation, all further conferences will be cancelled. Please remember to wear personal protection equipment when calling GAMS or sending emails.\n","excerpt":"\u003ch3 id=\"announcement\"\u003eAnnouncement\u003c/h3\u003e\n\u003cp\u003eDue to the corona virus situation, all further conferences will be cancelled. Please remember to wear personal protection equipment when calling GAMS or sending emails.\u003c/p\u003e","ref":"/conferences/","title":"Conferences"},{"body":"Stay tuned for Updates!\nSharpen your skills, broaden your horizons and accelerate your career at Analytics+, where data meets decision-making. 
Network and swap ideas with the most accomplished professionals in the field of advanced analytics.\nJoin us April 12 – 14, 2026 at the Gaylord National Resort \u0026amp; Convention Center located in National Harbor, MD.\n","excerpt":"Sharpen your skills, broaden your horizons and accelerate your career at Analytics+, where data meets decision-making. Network and swap ideas with the most accomplished professionals in the field of advanced analytics.","ref":"/conferences/2026-04-informs_ba/","title":"2026 Business Analytics+ Conference"},{"body":"Stay tuned for Updates!\n","excerpt":"The OR 2026 will take place in Passau!","ref":"/conferences/2026-09-or_passau/","title":"OR 2026"},{"body":"Stay tuned for Updates!\n","excerpt":"The EURO 2026 will take place in Athens, Greece! More info coming soon.","ref":"/conferences/2026-09-euro_athen/","title":"EURO 2026"},{"body":"22nd Workshop on Economic Modeling: Computable General Equilibrium Analysis of International Trade and Tariff Disruptions using GTAPinGAMS and MPSGE\nInstructors:\nProf. Edward J. Balistreri\nProf. Christoph Böhringer\nAss. Prof. Casiano Manrique\nResponses to global challenges such as international trade conflicts or climate change should be based on the systematic impact assessment of alternative policy options. The economic analysis of policies affecting markets in multiple countries requires both data and theory. We provide computational tools developed in the GAMS modeling language to extract the GTAP database of the global economy. We use empirical GTAP data for computable general equilibrium (CGE) analysis facilitated by MPSGE as a meta-language to implement CGE models in a compact non-algebraic manner. The workshop will demonstrate the practical usefulness of CGE analysis by means of policy-relevant applications to carbon tariffs in climate policy, disruptive trade policies, and supply chain shocks. The explicit algebraic formulation of general equilibrium conditions and the parameterization of functional forms to characterize technologies and preferences can become very tedious and error-prone, in particular for more complex production and consumption patterns. MPSGE (Mathematical Programming System for General Equilibrium) provides a short-hand non-algebraic representation for general equilibrium models, releasing economists from the need to write down complicated equilibrium conditions explicitly as well as from the need to set up tedious calibration routines for the parameterization of demand and supply functions.\nThe workshop will show in detail how to transform algebraic CGE models into non-algebraic MPSGE syntax, which can substantially lower the entry barriers and time cost of CGE analysis. In both – algebraic and non-algebraic – cases, CGE models are stated as mixed complementarity problems (MCP) which link equilibrium conditions as nonlinear inequalities with complementary non-negative economic variables. The fundamental strength of CGE models implemented as MCP is the ability to handle corner solutions and regime shifts that might be central to the analysis of discrete production decisions (e.g. firm location) or the selection of international value chains (e.g. switching of trade links).\nThe workshop will consist of five segments: Part 1: Economic Equilibrium and Mixed Complementarity Problems (MCP)\nPart 2: Empirical trade theories and data management using GTAPinGAMS\nPart 3: Standard CGE trade models for policy analysis\nPart 4: Advanced trade structures and large-scale applications\nPart 5: Advanced trade policy applications and exercises\n(Advanced trade structures: including Krugman (1980), Melitz (2003) and Bilateral Representative Firms (BRF); industrial carbon tariffs, optimal tariffs and trade wars, CO2 emissions embodied in bilateral trade, and trade models for CBAM analysis, are among the specific topics to be covered)\nEnrolment is limited to 15 participants to ensure efficient, close interaction. For further information, please visit https://wem.ulpgc.es/ , or check the \u0026lsquo;Take a GAMS course\u0026rsquo; section at / .\nhttp://www.ulpgc.es/webs/wem/ The registration deadline is March 13, 2026.\n","excerpt":"The workshop will show in detail how to transform algebraic CGE models into non-algebraic MPSGE syntax which can substantially lower the entry barriers and time cost of CGE analysis in both – algebraic and non-algebraic – cases\u0026hellip;","ref":"/courses/2026_03_22th-wem/","title":"22nd Workshop on Economic Modeling - Computable General Equilibrium Analysis of International Trade and Tariff Disruptions using GTAPinGAMS and MPSGE"},{"body":"with Josef Kallrath (University of Florida, Gainesville, FL)\nThis two-day course helps mathematically inclined participants learn advanced techniques for better using GAMS to model and solve larger or complicated optimization problems, especially mixed integer optimization problems. The participants will increase their knowledge of using GAMS efficiently and will learn more about procedural and modular language features, background on the solvers embedded in GAMS, how to interface to systems outside GAMS, and how to use and create Function Libraries. The course assumes that participants have some basic knowledge of GAMS and familiarity with the GAMS-IDE or GAMS Studio. For the mathematical part of this course, it is beneficial for participants to have a decent mathematical background.\nThe participants will learn more about the MILP, NLP and MINLP solvers as well as about global optimization techniques. We stress that difficult and large optimization problems require a tight connection between modeling and algorithmic aspects. This leads to a sequence of models, nested solve statements, and decomposition techniques – detailed examples will be discussed. An important aspect of the course is the development of industrial applications software. The course will provide tricks of the trade not covered by the GAMS documentation or other public sources.\nAs a new addition to the course: we will be looking at how ChatGPT/CoPilot could be used for the generation or analysis of GAMS code.\nThe course offers ample opportunity for discussion and analysis of participants\u0026rsquo; own problems, in addition to the presentation, examples, and hands-on activities.\nEarly-registration discount until Sep 27, 2025\nMore information and registration Contact E-Mail: JosefKallrath.SC@sci-con.de Homepage: https://josefkallrath.github.io ","excerpt":"with Josef Kallrath (University of Florida, Gainesville, FL). 
This two-days course helps the mathematically inclined participants to learn advanced techniques for better using GAMS to model and solve larger or complicated optimization problems (MILP, NLP, MINLP) and to learn about modular structures and building complex applications.","ref":"/courses/2025_11_modeling-with-gamsadvanced_kallrath/","title":"Modeling and Optimization with GAMS (advanced)"},{"body":"","excerpt":"","ref":"/authors/aalqershi/","title":"Ahmed Alqershi"},{"body":"","excerpt":"","ref":"/authors/busul/","title":"Burak Usul"},{"body":"","excerpt":"","ref":"/categories/gamspy/","title":"Gamspy"},{"body":"","excerpt":"","ref":"/authors/mbussieck/","title":"Michael Bussieck"},{"body":"","excerpt":"","ref":"/authors/msoyturk/","title":"Muhammet Soytürk"},{"body":"As artificial intelligence becomes embedded in our phones, computers, and cars, we’re learning that its decision-making can be surprisingly, and worryingly, fragile. In computer vision, for example, nudging the input by adding noise to just a handful of pixels can flip a confident prediction. And what makes it worse is that these systems often feel like black boxes; input goes in, output comes out, and the path in between is hard to see, and harder to evaluate, mainly because the decision process is buried inside millions of learned weights.\nThat is why evaluating Neural Network\u0026rsquo;s robustness matters. If small, realistic changes can break a model, we need to know. A practical way to test this is with adversarial attacks; inputs that are intentionally perturbed to push the model towards a wrong decision. While \u0026ldquo;attacks\u0026rdquo; sound scary (and can be), they are also a powerful diagnostic tool that allows us to gain evidence in our model\u0026rsquo;s robustness if it withstands many targeted perturbations. On the other hand, if the model fails, we learn exactly where and how it failed and we try to make it more robust.\nFigure 1. Small, targeted pixel changes can swing a model’s prediction, even when the image still looks the same to us.\nWhat’s striking is how subtle these changes are to the human eye. The perturbed digits still read clearly, yet the model’s internal confidence shifts enough to cross a decision boundary. This gap between human perception and model behavior is exactly what we aim to measure and reduce through adversarial robustness testing.\nWhy does robustness testing matter? To move from intuition to something we can actually test, using MNIST as our running example, let’s formalize the robustness question that adversarial attacks try to answer:\nGiven a clean image $x$ with true label $y$, is there a nearby image $x\u0026rsquo;$ (within a small budget $\\varepsilon$) that makes the model prefer some label $\\hat{y}\\neq y$?\nWe will call $x\u0026rsquo;$ \u0026ldquo;nearby\u0026rdquo; if, for a chosen norm $\\lVert\\cdot\\rVert$, the distance from $x$ stays within the budget: $$ \\lVert x\u0026rsquo; - x \\rVert \\le \\varepsilon $$ For images, common choices are the $\\ell_\\infty$ or $\\ell_2$ norms (i.e., $\\lVert x\u0026rsquo; - x \\rVert_\\infty \\le \\varepsilon$ or $\\lVert x\u0026rsquo; - x \\rVert_2 \\le \\varepsilon$). If no such $x\u0026rsquo;$ exists, the model is robust to that input at the chosen budget; if one does exist, we’ve found a weak spot.\nThis approach is valuable because it targets the worst case. Random noise rarely crosses a decision boundary; adversarial noise is crafted precisely to do so. 
It’s also model-aware: with white-box access we can use gradients or architecture details to search efficiently; with black-box access we can still probe via queries.\nWhat we can’t do is try perturbations by hand. Even a $28\\times 28$ MNIST image has $784$ pixels, if you allow each just a few tiny adjustments and the possibilities explode beyond anything we could enumerate. We need a principled way to traverse that space and home in on the most damaging allowed change.\nThat’s where optimization comes in. In the next section, we’ll turn the question above into a compact mathematical program and show how GAMSPy makes it straightforward to implement and solve—starting with a small MNIST classifier and an objective that directly measures how easily we can push the model off the correct label.\nFrom intuition to an optimization model with GAMSPy Brute-forcing \u0026ldquo;tiny pixel nudges\u0026rdquo; is a non-starter: a single 28×28 image already lives in a 784-dimensional space. Instead, we formulate the attack as an optimization problem and let a solver search that space intelligently.\nThe idea is simple:\nStart from a correctly classified image $x$. Add a bounded perturbation $n$ (our decision variable) with $\\lVert n \\rVert_\\infty \\le \\epsilon$. Pass the perturbed and normalized image through the fixed network. Minimize the margin between the score of the correct class and a chosen wrong class: $$ \\min\\ y_{right} - y_{wrong}. $$\nIf the optimum is negative, we found an adversarial example (the wrong class beats the right one). If it’s positive, the model appears robust for that image and budget.\nWith GAMSPy this is pleasantly direct. Below are the essential pieces from the experiment script we used for this post.\nm = gp.Container() network = build_network(hidden_layers, hidden_layer_neurons) single_image, single_target = get_image(network) # 1) Parameters \u0026amp; decision variables image_data = single_image.numpy().reshape(784) image = gp.Parameter(m, name=\u0026#34;image\u0026#34;, domain=dim(image_data.shape), records=image_data) noise = gp.Variable(m, name=\u0026#34;noise\u0026#34;, domain=dim([784])) a1 = gp.Variable(m, name=\u0026#34;a1\u0026#34;, domain=dim([784])) # 2) Bounds: attack budget in pixel space, and valid box after normalization MNIST_NOISE_BOUND = 0.1 # This is ε, the higher it is, the noisier the image can be MEAN, STD = (0.1307,), (0.3081,) noise.lo[...] = -MNIST_NOISE_BOUND noise.up[...] = MNIST_NOISE_BOUND a1.lo[...] = -MEAN[0] / STD[0] a1.up[...] = (1 - MEAN[0]) / STD[0] # Link normalized input to (image + noise) set_a1 = gp.Equation(m, \u0026#34;set_a1\u0026#34;, domain=dim(a1.shape)) set_a1[...] = a1 == (image + noise - MEAN[0]) / STD[0] # 3) Embed the trained PyTorch network as equations seq_formulation = gp.formulations.TorchSequential(m, network) y, _ = seq_formulation(a1) # y are the logits/outputs # 4) Choose the target wrong label = runner-up before perturbation output_np = network(single_image.unsqueeze(0)).detach().numpy()[0][0] right_label = np.argsort(output_np)[-1] wrong_label = np.argsort(output_np)[-2] # 5) Objective: minimize the right-vs-wrong margin obj = gp.Variable(m, name=\u0026#34;z\u0026#34;) margin = gp.Equation(m, \u0026#34;margin\u0026#34;) margin[...] = obj[...] 
== y[f\u0026#34;{right_label}\u0026#34;] - y[f\u0026#34;{wrong_label}\u0026#34;] # 6) Solve as a MIP model = gp.Model( m, \u0026#34;min_noise\u0026#34;, equations=m.getEquations(), objective=obj, sense=\u0026#34;min\u0026#34;, problem=\u0026#34;MIP\u0026#34;, ) model.solve(options=gp.Options.fromGams({\u0026#34;reslim\u0026#34;: 4000})) A few notes that make this practical:\nNetwork as algebra: TorchSequential lifts a trained PyTorch model into GAMSPy variables and equations, so the solver can reason over the network exactly like any other optimization model.\nRunner-up target: We set wrong_label to the class with the second-highest confidence before adding noise. This often yields a strong, targeted attack with minimal perturbation. Therefore, being robust against this is a good sign.\nInterpretation: After solving, check the sign of the objective value: (i) negative ⇒ the attack succeeded (not robust for this image at this budget); (ii) positive ⇒ the model resisted this attempt.\nHelper functions: The full code (linked at the end) includes utilities to build the network, load data, and run multiple experiments systematically.\nIf you would like a slower, step-by-step walkthrough of embedding neural networks and writing these constraints, the GAMSPy Docs have a hands-on tutorial.\nReLU as MIP: exact but combinatorial ReLU is wonderfully simple in code:\n$$\\mathrm{ReLU}(s) = \\max(0, s).$$\nBut \u0026ldquo;max\u0026rdquo; is piecewise linear, so to represent it exactly in a linear optimization model we usually introduce a binary variable $z$ that says which side of the kink we are on (active or inactive). A standard formulation looks like this (for a neuron with pre-activation $s$ and output $y$):\n\\begin{aligned} y \u0026amp;\\ge s, \\quad y \\ge 0, \\\\ y \u0026amp;\\le s - L(1 - z), \\\\ y \u0026amp;\\le Uz, \\qquad z \\in \\{0, 1\\}. \\end{aligned}\nHere $L$ and $U$ are valid lower/upper bounds on $s$ (the famous \u0026ldquo;big-M\u0026rdquo; constants). This is the formulation that most exact robustness verifiers use, and it’s also what you get by default when you embed a PyTorch model with GAMSPy and declare the problem type as MIP – exactly what we did in the code in the previous section:\nmodel = gp.Model( m, \u0026#34;min_noise\u0026#34;, equations=m.getEquations(), objective=obj, sense=\u0026#34;min\u0026#34;, problem=\u0026#34;MIP\u0026#34;, # ReLU -\u0026gt; binaries ) model.solve(solver=\u0026#34;cplex\u0026#34;) However, this MIP formulation has some well-known challenges:\nOne binary per ReLU: If your network has $L$ hidden layers with $H$ ReLU neurons each, you already have roughly $L \\times H$ binaries (and $\\sim 4$ linear constraints each) just for the activations. A modest network with $3 \\times 512$ hidden units means $\\sim 1{,}536$ binaries before you’ve modeled anything else.\nBranch-and-bound combinatorics: MIP solvers explore a tree that branches on these binaries. In the worst case that’s exponential. Good solvers prune aggressively, but as the count grows, so does the search.\nIn short: the traditional MIP encoding is exact and auditable, but you pay with combinatorial search. For educational models (like the small MNIST networks we’re using) it works fine; for larger or deeper networks, solve times can skyrocket.\nThis is precisely why we looked for an alternative. 
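Before moving on, here is a small, hand-written sketch of what this big-M encoding looks like for a single ReLU layer in GAMSPy. It is illustrative only and not taken from the experiment script – the layer size H, the pre-activation bounds LB and UB, and all names below are assumptions – and in practice TorchSequential with problem=\u0026#34;MIP\u0026#34; generates the equivalent constraints for you:
# Hypothetical sketch: big-M encoding of one ReLU layer, written by hand
H = 40                    # neurons in this layer (illustrative)
LB, UB = -10.0, 10.0      # assumed valid bounds on the pre-activation s
s = gp.Variable(m, name=\u0026#34;s_pre\u0026#34;, domain=dim([H]))    # pre-activation
y = gp.Variable(m, name=\u0026#34;y_out\u0026#34;, domain=dim([H]))    # ReLU output
z = gp.Variable(m, name=\u0026#34;z_bin\u0026#34;, domain=dim([H]), type=\u0026#34;binary\u0026#34;)  # side of the kink
relu_ge_s = gp.Equation(m, name=\u0026#34;relu_ge_s\u0026#34;, domain=dim([H]))
relu_ge_0 = gp.Equation(m, name=\u0026#34;relu_ge_0\u0026#34;, domain=dim([H]))
relu_up1 = gp.Equation(m, name=\u0026#34;relu_up1\u0026#34;, domain=dim([H]))
relu_up2 = gp.Equation(m, name=\u0026#34;relu_up2\u0026#34;, domain=dim([H]))
relu_ge_s[...] = y \u0026gt;= s                  # y is at least the pre-activation
relu_ge_0[...] = y \u0026gt;= 0                  # and non-negative
relu_up1[...] = y \u0026lt;= s - LB * (1 - z)    # with z = 1 this forces y = s (active)
relu_up2[...] = y \u0026lt;= UB * z              # with z = 0 this forces y = 0 (inactive)
Every neuron contributes one binary z like this, which is exactly where the branch-and-bound effort in the MIP route comes from.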
In the next section we’ll swap those binaries for complementarity conditions, turning the problem into a smooth NLP that solves much faster in practice—trading global optimality guarantees for speed.\nReLU via complementarity: an NLP alternative If binaries are the reason MIPs get heavy, the obvious question is: can we model ReLU without introducing binary variables? Yes, by switching to a complementarity view of the activation and solving the resulting model as a Nonlinear Program (NLP).\nReLU as complementarity For a neuron with pre-activation $t$ and output $y = \\max(0, t)$, we can write\n\\begin{aligned} y \u0026amp;\\ge 0, \\quad y - t \\ge 0, \\\\ y \\cdot (y - t) \u0026amp;= 0. \\end{aligned}\nAt the solution, at most one of $y$ and $y - t$ is positive, capturing the \u0026ldquo;on/off\u0026rdquo; logic without any binary variable. This turns the verification problem from a MIP into an MPCC (a mathematical program with complementarity constraints), which we can tackle efficiently with an NLP solver.\nThe small changes in code In GAMSPy, the swap is literally a couple of lines. We tell the TorchSequential wrapper to encode every ReLU with the complementarity formulation and we change the model\u0026rsquo;s type (NLP) and solver (CONOPT):\ndef convert_relu(m: gp.Container, layer: torch.nn.ReLU): return gp.math.relu_with_complementarity_var # Tell TorchSequential to use that for all ReLUs change = {\u0026#34;ReLU\u0026#34;: convert_relu} seq_formulation = gp.formulations.TorchSequential(m, network, change) y, _ = seq_formulation(a1) # Same objective as before: minimize the right-vs-wrong margin obj = gp.Variable(m, name=\u0026#34;z\u0026#34;) margin = gp.Equation(m, \u0026#34;margin\u0026#34;) margin[...] = obj[...] == y[f\u0026#34;{right_label}\u0026#34;] - y[f\u0026#34;{wrong_label}\u0026#34;] # Build an NLP and solve it with CONOPT model = gp.Model( m, \u0026#34;min_noise\u0026#34;, equations=m.getEquations(), objective=obj, sense=\u0026#34;min\u0026#34;, problem=\u0026#34;NLP\u0026#34;, ) model.solve(solver=\u0026#34;conopt\u0026#34;) Everything else – the perturbation budget, normalization, runner-up target label, and interpretation of the objective – stays the same as in the MIP code.\nThe complementarity encoding is built into GAMSPy\u0026rsquo;s convenience function gp.math.relu_with_complementarity_var(), so you don’t need to build the equations manually. Additionally, CONOPT plugs in seamlessly via solver=\u0026quot;conopt\u0026quot;, which is a robust choice for these NLPs and is freely available to GAMSPy Academic users.\nHow does this help? And what do we give up? Speed: Removing thousands of binaries collapses the branch-and-bound tree into a smooth (though nonconvex) search. In practice, our runs on MNIST-scale MIPs and small CNNs are much faster with the NLP route. We’ll show numbers in the upcoming section.\nTradeoff: There is no free lunch! An NLP solver like CONOPT does not guarantee global optimality. You may land on a local minimum with a positive margin even if a negative one (a successful attack) exists. That’s the price we pay for speed.\nThis is where strategy matters: if the NLP returns a negative objective, you found a valid adversarial example, period. On the other hand, if it returns a positive objective, we cannot be certain the model is robust; we may have just landed in a local minimum. To improve our confidence, we can try multiple starting points for the NLP model (as we will see in the section Speed as a strategy).\nThe trade-off in practice: accuracy vs. 
speed Now for the fun part! Here we compare the two formulations head-to-head on a suite of MNIST classifiers, measuring solve time, agreement on the robustness verdict, and false positives (cases where NLP says \u0026ldquo;robust\u0026rdquo; but MIP found an attack). The models are ranged from 1 to 5 hidden layers with 10 to 60 neurons each, using a runner-up targeted attack with an $\\ell_\\infty$ noise budget of $0.1$, keeping the same image, labels, and bounds across both runs.\nThe two configurations are:\nMIP: ReLU with binaries and problem=\u0026quot;MIP\u0026quot;. NLP: ReLU via complementarity and problem=\u0026quot;NLP\u0026quot; solved with CONOPT. Both used reslim = 4000 seconds; when the MIP hits that limit, it returns the best bound found so far. Below, we summarize the three things you probably care about most: speed, agreement on the robustness verdict (sign of the margin), and false positives (cases where NLP says \u0026ldquo;robust\u0026rdquo; but MIP found an attack).\nSpeed Across the 30 architecture pairs:\nMedian solve time: 12.12s for MIP vs 0.227s for NLP. Mean solve time: 1004.042s for MIP vs 0.413s for NLP. NLP was slower only $1/30$ times (a tiny $0.13\\text{s}$ vs $0.16\\text{s}$ on the smallest network). MIP timeouts: $7/30$ runs hit the $4000\\text{s}$ limit (depths $3$–$5$) while all NLP runs finished in under $1.63\\text{s}$. The figure below visualizes the solve times of both methods. Each orange–blue pair corresponds to one architecture (hidden layers × neurons per layer). On the log-scale axis, the vertical gap between each pair is the per-instance speedup: as we move right, networks get wider/deeper, that gap widens dramatically. Even where tiny models are close, MIP escalates by orders of magnitude while the NLP track stays clustered near the bottom axis.\nFigure 2. MIP vs NLP solve times per architecture.\nNote: the MIP models are also larger on average due to the additional binary variables.\nVerdict agreement We compare the sign of the optimal objective (negative $\\Rightarrow$ attack found; positive $\\Rightarrow$ \u0026ldquo;robust\u0026rdquo; at this budget):\nExact equality of margins: $12/30$ pairs matched exactly to the $5^{\\text{th}}$ decimal place. Agreement: $26/30$ pairs agreed on the sign of the margin (both found an attack or both said \u0026ldquo;robust\u0026rdquo;). Disagreements: $4/30$; in 3 of these, MIP hit the time limit without proving optimality and returned a positive bound, while NLP found a negative margin (successful attack). In the last disagreement, MIP returned a negative margin while NLP returned a positive one (false positive). False positives We observed $1/30$ false positive from architecture with $4$ layers and $40$ neurons per layer. Here, MIP found an attack with margin $-1.96$ but NLP returned a positive margin of $+5.49$. This highlights the trade-off: while NLP is fast (This specific run took MIP 4000s, hitting the time limit, while NLP ran for 0.342s), it can miss attacks that MIP finds. To mitigate this, one could run NLP from multiple starting points or use it as a quick filter before applying MIP for confirmation as we discuss next.\nSummary table Below is the detailed results table for all architecture pairs tested for this experiment. As mentioned earlier in the post, the objective function of the study is to minimize margins between the correct and runner-up labels under an $\\ell_\\infty$ perturbation budget of $0.1$ on a fixed MNIST image. 
Therefore, negative objective values indicate successful adversarial attacks, while positive values suggest robustness at this budget. What we try to study here is the agreement between MIP and NLP formulations in terms of both objective values and robustness verdicts, along with their respective solve times. Furthermore, the \u0026ldquo;False Robustness\u0026rdquo; column flags cases where the NLP formulation incorrectly indicates robustness (positive margin) while the MIP formulation finds an adversarial attack (negative margin). Although there are some cases where the NLP finds stronger attacks (more negative margins) than MIP, those cases are essentially when MIP hits the time limit and returns a suboptimal positive bound (as the MIP solver guarantees global optimality if given enough time).\nHidden Layers Neurons / Layer MIP Objective Value MIP Solve Time (s) MIP Variable Count NLP Objective Value NLP Solve Time (s) NLP Variable Count False Robustness 1 10 3.88899 0.130 1609 5.092 0.160 1599 1 20 0.49645 0.190 1639 1.04024 0.081 1619 1 30 -0.64841 0.160 1669 -0.64841 0.099 1639 1 40 -0.32195 0.247 1699 -0.31491 0.138 1659 1 50 -1.35391 0.429 1729 -1.16102 0.142 1679 1 60 -1.76911 0.455 1759 -1.76911 0.170 1699 2 10 -3.14897 0.143 1639 -3.04977 0.068 1619 2 20 1.95283 0.196 1699 1.95283 0.193 1659 2 30 -1.22023 1.319 1759 -1.22023 0.168 1699 2 40 -6.33439 1.881 1819 -6.33439 0.173 1739 2 50 -3.47329 14.177 1879 -3.47192 0.310 1779 2 60 -6.79947 60.539 1939 -6.77727 0.311 1819 3 10 -3.04560 0.154 1669 -3.04560 0.083 1639 3 20 -2.28455 4.013 1759 -2.28455 0.152 1699 3 30 -0.51342 10.063 1849 -0.39979 0.221 1759 3 40 -2.26477 123.597 1939 -2.23418 0.269 1819 3 50 -0.81301 484.237 2029 -0.80001 0.628 1879 3 60 -6.67975 4000.461 2119 -6.63047 0.920 1939 4 10 -9.20940 0.295 1699 -9.20940 0.194 1659 4 20 -5.61790 96.590 1819 -5.39378 0.517 1739 4 30 -6.18924 781.783 1939 -6.18924 0.488 1819 4 40 -1.96277 4000.354 2059 5.48993 0.342 1899 Yes 4 50 -7.70052 4000.710 2179 -7.70052 0.739 2019 4 60 7.55647 4000.487 2299 -1.81861 1.230 2059 5 10 -5.78419 1.038 1729 -5.78419 0.190 1679 5 20 -13.40505 15.089 1879 -13.36193 0.232 1779 5 30 -4.55439 520.764 2029 -4.55439 0.451 1879 5 40 -1.39833 4000.662 2179 -4.98835 1.177 1979 5 50 14.42331 4000.413 2329 -1.78056 0.916 2079 5 60 5.55635 4000.689 2479 -3.79127 1.632 2179 Speed as a strategy The punchline from the previous section is that the NLP formulation is so much faster than the MIP one, often by orders of magnitude. This speed opens up new strategies to increase our confidence in the robustness verdict, as we afford to run it dozens, hundreds, and even thousands of times. That speed lets us attack the big practical question: If NLP doesn’t guarantee global optimality, how do we raise our confidence that a positive margin isn’t just a local minimum?\nOne strategy is to run the same NLP model repeatedly from different initial solutions. If any run finds a negative objective, we’ve discovered an adversarial example and can declare the network not robust for that image and budget with certainty. If all runs return positive objectives, we can report that no attack found after N diverse starts, which is a much stronger statement than a single start.\nHow to initialize multiple NLP runs? The entire model is driven by the perturbation vector noise. Everything else (normalized input a1, layer outputs, logits y, and the objective) depends on noise via the constraint:\nset_a1[...] 
= a1 == (image + noise - MEAN[0]) / STD[0] So it’s enough to set an initial level for noise. GAMSPy exposes this as the variable’s level values:\nnoise_init = \u0026lt;some numpy array of shape (784,)\u0026gt; noise_vals = gp.Parameter(m, name=\u0026#34;noise_vals\u0026#34;, domain=noise.domain, records=noise_init) noise.l[...] = noise_vals[...] # \u0026lt;-- initial solution Generating diverse starts with Sobol Setting noise_init to different values gives different starting points for the NLP solver. And to make use of the speed advantage wisely, we want these starting points to be as diverse as possible within the allowed perturbation box $[-\\epsilon, \\epsilon]^{784}$. That way, we explore different regions of the search space and reduce the chance of missing an attack. We can achieve this using low-discrepancy sequences, which are designed to fill a space evenly. An excellent choice is the Sobol sequence, which is implemented in scipy.stats.qmc :\nfrom scipy.stats import qmc sampler = qmc.Sobol(d=784) samples = sampler.random_base2(m=10) # 2^10 = 1024 starts scaled = qmc.scale( samples, l_bounds=[-MNIST_NOISE_BOUND]*784, u_bounds=[ MNIST_NOISE_BOUND]*784, ) for sample in scaled: build_ad_attack_model(hidden_layers, neurons, mip=False, noise_init=sample) Early stopping when an attack is found Since any negative margin proves non-robustness, you can stop the loop as soon as you hit one. We can achieve this by slightly modifying the build_ad_attack_model(...) function in our code to accept an optional noise_init argument and return the objective value after solving. Then, in the loop above, we check the returned margin and break if it’s negative.\nfor sample in scaled: margin_value = build_ad_attack_model(hidden_layers, neurons, mip=False, noise_init=sample) if margin_value \u0026lt; 0: print(\u0026#34;Adversarial example found!\u0026#34;) break Revisiting the \u0026ldquo;false positive\u0026rdquo;. Can multi-start NLP fix it? Remember the lone false positive from the previous section (the $4\\times 40$ network), where the NLP returned a misleading $+5.48993$ margin while the MIP run found $-1.96277$? This is the moment of truth for our multi-start strategy. We can demonstrate its effectiveness by re-running the NLP with Sobol multi-starts and early stopping as described above. The loop stops when the MIP-found attack is rediscovered; this could take just a few runs or many, depending on luck and the distribution of local minima. Here is what happened:\nRun #2 already produced a negative margin $\\to$ an actual attack (so the earlier \u0026ldquo;robust\u0026rdquo; verdict was just a local minimum). Run #5 matched the MIP value $-1.96277$ (within a tight tolerance), showing that the NLP can reach the same minimum when started well. Takeaway: The \u0026ldquo;false positive\u0026rdquo; disappeared as soon as we diversified initializations. Notice how MIP took more than an hour to find the attack, while NLP with multi-start found it in under 5 seconds. You still don’t get global guarantees from NLP, but with cheap diversity in starts, you dramatically reduce the chance of a misleading local minimum—while keeping runtimes tiny compared to a full MIP search.\nNote: The code shared at the end includes this multi-start strategy with this exact NN for you to try out.\nWhen to escalate to MIP? 
The NLP + Sobol multi-start approach is a powerful screening tool, but there are scenarios where you might still want to escalate to the exact MIP formulation for confirmation, such as:\nIf repeated NLP runs only return small positive margins (say $0 \u0026lt; \\text{margin} \u0026lt; 1$), or the decision is high-stakes (you need a certificate), switch the exact same model to MIP and let it prove (or disprove) robustness.\nIf the model is small enough that MIP solves quickly (under a minute), you might as well run it directly for a definitive answer.\nOtherwise, for screening at scale, NLP + Sobol multi-start is a sweet spot: fast, simple to automate, and per the results of this experiment, usually agrees with globally solved MIPs on the verdict.\nOther binary-free ReLU formulations you can try There isn’t a one-size-fits-all way to encode ReLU for verification. Beyond MIP (big-M + binaries) and the NLP complementarity track you’ve seen, GAMSPy exposes an MPEC route that keeps ReLU’s logic exact without binaries by using equilibrium/complementarity conditions handled by the NLPEC solver (which is also freely available to GAMSPy academic users).\nLicensing heads-up: Some users may encounter a license issue when using the NLPEC solver. If this happens, and you’re on an Academic license, please contact us via support@gams.com so we can help resolve access.\nMathematical view Let $x$ be the pre-activation and $y = \\max(0, x)$ the ReLU output. Define the slack\n$$ s := y - x. $$\nThen ReLU is equivalent to the linear relations plus a single complementarity pair:\n$$ y \\ge 0; \\quad s \\ge 0; \\quad s = y - x; \\qquad y \\perp s, \\quad \\text{i.e., } y \\cdot s = 0. $$\nThis formulation is an exact representation of ReLU because:\nIf $x \\le 0$: a positive $y$ would require $s = 0$, i.e. $y = x \\le 0$, a contradiction; so $y = 0$ and $s = -x \\ge 0$.\nIf $x \\ge 0$: a positive $s$ would require $y = 0$, i.e. $s = -x \\le 0$, a contradiction; so $s = 0$ and $y = x \\ge 0$.\nHence $y = \\max(0, x)$ in both cases.\nGeometrically, the ReLU graph is the union of two polyhedral cones in $(x, y)$: $K_{1} = \\{ x \\le 0,\\ y = 0 \\}$ and $K_{2} = \\{ x \\ge 0,\\ y = x \\}$.\nMIP imposes this disjunction via a binary $z \\in \\{0, 1\\}$ and big-M inequalities to select either $K_{1}$ or $K_{2}$. MPEC enforces mutual exclusivity without a binary: $y \\ge 0$, $s \\ge 0$, and $y \\perp s$ ensure at most one of $\\{y, s\\}$ is positive, which picks exactly one cone. No big-M and no branch-and-bound tree. In optimization terms, each ReLU is a tiny LPCC (linear program with complementarity constraints).\nStacking them yields a network that is an MPCC/MPEC rather than a MIP.\nUsing the equilibrium-based ReLU in GAMSPy GAMSPy provides a ready-made generator:\nFunction: gp.math.activation.relu_with_equilibrium Model type: problem=\u0026quot;MPEC\u0026quot; Solver: solver=\u0026quot;nlpec\u0026quot; Drop-in wiring that replaces the ReLU formulation in the earlier code is as follows:\n# 1) Swap ReLU layers to the equilibrium-based MPEC encoding def convert_relu_equilibrium(m: gp.Container, layer: torch.nn.ReLU): return gp.math.activation.relu_with_equilibrium change = {\u0026#34;ReLU\u0026#34;: convert_relu_equilibrium} seq_formulation = gp.formulations.TorchSequential(m, network, change) y, matches, _ = seq_formulation(a1) # 2) Objective: same right-vs-wrong margin as before obj = gp.Variable(m, name=\u0026#34;z\u0026#34;) margin = gp.Equation(m, \u0026#34;margin\u0026#34;) margin[...] 
== y[f\u0026#34;{right_label}\u0026#34;] - y[f\u0026#34;{wrong_label}\u0026#34;] # 3) Build an MPEC and solve it with NLPEC model = gp.Model( m, \u0026#34;min_noise\u0026#34;, equations=m.getEquations(), objective=obj, sense=\u0026#34;min\u0026#34;, problem=\u0026#34;MPEC\u0026#34;, # \u0026lt;-- binary-free MPCC/MPEC matches=matches, # \u0026lt;-- complementarity pairs ) model.solve(solver=\u0026#34;nlpec\u0026#34;) # \u0026lt;-- NLPEC handles the equilibrium smoothing Practical tips Keep inputs/activations well-scaled as we have done in the MNIST normalization. For robustness screening, combine NLPEC with multi-start initializations of the noise vector and stop at the first negative margin. Log NLPEC’s final stationarity/feasibility and the achieved margin for auditability. Want more MPEC variants? NLPEC supports several equilibrium/complementarity formulations (e.g., Fischer–Burmeister family and relatives). Some networks/budgets favor one variant over another. You can experiment and benchmark different formulations by swapping out the ReLU generator function in the code above with one of those alternatives you can build yourself. You can find the full list of available complementarity functions in the GAMS NLPEC documentation .\nTakeaways \u0026amp; next steps Along the way, we saw the classic MIP route (exact, certifiable, but slow and impossible to solve as networks grow) and a complementarity-based NLP alternative that delivers the same verdict on most cases in a tiny fraction of the time. That speed is a superpower which can be harnessed to run multiple diverse starts via Sobol sequences, boosting our confidence in the robustness verdict without the combinatorial overhead of MIPs. The output from NLP runs can either find real attacks or provide strong practical evidence of robustness when none emerge. The beauty is in the trade-off; we give up global optimality guarantees for speed, but gain the ability to scale robustness verification to larger models and datasets.\nGAMSPy makes it easy to implement both formulations, switch between them, and run experiments at scale with minimal code changes. With just a few lines, you can embed your PyTorch model, set up the adversarial attack optimization, and choose between MIP or NLP formulations, utilizing GAMS powerful solvers under the hood with Python\u0026rsquo;s simplicity and data-friendly ecosystem.\nIf you want to dive deeper or adapt the workflow, the GAMSPy docs’ step-by-step tutorial on embedding neural networks is a great place to start; then try this code here . on your network, log your runs, and see where the frontier of robustness really lies.\n","excerpt":"A hands-on comparison of ReLU modeling in GAMSPy, MIP vs NLP, for adversarial verification on realistic networks. The workflow, code snippets, and trade-off between speed and guarantees are all explained in details.","ref":"/blog/2025/11/speed-vs.-guarantees-a-practical-mipnlp-trade-off-for-nn-robustness/","title":"Speed vs. Guarantees: A Practical MIP–NLP Trade-off for NN Robustness"},{"body":"","excerpt":"","ref":"/authors/sdirkse/","title":"Steve Dirkse"},{"body":"with Josef Kallrath (University of Florida, Gainesville, FL)\nThis three-day course is designed to help those new to GAMS become more familiar with it and gain the knowledge to model and solve simple optimization problems. The participants will receive an introduction to mathematical optimization, including modeling and solution algorithms. 
Following the course, the participants will be able to map decision problems to the basic objects of optimization models: indices, data, variables, constraints, and objective functions.\nThe course is designed for participants with no prior knowledge of GAMS, though experience with other programming languages may be beneficial. It has many hands-on examples and exercises!\nAs a new addition to the course: we will be looking at how ChatGPT/CoPilot could be used for the generation or analysis of GAMS code.\nThe course offers ample opportunity for discussion and analysis of participants\u0026rsquo; own problems, in addition to the presentation, examples, hands-on activities and exercises.\nEarly-registration discount until Sep 27, 2025\nMore information and registration Contact E-Mail: JosefKallrath.SC@sci-con.de Homepage: https://josefkallrath.github.io ","excerpt":"with Josef Kallrath (University of Florida, Gainesville, FL). This three-day course is designed to help those new to GAMS become familiar with it and gain the knowledge to model and solve simple optimization problems.","ref":"/courses/2025_11_modeling-with-gamsbasic_kallrath/","title":"Modeling and Optimization with GAMS (basic)"},{"body":"","excerpt":"","ref":"/categories/gams/","title":"GAMS"},{"body":"","excerpt":"","ref":"/categories/solvers/","title":"Solvers"},{"body":"","excerpt":"","ref":"/authors/smann/","title":"Stefan Mann"},{"body":"","excerpt":"","ref":"/authors/svigerske/","title":"Stefan Vigerske"},{"body":"Mathematical optimization depends on solvers - yet using them effectively can be daunting. At GAMS, we turn solver complexity into solver power, providing both a unified modeling interface and deep, decades-long expertise in solver development.\nFrom Solver Complexity to Solver Independence Optimization solvers are powerful but often intimidating tools. When used directly through their APIs, they expose a wide range of configuration options, parameters, and performance settings that require deep technical understanding. Each solver has its own conventions, syntax, and option names, making it difficult to transfer knowledge from one solver to another. Understanding how these settings influence results takes significant expertise, and switching between solvers becomes a time-consuming and error-prone task. By contrast, using solvers through a modeling language like GAMS or GAMSPy abstracts away these differences, allowing users to focus on formulating their optimization problems rather than dealing with solver-specific details.\nThis abstraction has practical consequences: switching from one solver to another takes a single line of code, making experimentation with different algorithms such as simplex and barrier as well as various fine-tuning option settings straightforward in GAMS and GAMSPy.\nWhat it Means to Truly Understand Solvers At GAMS, solvers are not a black box. We understand them in depth because we have worked closely with all major commercial and open-source solver developers for decades. This collaboration goes beyond simple integration. We build and maintain the interfaces that connect GAMS to each solver, giving us hands-on insight into how they work and how to make them perform at their best.\nOur quality assurance process reinforces this expertise. Every night, we run thousands of tests with all supported combinations of solvers as part of our automated testing pipeline. 
This ensures that our interfaces remain robust, efficient, and consistent, especially as solvers evolve and new ones emerge.\nBehind this effort is a team of PhD-level researchers and engineers who specialize in numerical optimization. They are responsible for maintaining solver integrations and providing direct technical support to customers who encounter solver-specific challenges. When complex issues arise, our team can rely on established communication channels with solver developers to resolve them quickly, often accessing insights that are not publicly documented.\nSome of our staff members have their roots at leading centers for optimization research, such as the Zuse Institute Berlin and the University of Wisconsin-Madison. Combined with extensive experience across industries such as engineering, economics, energy, and finance, this background positions GAMS uniquely. Our customers benefit from a rare combination of deep theoretical knowledge and decades of applied expertise in using solvers effectively.\nBeyond Integration: GAMS\u0026rsquo; Role in Solver Innovation While GAMS integrates and supports nearly all major solvers equally, we also contribute to the solver ecosystem itself. Our involvement in both open source and commercial solver projects like CONOPT and PATH reflects our technical commitment to advancing solver technology, not a commercial preference.\nCommercial Solvers We Develop and Support CONOPT is a leading nonlinear programming (NLP) solver that became part of GAMS in 2024. By bringing CONOPT fully in-house, we ensured a smooth transition from its original developer, Arne Drud from ARKI Consulting, and secured its ongoing development at GAMS. CONOPT has long been a trusted solver in computable general equilibrium models, engineering applications, and process optimization. More recently, it has shown strong potential in emerging areas that combine machine learning with optimization. This is an area the GAMSPy team at GAMS is actively driving forward.\nPATH is a solver designed for mixed complementarity problems. Steve Dirkse, President of GAMS Development Corp, is one of PATH’s original developers. The solver has earned a reputation as one of the most reliable and efficient tools in the field of Computable General Equilibrium (CGE) modeling and related applications and Steve was awarded the Beale-Orchard-Hayes Prize in 1997 for his work on PATH, together with Michael C Ferris.\nOpen Source Solvers We Help Advance GAMS has been a long time supporter of the COIN-OR foundation, an open-source community for operations research software. Beginning in 2004 with a shared fascination for simple branch-and-bound solvers, GAMS has from early times on contributed to broaden the user base of COIN-OR solvers BONMIN, CBC, CLP, COUENNE, and IPOPT by making them easily available via the GAMS distribution, an effort that was acknowledged by awarding the COIN-OR Cup 2012 jointly to AIMMS, GAMS, and MPL. CLP (COIN-OR Linear Programming) and CBC (COIN-OR Branch-and-Cut) are linear and mixed-integer linear programming solvers, developed by operations research veteran John Forrest. IPOPT (Interior Point OPTimizer) on the other hand is a large-scale nonlinear programming solver that implements an interior-point line-search filter method, developed primarily by Andreas Wächter. On top of CBC and IPOPT build the MINLP solvers BONMIN and COUENNE , developed by Pierre Bonami, Pietro Belotti, and others. 
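To illustrate how easily these solvers can be tried from the modeling layer, here is a minimal, hypothetical GAMSPy sketch (model stands for any already-built gp.Model instance; only the solver argument changes between runs):
model.solve(solver=\u0026#34;cbc\u0026#34;)     # LP/MIP with CBC
model.solve(solver=\u0026#34;ipopt\u0026#34;)   # NLP with IPOPT
model.solve(solver=\u0026#34;bonmin\u0026#34;)  # MINLP with BONMIN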
GAMS has made its links to these and other open-source solvers available within COIN-OR, which now serve as guidelines for solver developers to connect their solvers to GAMS. GAMS has also contributed to newer arrivals in the COIN-OR ecosystem, in particular HiGHS (for LP/MIP) and SHOT (MINLP). GAMS has helped maintain several of the COIN-OR solvers for many years, ensuring that users can keep leveraging their powerful capabilities.\nAs an alternative to the COIN-OR ecosystem, SCIP (Solving Constraint Integer Programs) is a versatile and powerful open-source solver for mixed-integer programming (MIP), mixed-integer nonlinear programming (MINLP), and constraint programming (CP) problems. The steady development of the SCIP Optimization Suite in a cooperation of currently eight academic institutions and several solver vendors, including Cardinal, FICO, GAMS, and Gurobi, has led to an extremely flexible and feature-rich framework for various optimization algorithms, which includes some of the fastest non-commercial solvers available today. GAMS has contributed to the development of SCIP for more than 10 years and has recently increased this investment even further; we are proud that three SCIP developers are now part of the GAMS development team. With PaPILO, SCIP, and SoPlex included in the GAMS distribution, these solvers are readily available to our users as valuable tools for a broad spectrum of discrete and continuous optimization problems.\nThe new kid on the block - GPU-Powered PDHG An exciting and very recent addition to the solver landscape is the primal-dual hybrid gradient algorithm for LPs, especially an improved version called PDLP, which stems from research by Google\u0026rsquo;s OR-Tools team . This method is well-suited (by design) to run in parallel on modern GPUs, in contrast to established LP methods like simplex or interior point methods. PDHG has quickly been integrated into solvers such as HiGHS, COPT, Gurobi, and KNITRO. Nvidia has also entered the fray and released cuOpt (also in COIN-OR), a GPU-accelerated implementation inspired by PDHG, which is tuned for the latest Nvidia GPU hardware. During a fruitful collaboration with Nvidia we were able to quickly integrate cuOpt into GAMS and make this available to anyone interested in this promising new technology. The hardware needed is still extremely expensive and will stop most potential users from exploring the benefits of the technology. However, once the required GPUs are offered by the big hyperscalers, we will integrate this into our Engine-SaaS deployment solution and make this technology available to a much wider audience.\nThese examples highlight the unique position of GAMS in the optimization landscape - not just as a platform that integrates solvers, but as an active contributor to solver development and innovation, resulting in an inside perspective on how modern solvers work.\nHow Our Solver Expertise Translates into Customer Success Our deep understanding of solvers directly translates into value for our customers. When users face difficulties getting the most out of a solver, our experts can quickly pinpoint issues, explain solver behavior, and suggest effective parameter settings or model adjustments. 
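Trying out such suggestions typically takes seconds. In GAMSPy, for instance, solver-specific options can be passed straight to the solve call; a small sketch with a hypothetical model object mymodel and an illustrative GAMS/CPLEX option:\n# Hypothetical example: apply a suggested setting without touching the model itself\nmymodel.solve(solver=\u0026#34;cplex\u0026#34;, solver_options={\u0026#34;lpmethod\u0026#34;: 4})  # e.g. switch the LP algorithm to barrier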
This support often determines whether a model converges slowly or delivers reliable, fast results.\nFor example, in collaboration with Austrian Power Grid (APG), our consulting team improved solver stability and efficiency in a large scale energy optimization model, reducing total solve time by up to 70% and memory use by nearly 80%. In another case, working with TotalEnergies, we restructured a complex MINLP model for carbon storage, cutting solve times from hours to minutes and creating a user-friendly Python + Excel interface that brought advanced optimization directly to field engineers.\nWe also help customers make informed decisions when choosing between solvers. Because GAMS provides access to virtually all leading commercial and open-source solvers, we can evaluate options objectively. We have no commercial preference for one solver over another, which allows us to give independent recommendations based solely on technical merit, problem characteristics, and - where relevant - the associated licensing costs.\nThis unique combination of deep solver knowledge, broad solver coverage, and true independence makes GAMS a trusted partner for anyone aiming to get the most out of optimization technology.\nModeling Practices That Unlock Solver Performance Even the best solver can only perform as well as the model allows. Many optimization issues stem not from the solver itself, but from the way a problem is formulated. Poor scaling, unnecessary nonlinearities, or the absence of decomposition strategies can pose additional challenges for any solver and lead to slow or unreliable results.\nAt GAMS, we see model formulation and solver performance as two sides of the same coin. Our support and consulting teams work closely with customers to identify and resolve such modeling bottlenecks. Whether through decomposition techniques, reformulations that improve numerical conditioning, or guidance on variable scaling, we help users create models that solvers can handle efficiently and robustly.\nThis combination of solver knowledge and modeling expertise allows us to deliver practical, performance-oriented solutions — ensuring that our customers choose the right solver and use it under the best possible conditions.\nFrom Insight to Impact The optimization landscape is diverse and technically demanding, and success often depends on choosing and using the right solver effectively. This is where GAMS stands out. With decades of experience, deep technical expertise, and long-standing relationships with solver vendors, we combine the advantages of independence and insight.\nOur customers benefit from this unique position. They gain access not only to a wide range of solvers but also to the collective knowledge of a team that understands how these tools work at a fundamental level. Whether it is selecting the best solver for a specific model, fine-tuning performance, or troubleshooting complex behavior, GAMS provides guidance grounded in both theory and practice.\nIn short, GAMS transforms solver complexity into solver power—helping customers achieve better model performance, faster results, and deeper confidence in their optimization solutions. As solver technology evolves - from hybrid algorithms to GPU-accelerated methods - GAMS continues to integrate these advances into a consistent, solver-independent environment.\nInterested in a discussion? Contact us at support@gams.com !\n","excerpt":"Mathematical optimization depends on solvers - yet using them effectively can be daunting. 
At GAMS, we turn solver complexity into solver power, providing both a unified modeling interface and deep, decades-long expertise in solver development.","ref":"/blog/2025/11/turning-solver-complexity-into-solver-power-the-gams-advantage/","title":"Turning Solver Complexity into Solver Power: The GAMS Advantage"},{"body":"Training The CGEMOD team has 30 years of experience providing onsite and online courses in CGE modelling.\nFrom the autumn of 2025 ALL cgemod courses will be open source. In recent years we have found that the majority of course participants do not need/use tutor support. Hence we no longer provide tutor support. All the course materials will be available from this site by following the links at the bottom of this page.\nThe Single and Global courses assume you have completed the Intro to Practical CGE Course. The Recursive Dynamic courses assume you have completed the Intro to Practical CGE Course and at least one of the Single or Global courses.\nAll the courses use GAMS with GAMS Studio as the editor, and in all the models used in the courses the data are presented as Social Accounting Matrices. We strongly recommend ‘taking’ the ‘Social Accounting Matrices’ and ‘Introduction to GAMS/GAMS Studio’ BEFORE taking any of the CGE courses.\nNB: All the courses build on the ‘Introduction to Practical CGE Course’ using techniques covered in the ‘SAM Course’ and the ‘Intro to GAMS/GAMS Studio courses’. You will save time if you follow the recommended sequence.\nCourse Licences All cgemod courses are licensed under CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0/ )\nBY - you must give appropriate credit and provide a link to the license, NC — you may not use the material for commercial purposes. ND — If you remix, transform, or build upon the material, you may not distribute the modified material.\nGAMS License A Demo license is included with a GAMS distribution (see /sales/licensing/). This is adequate for the Introduction to Practical Computable General Equilibrium (CGE) Modelling course. The more advanced courses need a license with more generous size restrictions. If you do not have a GAMS licence for the Base GAMS module and the PATH solver (the SAM Estimation course requires PATH and CONOPT solvers and, ideally, the KNITRO solver), you need to contact GAMS (e: sales@gams.com ). See /sales/licensing/ for licence options. After the course you should purchase a GAMS license.\nVisit the CGE Website for general information Open Access Courses: (In recommended sequence)\nSocial Accounting Matrices Intro to GAMS/GAMS Studio Intro to Practical CGE Course Practical Single Country CGE Practical Global CGE Recursive Dynamic CGE SAM Estimation course ","excerpt":"with cgemod","ref":"/courses/2025_11_cgemod_oa/","title":"CGE Modelling Courses (Open Access)"},{"body":"","excerpt":"","ref":"/categories/conference-report/","title":"Conference Report"},{"body":"GAMS Highlights GAMSPy at INFORMS Annual Meeting 2025 The INFORMS Annual Meeting 2025 in Atlanta was a productive and inspiring event for the GAMS team. Over five days, Steven Dirkse, Adam Christensen, Baudouin Brolet, and Maurice Jansen connected with academics, industry professionals, and software developers to showcase the latest innovations around GAMSPy.\nFocus on GAMSPy GAMSPy drew strong attention throughout the conference. Steve and Adam led an introductory workshop and two follow-up sessions that sparked lively discussions about modeling and migration from GAMS to GAMSPy. 
The positive response confirmed growing momentum around our Python-based modeling environment.\nBooth Activity and Academic Interest The GAMS booth quickly became a hub for professors, researchers, and students interested in GAMSPy. Many were excited to learn that it’s free for academic use, resulting in a strong wave of new sign-ups for our academic program.\nIndustry and Community Connections Alongside academic engagement, we connected with potential commercial partners, strengthening our pipeline of future collaborations. The exhibition also featured other major players in the field, with NVIDIA’s COIN-OR Cup win for cuOpt highlighting the community’s strong focus on high-performance computing.\nA Great Experience in Atlanta Beyond the professional success, the team enjoyed exploring Atlanta together, guided by Adam - from visits to Georgia Tech to their first hands-on experience with a Waymo driverless car.\nThe 2025 meeting reaffirmed GAMS’s active role in the optimization community and inspired new ideas and connections for the year ahead.\nSign up for our general information newsletter to stay up-to-date! Our Abstracts Pre-conference Workshop: An Introduction to Modeling with GAMSPy Presented by: Adam Christensen\nThis workshop offers a hands-on introduction to GAMSPy. GAMSPy combines the high-performance GAMS execution system with the flexible Python language, creating a powerful mathematical optimization package. It acts as a bridge between the expressive Python language and the robust GAMS system, allowing you to effortlessly create complex mathematical models and applications.\nJoin us to explore GAMSPy\u0026rsquo;s fundamental functionalities through practical, interactive exercises. We\u0026rsquo;ll cover everything from defining sets, parameters, variables, and equations to solving models and retrieving results, all within a familiar Python environment. Beyond the basics, we\u0026rsquo;ll also provide a glimpse into more advanced features, demonstrating how GAMSPy can streamline complex modeling workflows and enhance your analytical capabilities.\nWhether you\u0026rsquo;re a seasoned GAMS user looking to integrate with Python or a Python user curious about optimization, this workshop will equip you with essential skills needed to get started and demonstrate what is possible with GAMSPy.\nExhibitor Technology Showcases Mathematical programs with embedded surrogate models using GAMSPy Presented by: Adam Christensen\nRecent advances in ML/AI have commoditized the development of surrogate models using tools such as PyTorch, Scikit-Learn, and TensorFlow. These surrogate models simplify inherently non-linear phenomena, approximating complex behaviors so they can serve as constraints in optimization frameworks. Embedding these models in algebraic modeling languages (AMLs) like GAMS remains challenging: designed for sparse algebra, AMLs lack seamless integration with third-party software. The rise of Python in data science has motivated a paradigm shift, inspiring tools that bridge classical AMLs and current computational techniques.\nWe introduce GAMSPy, a native Python AML combining the mathematical transparency and scalability of traditional AMLs with Python’s ecosystem. Its set-driven constructs and operator overloading preserve the syntax of handwritten algebra while supporting dense matrix operations—matrix multiplication, transposition, norms—essential to ML/AI. 
While the GAMS “classic” engine excels at indexed algebra, GAMSPy extends its capabilities to accommodate ML workflows.\nWe demonstrate embedding a neural network trained in PyTorch to model an energy system as a constraint within an optimization problem, enabling system engineers to optimize plant operations with detailed energy conversion models. This workflow exemplifies applications spanning weather forecasting and market behavior modeling. We also compare GAMSPy to existing approaches, discuss future developments, and highlight innovative intersections of mathematical modeling and machine learning.\nGAMSPy represents a significant convergence of AML rigor and Python-driven ML versatility. Its design prioritizes computational efficiency, syntactic clarity, and scalability, offering a robust platform that overcomes integration hurdles and unlocks new possibilities at the intersection of optimization and data science.\nAn Introduction to Modeling with GAMSPy Presented by: Adam Christensen \u0026amp; Steven Dirkse\nOur showcase offers a hands-on introduction to GAMSPy. GAMSPy combines the high-performance GAMS execution system with the flexible Python language, creating a powerful mathematical optimization package. It acts as a bridge between the expressive Python language and the robust GAMS system, allowing you to effortlessly create complex mathematical models and applications.\nJoin us to explore GAMSPy\u0026rsquo;s fundamental functionalities through practical, interactive exercises. We\u0026rsquo;ll cover everything from defining sets, parameters, variables, and equations to solving models and retrieving results, all within a familiar Python environment. Beyond the basics, we\u0026rsquo;ll also provide a glimpse into more advanced features, demonstrating how GAMSPy can streamline complex modeling workflows and enhance your analytical capabilities.\nWhether you\u0026rsquo;re a seasoned GAMS user looking to integrate with Python or a Python user curious about optimization, this workshop will equip you with essential skills needed to get started and demonstrate what is possible with GAMSPy.\nCheck our presentation slides for more information:\nName: Size / byte: GAMSPy_INFORMS_WS.pdf 2042503 \u0026times; Previous Next Close ","excerpt":"We had a great week at the INFORMS Annual Meeting 2025 in Atlanta, with an active booth and well-attended presentations. It was great to see strong interest in GAMSPy and connect with so many users and researchers. Thanks to everyone who stopped by!","ref":"/blog/2025/10/informs-annual-meeting-in-atlanta/","title":"INFORMS Annual Meeting in Atlanta"},{"body":"","excerpt":"","ref":"/authors/mbuetzler/","title":"Marius Bützler"},{"body":"","excerpt":"","ref":"/categories/news/","title":"News"},{"body":"It\u0026rsquo;s been an exciting year for the GAMSPy community! Since the stable 1.0.0 release, we\u0026rsquo;ve been hard at work making our favorite optimization modeling package even more powerful, intuitive, and efficient. If you haven\u0026rsquo;t updated in a while, you\u0026rsquo;re in for a treat.\nThis post will walk you through the major enhancements and new features that will improve your workflow. Let\u0026rsquo;s dive in!\n1. A Sleek New CLI with Typer Your command-line experience just got a major upgrade. 
We\u0026rsquo;ve migrated GAMSPy CLI to Typer , a modern and robust framework to build CLI applications.\nWhat does this mean for you?\nBetter Help \u0026amp; Autocompletion: Get where you\u0026rsquo;re going faster with intelligent tab-completion and clearer help messages for all commands.\nA Modern Feel: The CLI is now more intuitive and user-friendly.\nGDX Dump and Diff Utilities: GAMSPy CLI has been extended with a new gdx api. You can dump the contents of gdx files with gamspy gdx dump \u0026lt;filename\u0026gt; and compare two gdx files with gamspy gdx diff \u0026lt;file1\u0026gt; \u0026lt;file2\u0026gt;.\nSimply run gamspy --help in your terminal to see the difference!\n2. Intuitive Access to Records Accessing the records of your expressions is now more direct. In order to get the results of an expression, you normally assign it to a symbol with a matching domain and print symbol.records. You can now use the .records property of expressions to get a clean DataFrame directly. GAMSPy will take care of the creation of the intermediate symbol.\nimport gamspy as gp m = gp.Container() i = gp.Set(m, records=[\u0026#39;i1\u0026#39;, \u0026#39;i2\u0026#39;]) a = gp.Parameter(m, domain=i, records=[(\u0026#39;i1\u0026#39;, 1), (\u0026#39;i2\u0026#39;, 2)]) b = gp.Parameter(m, domain=i, records=[(\u0026#39;i1\u0026#39;, 3), (\u0026#39;i2\u0026#39;, 4)]) # Old way c = gp.Parameter(m, domain=i) c[i] = a[i] + b[i] print(c.records) # New, simple way: print((a[i] + b[i]).records) This syntax simplifies debugging and data analysis, making your code cleaner and easier to read.\n3. Simplified Lag/Lead Syntax Writing dynamic models with time-series data is now much more natural. We\u0026rsquo;ve replaced .lead() and .lag() methods with simple arithmetic operators. This makes your equations look much closer to their mathematical representation.\nBefore:\ninventory_balance[t] = inventory[t] == inventory[t.lag(1)] + production[t] - sales[t] After:\ninventory_balance[t] = inventory[t] == inventory[t-1] + production[t] - sales[t] 4. Save and Load Your Work with Serialization Ever wanted to save the state of your model and come back to it later? Now you can! GAMSPy containers can be easily serialized (saved to a file) and deserialized (loaded from a file). This is perfect for checkpointing long-running processes, sharing your model setup, or simply pausing your work.\nimport gamspy as gp # Assuming \u0026#39;m\u0026#39; is your Container object # Save the entire model state to a file gp.serialize(m, \u0026#34;serialized.zip\u0026#34;) # Later, you can load it back m2 = gp.deserialize(\u0026#34;serialized.zip\u0026#34;) 5. Speed Up Data Loading with Bulk setRecords Loading data into your symbols just got a lot faster. Instead of calling setRecords individually for each symbol, you can now pass a dictionary of symbols and their records to a single setRecords call on the container. This significantly reduces overhead for data-heavy models.\nimport gamspy as gp m = gp.Container() i = gp.Set(m) j = gp.Set(m) # Pass all records in a single, efficient call m.setRecords({i: range(5), j: range(5, 10)}) # instead of doing separate calls for each symbol i.setRecords(range(5)) j.setRecords(range(5, 10)) 6. Effortless Summation with .sum() Here\u0026rsquo;s another great piece of syntactic sugar. Instead of wrapping a symbol in a Sum statement, you can now call the .sum() method directly on the symbol itself. 
It\u0026rsquo;s a small change that can make your code cleaner.\nBefore:\nimport gamspy as gp m = gp.Container() i = gp.Set(m) j = gp.Set(m) distances = gp.Parameter(m, domain=[i, j]) total = gp.Parameter(m) # The classic way total[...] = gp.Sum((i, j), distances[i, j]) # The new, concise way total[...] = distances.sum() The same syntactic sugar is also available for product , smin , smax , sand and sor operations.\n7. Configure GAMSPy with Package Options You now have more control over GAMSPy\u0026rsquo;s behavior through new package options . This allows you to set global configurations for your sessions, such as skipping validations for performance improvement.\nimport gamspy as gp gp.set_options({\u0026#34;SOLVER_OPTION_VALIDATION\u0026#34;: 0}) ... ... your_model_definition ... ... In the example above, we disable solver option validations that GAMSPy performs to make sure that the provided solver options are valid. After you make sure that your model behaves the way you planned, you can just disable the validations to gain extra performance.\n8. Automatic Name Inference With GAMSPy 1.17.0, the behavior of automatic name generation for symbols with no names has changed. For example, if you execute the following code snippet in GAMSPy 1.0.0:\nimport gamspy as gp with gp.Container(): i = gp.Set() print(f\u0026#34;Name of the set: {i.name}\u0026#34;) GAMSPy would generate an autoname that would look like the following:\nName of the set: s0f54b804_41a5_461a_a6b9_69bac2beb72c With the changes in 1.17.0, GAMSPy now tries to get the Python variable name from the frames in the stack and assign it to the symbol. So, the same code snippet above would now print:\nName of the set: i Beware that the frames might not always be available (e.g. in certain REPL sessions). In that case, GAMSPy will still generate a name automatically. This behavior can be controlled via USE_PY_VAR_NAME .\n9. Rename Symbols When Loading from GDX Importing data from GDX files is now more flexible. The symbol_names argument of container.loadRecordsFromGdx method now accepts a dictionary, allowing you to map GDX symbol names to different names within your GAMSPy model. This is incredibly useful for avoiding name collisions or aligning imported data with your existing naming conventions.\nimport gamspy as gp m = gp.Container() model_demand = gp.Parameter(m, \u0026#34;model_demand\u0026#34;) m.loadRecordsFromGdx( \u0026#34;data.gdx\u0026#34;, symbol_names={\u0026#34;gdx_demand\u0026#34;: \u0026#34;model_demand\u0026#34;} ) 10. Performance Boosts Who doesn\u0026rsquo;t love a speed-up? We\u0026rsquo;ve invested significant effort under the hood to make GAMSPy faster and more memory-efficient. Many models will be generated more quickly, letting you iterate on your work faster than ever before. For example, a simple benchmark on indus89.py model shows that there is 31% improvement in the model generation time between GAMSPy 1.0.0 and GAMSPy 1.17.0.\nhyperfine -w 3 -r 10 \u0026#39;python indus89.py\u0026#39; Benchmark 1 (GAMSPy 1.17.0): python indus89.py Time (mean ± σ): 1.514 s ± 0.029 s [User: 2.900 s, System: 0.157 s] Range (min … max): 1.484 s … 1.565 s 10 runs Benchmark 2 (GAMSPy 1.0.0): python indus89.py Time (mean ± σ): 2.197 s ± 0.067 s [User: 2.671 s, System: 0.097 s] Range (min … max): 2.078 s … 2.335 s 10 runs 11. Enhancements for Machine Learning Workflows We\u0026rsquo;re continuing to bridge the worlds of mathematical optimization and machine learning. 
Recent updates include:\nExpanded Model Support: We now support embedding a wider range of neural network blocks and constructs, including: Neural Networks: Conv1d , Conv2d , Linear (Dense) , flatten_dims , TorchSequential , and all pooling layers (MaxPool2d , MinPool2d , AvgPool2d ). Activation Functions: Support for LeakyReLU and other variants of the ReLU family. Decision Models: Integration of DecisionTree , RandomForest , and GradientBoosting models. Enhanced Performance: Improved stability for matrix multiplication, particularly in edge cases. Better bound propagation for a variety of blocks, including Linear, flatten_dims, and all pooling and convolutional layers. New Capabilities: Support for piecewise linear functions. The ability to use singleton sets as domains. New examples demonstrating the embedding of feedforward and convolutional neural networks. import gamspy as gp import torch.nn as nn from gamspy.math import dim m = gp.Container() model = nn.Sequential( nn.Conv2d(1, 20, 5), nn.ReLU(), nn.Conv2d(20, 64, 5), nn.ReLU() ) # load or train your Sequential model # ... m = gp.Container() x = gp.Variable(m, domain=dim([1, 1, 32, 32])) seq_formulation = gp.formulations.TorchSequential(m, model) y, eqs = seq_formulation(x) # apply Sequential model to x and get y print(y.domain) # 1, 64, 24, 24 -\u0026gt; batch, channels, height, width Conclusion The latest version of GAMSPy is all about making you more productive. With a slicker CLI, more Pythonic syntax, major performance gains, and powerful new features like serialization, there\u0026rsquo;s never been a better time to build your optimization models.\nWe encourage you to upgrade to the latest version (pip install gamspy --upgrade) and give these new features a try. Check out our documentation for more details and let us know what you think!\nHappy modeling!\n","excerpt":"It\u0026rsquo;s been an exciting year for the GAMSPy community! Since the stable 1.0.0 release, we\u0026rsquo;ve been hard at work making our favorite optimization modeling package even more powerful, intuitive, and efficient.","ref":"/blog/2025/10/whats-new-in-gamspy-a-look-at-the-awesome-features-since-version-1.0.0/","title":"What's New in GAMSPy? A Look at the Awesome Features Since Version 1.0.0!"},{"body":"The annual conference of the Society for Operations Research (GOR e.V.) took place in Bielefeld from September 2–5, 2025, hosted by Bielefeld University. This year’s theme, “Operations Research in a Complex World,” brought together participants from academia and industry to discuss ongoing developments in the field.\nThe GAMS team was present with several contributions and enjoyed connecting with colleagues and users. While the program offered a wide range of topics, for us the highlight was the opportunity to present and discuss our own work with the OR community.\nWe are happy to share our abstracts and presentation slides once again here, so that those who could not attend—or would like a second look—can dive deeper into our contributions.\nOur thanks go to the organizers, speakers, and participants for their efforts in putting together this year’s GOR meeting. We look forward to continuing the exchange of ideas and to next year’s conference.\nThe abstracts:\nPre-Conference Workshop An Introduction to Modelling with GAMSPy Workshop Organizers: Frederik Fiand \u0026amp; Lutz Westermann This 90-minute workshop offers a hands-on introduction to GAMSPy. 
GAMSPy combines the high-performance GAMS execution system with the flexible Python language, creating a powerful mathematical optimization package. It acts as a bridge between the expressive Python language and the robust GAMS system, allowing you to create complex mathematical models effortlessly.\nJoin us to explore GAMSPy\u0026rsquo;s fundamental functionalities through practical, interactive exercises. We\u0026rsquo;ll cover everything from defining sets, parameters, variables, and equations to solving models and retrieving results, all within a familiar Python environment. Beyond the basics, we\u0026rsquo;ll also provide a glimpse into more advanced features, demonstrating how GAMSPy can streamline complex modeling workflows and enhance your analytical capabilities.\nWhether you\u0026rsquo;re a seasoned GAMS user looking to integrate with Python or a Python user curious about optimization, this workshop will equip you with the essential skills to get started with GAMSPy.\nOur GAMS presentations: Embedding Neural Networks into Optimization Models with GAMSPy Authors: Frederik Fiand, Michael Bussieck, Hamdi Burak Usul\nGAMSPy is a powerful mathematical optimization package that integrates Python\u0026rsquo;s flexibility with GAMS\u0026rsquo;s modeling performance. Python features many widely used packages to specify, train, and use machine learning (ML) models like neural networks. GAMSPy bridges the gap between ML and conventional mathematical modeling by providing helper classes for many commonly used neural network layer formulations and activation functions. These allow a compact description of the network architecture that gets automatically reformulated into model expressions for the GAMSPy model.\nIn this talk, we demonstrate how GAMSPy can seamlessly embed a pretrained neural network into an optimization model. 
We also explore the utility of GAMSPy\u0026rsquo;s automated reformulations for neural networks in various applications, such as adversarial input generation, model verification, customized training, and leveraging predictive capabilities within optimization models.\nA Whole New Look for CONOPT Authors: Lutz Westermann, Michael Bussieck\nFollowing GAMS\u0026rsquo; recent acquisition of CONOPT from ARKI Consulting \u0026amp; Development A/S, this presentation delves into the continuous evolution of this robust nonlinear optimization solver, emphasizing the advancements introduced in the latest release and the strategic implications of the new ownership.\nThe latest iteration of CONOPT introduces new APIs, e.g, for C++ and Python, opening up new possibilities for a clean, efficient, and robust integration into various software environments and projects requiring nonlinear optimization.\nFinally, we will demonstrate the practical application of providing derivatives to CONOPT, an important step that is often necessary to achieve the best possible performance.\nCheck our presentation slides for more information:\nName: Size / byte: A Whole New Look for CONOPT.pdf 1174945 Embedding neural networks into optimization models with GAMSPy.pdf 1904807 GAMSPy Workshop OR2025.pdf 3592341 ","excerpt":"The GOR Annual Meeting 2025 took place in Bielefeld from September 2–5, hosted by Bielefeld University under the theme Operations Research in a Complex World","ref":"/blog/2025/09/gor-2025-in-bielefeld/","title":"GOR 2025 in Bielefeld"},{"body":" It’s been one year, and what started as an idea to bring GAMS technology natively into Python has become a trusted tool for the optimization community. GAMSPy, our Python-native interface to the GAMS modeling system, has grown from a concept to a globally used tool.\nWith thousands of academic users, adoption at leading universities, and growing use in professional workflows, this first anniversary highlights our technical progress and community engagement. Looking ahead, the next year will focus on expanding functionality, strengthening the ecosystem, and supporting the growing number of researchers and practitioners who rely on GAMSPy.\nThe idea is simple, you write models in idiomatic Python, and in the background the GAMS execution system handles deterministic model generation and solves them with our portfolio of free and commercial solvers. We built it to meet modelers where they work - Python - so teaching, prototyping, and production use the same environment with no rewrites.\nFor academia this was a step change: for the first time, academic users can access the full power of GAMS model generation and commercial-grade solvers with free academic licenses - enabling larger student assignments and research projects.\nOne year in, GAMSPy has been adopted by thousands of academic users at leading universities. Initially, most adoption came from individual users obtaining licenses, but we now see clear evidence of increased classroom use. In its inaugural year, we have distributed roughly 7,500 academic GAMSPy licenses in 95 countries, with adoption at 79 of the world’s top 100 universities –clear evidence of its value in teaching and research.\nFor commercial teams, GAMSPy shortens time-to-value by keeping models in Python while preserving solver independence and performance, making it easier to plug optimization into existing data and CI/CD pipelines. 
On top, the integration with GAMS MIRO and GAMS Engine allows creating user interfaces for analysts and highly scalable deployments.\nOf course, we do not yet have commercial usage numbers comparable to academia, but the appeal of GAMSPy is clear: it accelerates the path from prototype to production by keeping optimization models within Python while maintaining solver independence. Python acts as a high-level abstraction layer, delegating heavy computational tasks to the GAMS backend. Benchmarks show minimal overhead - about 27% on Linux and 8% on Windows - figures negligible in real-world cases where solver runtime dominates execution.\nMachine Learning In its first year, GAMSPy has gradually introduced more and more constructs familiar from machine learning, such as linear layers, pooling operators (max, min, average), and activation functions (e.g., Leaky ReLU). While GAMSPy is not a deep learning framework, these features provide a compact way to express neural-network-inspired structures within optimization models. This opens new avenues at the intersection of optimization and machine learning, supporting research in adversarial training, hyperparameter tuning, robust learning, and hybrid models that integrate neural components with mathematical programming.\nFor academic users, having ML-style operators in GAMSPy means they can formulate hybrid models that combine discrete/continuous optimization with neural-network layers, while still solving them with the full power of commercial optimization solvers - and with no license cost in teaching and research. For commercial teams, these features open the door to decision-focused learning or model compression problems where optimization and ML intersect. In short, the ML functions extend GAMSPy beyond “classic” mathematical programming into the fast-growing space of optimization-aware ML and ML-augmented optimization.\nCommunity and resources Alongside the software, we\u0026rsquo;ve built a place where users can connect: the GAMS Forum . When questions or problems arise, GAMSPy users don’t have to work in isolation –they can post directly to the forum and get input from both the GAMS team and other experienced community members. Because the discussions are public and searchable, the forum quickly becomes a knowledge base of practical solutions, tips, and workarounds.\nFor academic users, this means faster answers when teaching or research deadlines are tight. For commercial users, it provides qualified second opinions and guidance on advanced modeling or integration issues. The GAMS Forum ensures that GAMSPy is more than just a resource —it is a tool backed by an active, knowledgeable community.\nBeyond our forum, GAMSPy is supported by excellent documentation at https://gamspy.readthedocs.io , which offers detailed guides and API references. Plus, we also provide a great selection of practical examples on GitHub and will be developing more online teaching resources with trusted partners to integrate GAMSPy into academic and professional training.\nFinal thoughts The rapid growth and adoption of GAMSPy in its first year underscore its significant impact on the mathematical optimization community. We’ve seen students and researchers develop novel and creative optimization pipelines fully integrated into Python, such as those highlighted in our first GAMSPy student competition . 
As we look to the future, we remain committed to enhancing GAMSPy\u0026rsquo;s capabilities and fostering a vibrant ecosystem, empowering an ever-wider range of users to leverage the power of GAMS within Python for their research and professional endeavors. At the same time, we continue to extend and improve GAMS itself - GAMS is the workhorse underpinning GAMSPy\u0026rsquo;s performance, and will continue to have its place for our users.\n","excerpt":"It’s been one year, and what started as an idea to bring GAMS technology natively into Python has become a trusted tool for the optimization community.","ref":"/blog/2025/09/gamspy-at-one-advancing-optimization-in-python/","title":"GAMSPy at One: Advancing Optimization in Python"},{"body":"","excerpt":"","ref":"/authors/ffiand/","title":"Fred Fiand"},{"body":"Update: The article has been extended with recently added cuOpt options from release 25.10 and to reflect that the GAMS/cuOpt solver link is now available for both CUDA 12 and CUDA 13.\nFor a long time, GPUs were essential for AI and other high-performance computing applications but had limited impact on mathematical optimization. That’s changing.\nThe paper \u0026ldquo;Practical Large-Scale Linear Programming using Primal-Dual Hybrid Gradient \u0026rdquo; by Applegate et al. (2021) ignited considerable interest with its primal-dual linear programming (PDLP) implementation, proving the method\u0026rsquo;s effectiveness on real-world problems. This led many solvers to incorporate GPU-accelerated PDLP implementations into their algorithmic offerings. Public interest has further intensified with the open-sourcing of NVIDIA\u0026rsquo;s own GPU-accelerated LP/MIP and VRP solver, cuOpt .\nAt GAMS we’re deeply committed to improving computational efficiency for large-scale linear programs, such as those that occur in energy systems modeling. This commitment is demonstrated through projects like PEREGRINE , which develops and utilizes innovative solver technology such as PIPS-IPM++ , a parallel, high-performance computing (HPC) capable interior point method tailored for block-structured LPs, which often arise naturally in large-scale problems that have repetitive substructures. Our ongoing efforts also include exploring cutting-edge technologies like GPU-accelerated optimization with PDLP, to effectively address the challenges of modern optimization.\nWe are therefore happy to announce the GAMS/cuOpt Link , developed in collaboration with NVIDIA. This offering allows you to solve GAMS and GAMSPy models using cuOpt and further expands our collection of GPU-accelerated solvers, which already features COPT and HiGHS (both utilizing a GPU-accelerated implementation of PDLP; COPT also has a GPU-accelerated Barrier).\nThis blog post focuses on the practical application of GAMS/cuOpt rather than extensive benchmarking. It aims to address:\nWhat model types does cuOpt support What hardware is needed How can cuOpt be utilized through GAMS and GAMSPy What model types does cuOpt support? NVIDIA cuOpt offers a flexible set of solver methods and features for optimizing Linear Programming (LP), Mixed-Integer Linear Programming (MILP), and Vehicle Routing Problems (VRP)1. This versatility allows users to tailor its operation to specific problem characteristics and desired solution qualities.\ncuOpt offers four LP methods. The default concurrent method runs PDLP (GPU), Barrier (GPU), and Dual Simplex (CPU) simultaneously, returning the fastest solution. 
The GPU-based PDLP method is designed for large-scale memory-intensive LPs and can work on model instances that often run out of memory with the Barrier method. Dual Simplex is a classic CPU-based method suited for small to medium-sized LPs, particularly when a high-quality basic solution is required.\nFor MIP, NVIDIA cuOpt uses a hybrid GPU/CPU method, running primal heuristics on the GPU and improving the dual bound on the CPU.\nWhat hardware is needed? cuOpt requires an NVIDIA GPU with a Volta architecture or newer. Powerful GPUs such as the NVIDIA V100, A100, and H100/H200 (as seen in benchmarks ) can be deployed on-premise but the seamless integration of GAMS/cuOpt into GAMS Engine SaaS makes these capabilities more accessible to those without GPU-accelerated infrastructure.This cloud-based solution provides users with access to the precise hardware needed for their optimization problems, eliminating the need for substantial upfront investment.\nHow can cuOpt be utilized through GAMS and GAMSPy? GAMS/cuOpt can be set up to run locally or it can be run on the cloud via GAMS Engine SaaS.\nInstalling and using cuOpt locally with GAMS and GAMSPy Before you begin, ensure your system meets the following system requirements:\nOperating System: Linux GAMS: Version 49 or newer. GAMSPy: Version 1.12.1 or newer NVIDIA GPU: Volta architecture or better CUDA Runtime Libraries: 12.8+ or 13 Installation Steps Download and unpack cuopt-link-release-cu12.zip or cuopt-link-release-cu13.zip from the latest GitHub release : Unpack the contents of cuopt-link-release-cu*.zip into your GAMS system directory2. Caution: This will overwrite any existing gamsconfig.yaml file in that directory. The provided gamsconfig.yaml includes a solverConfig section that enables cuOpt for use with GAMS.\nDownload and unpack cu12-runtime.zip or cu13-runtime.zip from the latest GitHub release (if needed): If your machine is missing the CUDA runtime libraries, unpack cu*_runtime.zip into the same GAMS system directory.\nHint: The GAMSPy/cuOpt example notebooks automate these steps and allow you to get started right away.\nRunning GAMS/cuOpt and GAMSPy/cuOpt locally After installation, you can run a GAMS or GAMSPY model with cuOpt.\nGAMS To load and solve the trnsport model from the GAMS model library, execute:\ngamslib trnsport gams trnsport lp=cuopt GAMSPy To use cuOpt with GAMSPy (e.g. with the transport.py example), set cuOpt as solver in the solve function and set option gamspy.set_options({\u0026quot;SOLVER_VALIDATION\u0026quot;: 0}).\n[...] transport.solve(solver=\u0026#34;cuopt\u0026#34;) Running cuOpt in the cloud via GAMS Engine SaaS An alternative to a local installation is to use cuOpt via GAMS Engine SaaS. This cloud-based platform offers a centralized solution for submitting GAMS jobs. It supports various client applications such as GAMS Studio , GAMS MIRO , the GAMS Engine Web UI , and custom scripts in Python and other languages. This flexibility allows users to integrate cuOpt into existing GAMS and GAMSPy workflows and lowers the barrier to entry by avoiding the need for significant on-premise GPU investments. Users gain access to necessary hardware configurations without substantial upfront capital expenditure, making GPU-accelerated optimization more accessible and scalable.\nNot yet a GAMS/Engine SaaS user? 
Request your free test account at sales@gams.com and let the team know you want to try GAMS/cuOpt.\nRunning GAMS/cuOpt on Engine SaaS through GAMS Studio The process of running GAMS/cuOpt on Engine SaaS through GAMS Studio offers a user experience akin to executing GAMS jobs locally within Studio.\nHint: Are you a Windows or Mac user working with GAMS Studio? No Problem. While cuOpt is only available for Linux, you can submit your GAMS/cuOpt jobs from GAMS Studio under Windows and Mac as well.3\nSelect \u0026ldquo;Run GAMS Engine\u0026rdquo; Remember to configure cuOpt as your LP solver, either through the command-line parameter (as demonstrated below) or by including an appropriate option statement within your GAMS code. Should you not already be logged in, the GAMS Engine Login Dialog will appear. Please choose your preferred sign-in method to proceed with logging in. The Submit Job dialog box will appear. Select the desired Namespace and instance for submission. Optionally, assign a tag to the job before clicking OK. Your job compiles locally and is then submitted to Engine SaaS for execution. You can monitor its progress in the Process Log. Upon completion, the results and all output files are downloaded and become accessible on your local system. --- Job indus89.gms Start 06/23/25 22:07:03 50.1.0 12b75dde WEX-WEI x86 64bit/MS Windows [...] --- Starting compilation --- indus89.gms(3485) 4 Mb *** Status: Normal completion --- Job indus89.gms Stop 06/23/25 22:07:03 elapsed 0:00:00.038 adding: indus89.gms (164 bytes security) (stored 0%) adding: indus89.g00 (164 bytes security) (deflated 5%) --- GAMS Engine at https://engine.gams.com:443/api --- switch LOG to indus89-server.lst TOKEN: b88ca408-f993-42ff-afe6-972c2ac1fae0 --- Job queued (80 sec) --- Job indus89.gms Start 06/23/25 20:10:18 49.6.1 55d34574 LEX-LEG x86 64bit/Linux --- Applying: /home/jail/opt/gams/gmsprmun.txt /home/jail/opt/gams/gamsconfig.yaml --- GAMS Parameters defined LP cuopt Restart /home/gfreeman/indus89.g0? Input /home/gfreeman/indus89.gms [...] --- Reset Solvelink = 1 --- 2,726 rows 6,570 columns 39,489 non-zeroes --- Range statistics (absolute non-zero finite values) --- RHS [min, max] : [ 8.438E-04, 2.170E+04] - Zero values observed as well --- Bound [min, max] : [ 5.000E-04, 2.713E+03] - Zero values observed as well --- Matrix [min, max] : [ 4.104E-04, 1.000E+06] --- Executing CUOPT (Solvelink=1): elapsed 0:00:00.117 GAMS/cuOpt link was built against cuOpt version: 25.10.00, git hash: 99e549ce0d4c67f1383187d719f7fd5a7fed33de Setting parameter log_file to /home/gfreeman/225a/cuopt.dat cuOpt version: 25.10.0, git hash: 99e549c, host arch: x86_64, device archs: 70-real,75-real,80-real,86-real,90a-real,100f-real,120a-real,120 CPU: Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz, threads (physical/logical): 13/26, RAM: 227.96 GiB CUDA 12.9, device: NVIDIA H100 80GB HBM3 (ID 0), VRAM: 79.11 GiB CUDA device UUID: 3c527f11-4f7e-0effffff95-5758-ffffff Solving a problem with 2725 constraints, 6569 variables (0 integers), and 36535 nonzeros Problem scaling: Objective coefficents range: [2e-01, 1e+06] Constraint matrix coefficients range: [4e-04, 1e+04] Constraint rhs / bounds range: [0e+00, 2e+04] Variable bounds range: [5e-04, 3e+03] Warning: input problem contains a large range of coefficients: consider reformulating to avoid numerical difficulties. 
Third-party presolve is disabled, skipping Objective offset -0.000000 scaling_factor -1.000000 Running concurrent Dual simplex finished in 0.72 seconds, total time 3.62 Barrier finished in 3.95 seconds Iter Primal Obj. Dual Obj. Gap Primal Res. Dual Res. Time 0 -0.00000000e+00 -0.00000000e+00 0.00e+00 4.05e+04 2.50e+07 5.511s PDLP finished Concurrent time: 2.609s, total time 5.513s Solved with dual simplex Status: Optimal Objective: 1.14873646e+05 Iterations: 6608 Time: 5.513s --- Reading solution for model wsisn[LS2:2523] --- Executing after solve: elapsed 0:01:00.859[LS2:12126] --- GDX File C:\\Users\\ffian\\Documents\\GAMS\\Studio\\workspace\\indus89.gdx *** Status: Normal completion[LS2:12142] --- Job indus89.gms Stop 06/23/25 20:11:19 elapsed 0:01:00.869 Archive: solver-output.zip inflating: indus89.lst inflating: indus89.lxi --- extracting: .\\indus89-temp\\indus89.gms inflating: solver.log inflating: indus89.gdx inflating: indus89.g00 *** Local file updated: indus89.gdx *** Local file updated: indus89-server.lst *** Local file updated: indus89-server.lxi *** Local file updated: indus89-solver.log Running GAMSPy/cuOpt on Engine SaaS In order to submit your GAMSPy model to Engine SaaS for solving, you need to define the GAMS Engine configuration by importing EngineClient and creating an instance, which can then be passed to the solve method with the backend specified as \u0026rsquo;engine'.\nTo adapt the GAMSPy model library\u0026rsquo;s mexss model for use with cuOpt under Engine, some modifications are necessary.\n[...] from gamspy import ( Container, Equation, [...], EngineClient ) [...] #create EngineClient instance. Submit to namespace gpu_tests and select an instance with NVIDIA GPU client = EngineClient( host=\u0026#34;https://engine.gams.com/api\u0026#34;, username=os.environ[\u0026#34;ENGINE_USER\u0026#34;], password=os.environ[\u0026#34;ENGINE_PASSWORD\u0026#34;], namespace=\u0026#34;gpu_tests\u0026#34;, engine_options={\u0026#34;labels\u0026#34;: \u0026#34;instance=g6.4xlarge\u0026#34;} ) #solve with solver cuopt and engine backend mexss.solve(solver=\u0026#34;cuopt\u0026#34;, backend=\u0026#34;engine\u0026#34;, client=client, output=sys.stdout) [...] The Terminal output will look as follows:\n(gamspy) PS C:\\Users\\ffian\u0026gt; \u0026amp; C:/Users/ffian/.conda/envs/gamspy/python.exe c:/Users/ffian/Documents/projects/gamspy-examples/models/mexss/mexss.py [ENGINE - INFO] Job status is queued... [ENGINE - INFO] Job status is queued... [ENGINE - INFO] Job status is queued... --- Job __AcZmhoFQyKC4_Jc_7zWpQ.gms Start 06/30/25 11:36:04 49.6.1 55d34574 LEX-LEG x86 64bit/Linux --- Applying: /home/jail/opt/gams/gmsprmun.txt /home/jail/opt/gams/gamsconfig.yaml --- GAMS Parameters defined LP cuopt [...] 
--- Generating LP model mexss --- __AcZmhoFQyKC4_Jc_7zWpQ.gms(459) 4 Mb --- Reset Solvelink = 1 --- 74 rows 78 columns 230 non-zeroes --- Range statistics (absolute non-zero finite values) --- RHS [min, max] : [ 5.600E-01, 4.011E+00] - Zero values observed as well --- Bound [min, max] : [ NA, NA] - Zero values observed as well --- Matrix [min, max] : [ 1.200E-01, 1.500E+02] --- Executing CUOPT (Solvelink=1): elapsed 0:00:00.002 GAMS/cuOpt link was built against cuOpt version: 25.10.00, git hash: 99e549ce0d4c67f1383187d719f7fd5a7fed33de Setting parameter log_file to /home/gfreeman/225a/cuopt.dat cuOpt version: 25.10.0, git hash: 99e549c, host arch: x86_64, device archs: 70-real,75-real,80-real,86-real,90a-real,100f-real,120a-real,120 CPU: Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz, threads (physical/logical): 13/26, RAM: 228.04 GiB CUDA 12.9, device: NVIDIA H100 80GB HBM3 (ID 0), VRAM: 79.11 GiB CUDA device UUID: 3c527f11-4f7e-0effffff95-5758-ffffff Solving a problem with 73 constraints, 77 variables (0 integers), and 225 nonzeros Problem scaling: Objective coefficents range: [1e+00, 1e+00] Constraint matrix coefficients range: [1e-01, 2e+02] Constraint rhs / bounds range: [0e+00, 4e+00] Variable bounds range: [0e+00, 0e+00] Third-party presolve is disabled, skipping Objective offset 0.000000 scaling_factor 1.000000 Running concurrent Dual simplex finished in 0.09 seconds, total time 2.81 Barrier finished in 2.95 seconds Iter Primal Obj. Dual Obj. Gap Primal Res. Dual Res. Time 0 +0.00000000e+00 +0.00000000e+00 0.00e+00 4.70e+00 2.00e+00 4.621s PDLP finished Concurrent time: 1.901s, total time 4.623s Solved with dual simplex Status: Optimal Objective: 5.38811204e+02 Iterations: 56 Time: 4.623s --- Reading solution for model mexss --- Executing after solve: elapsed 0:00:01.397 --- __AcZmhoFQyKC4_Jc_7zWpQ.gms(518) 4 Mb --- GDX File /home/gfreeman/__AcZmhoFQyKC4_Jc_7zWpQout.gdx *** Status: Normal completion --- Job __AcZmhoFQyKC4_Jc_7zWpQ.gms Stop 06/30/25 11:36:06 elapsed 0:00:01.398 [ENGINE - INFO] Results have been extracted to your working directory: C:\\Users\\ffian\\AppData\\Local\\Temp\\tmpizp77f7a. Key considerations when working with GAMS/cuOpt: cuOpt offers the prospect of considerable speedup, particularly for large-scale linear programming problems. However, discussions have arisen regarding the accuracy of solutions obtained by leveraging first-order methods like PDLP. In a recent webinar , Gurobi Co-Founder and Chairman Ed Rothberg noted that while PDLP can offer faster solve times, it may struggle to deliver high-precision solutions when compared to interior point methods. Julian Hall expanded on this in the HiGHS Newsletter 25.0 , where he presented experimental comparisons between cuPDLP-C and the HiGHS interior point solver. His findings showed that although PDLP was faster on several instances, some solutions exhibited notable deviations from true optimality.\nThese observations underscore a critical trade-off: while first-order methods can unlock substantial speedups, especially for very large models, users must weigh this against the level of accuracy required for their use case. Understanding these trade-offs and using solvers and algorithms thoughtfully is essential for making informed decisions when selecting a solver. Carefully comparing GAMS/cuOpt with solvers you\u0026rsquo;re already using can be a smart move. 
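A quick first check from GAMSPy is to solve the same instance with cuOpt and a reference solver and compare the reported objective values; a minimal sketch, assuming a GAMSPy model object mymodel built elsewhere and a local cuOpt installation as described above (a more rigorous check with Examiner follows below):\n# As noted in the local installation section, older GAMSPy versions may need:\n# gamspy.set_options({\u0026#34;SOLVER_VALIDATION\u0026#34;: 0})\nresults = {}\nfor solver in [\u0026#34;cuopt\u0026#34;, \u0026#34;highs\u0026#34;]:\n    mymodel.solve(solver=solver)\n    results[solver] = mymodel.objective_value\nprint(results)  # large deviations hint at the accuracy limits of first-order methods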
In the following section, we\u0026rsquo;ll briefly describe which GAMS model types cuOpt can solve, explore the solver options GAMS/cuOpt offers, and demonstrate how the GAMS tool Examiner can be effectively used to draw conclusions about solution quality.\nSupported problem types and limitations cuOpt can solve the following GAMS problem types:\nLP (Linear Programming) MIP (Mixed Integer Programming) RMIP (Relaxed Mixed Integer Programming) Note on Discrete Variables: The type of discrete variables is limited to binaries and integers.\ncuOpt parameters GAMS/cuOpt options can be set through a GAMS solver options file . The various GAMS/cuOpt options are listed here by category, with a few words about each to indicate its function.\nFor a more detailed documentation of the available parameters, consult the official NVIDIA cuOpt user guide .4\nGeneral Options:\nOption Description Type Default num_cpu_threads Controls the number of CPU threads used in the LP and MIP solvers (default GAMS Threads) integer 0 presolve Controls whether presolve is enabled. Presolve can reduce problem size and improve solve time. Enabled by default for MIP, disabled by default for LP boolean 0 (LP) 1 (MIP) dual_postsolve Controls whether dual postsolve is enabled. Only relevant for LPs boolean 1 (LP w/ presolve) 0 prob_read Reads a problem from an MPS file. When an instance is supplied via an MPS file no solution is reported back to GAMS string time_limit Controls the time limit in seconds after which the solver will stop and return the current solution (default GAMS ResLim) integer 0 Linear Programming Options:\nOption Description Type Default absolute_dual_tolerance Controls the absolute dual tolerance used in PDLP\u0026rsquo;s dual feasibility check double 0.0001 absolute_gap_tolerance Controls the absolute gap tolerance used in PDLP\u0026rsquo;s duality gap check double 0.0001 absolute_primal_tolerance Controls the absolute primal tolerance used in the primal feasibility check double 0.0001 crossover Controls whether PDLP should crossover to a basic solution after an optimal solution is found boolean 0 first_primal_feasible Controls whether PDLP should stop when the first primal feasible solution is found boolean 0 infeasibility_detection Controls whether PDLP should detect infeasibility boolean 0 iteration_limit Controls the iteration limit after which the solver will stop and return the current solution (default GAMS IterLim) integer maxint method Controls the method to solve the linear programming problem0: concurrent1: PDLP2: dual simplex3: barrier enumint 0 pdlp_solver_mode Controls the mode under which PDLP should operate0: stable11: stable22: methodical13: fast14: stable 3 enumint 4 per_constraint_residual Controls whether PDLP should compute the primal \u0026amp; dual residual per constraint instead of globally boolean 0 relative_dual_tolerance Controls the relative dual tolerance used in PDLP\u0026rsquo;s dual feasibility check double 0.0001 relative_gap_tolerance Controls the relative gap tolerance used in PDLP\u0026rsquo;s duality gap check double 0.0001 relative_primal_tolerance Controls the relative primal tolerance used in PDLP\u0026rsquo;s primal feasibility check double 0.0001 save_best_primal_so_far Controls whether PDLP should save the best primal solution so far boolean 0 strict_infeasibility Controls the strict infeasibility mode in PDLP boolean 0 Barrier Solver Options:\nOption Description Type Default folding Controls whether to fold the linear program-1: cuOpt decides0: disable folding1: 
force folding enumint -1 dualize Controls whether to dualize the linear program in presolve-1: cuOpt decides0: disable dualization1: force dualization enumint -1 ordering Controls the ordering algorithm used by cuDSS for sparse factorizations-1: cuOpt decides0: cuDSS default ordering1: AMD (Approximate Minimum Degree) ordering enumint -1 augmented Controls which linear system to solve in the barrier method.-1: cuOpt decides0: solve the ADAT system (normal equations)1: solve augmented system enumint -1 eliminate_dense_columns Controls whether to eliminate dense columns from the constraint matrix before solving boolean true cudss_deterministic Controls whether cuDSS operates in deterministic mode boolean false barrier_dual_initial_point controls the method used to compute the dual initial point for the barrier solver-1: cuOpt decides0: Use an initial point from a heuristic approach based on the paper “On Implementing Mehrotra’s Predictor–Corrector Interior-Point Method for Linear Programming” (SIAM J. Optimization, 1992) by Lustig, Martsten, Shanno1: Use an initial point from solving a least squares problem that minimizes the norms of the dual variables and reduced costs while statisfying the dual equality constraints enumint -1 Mixed Integer Linear Programming Options:\nOption Description Type Default mip_absolute_gap Controls the absolute tolerance used to terminate the MIP solve (default GAMS OptCA) double 1.00E-10 mip_absolute_tolerance Controls the MIP absolute tolerance double 0.0001 mip_heuristics_only Controls if only the GPU heuristics should be run boolean 0 mip_integrality_tolerance Controls the MIP integrality tolerance double 1.00E-05 mip_relative_gap Controls the relative tolerance used to terminate the MIP solve (default GAMS OptCR) double 1.00E-05 mip_relative_tolerance Controls the MIP relative tolerance double 0.0001 mip_scaling Controls if scaling should be applied to the MIP problem boolean 1 Solution quality No single algorithm is universally optimal for all problem types. Within cuOpt, PDLP is well suited for large-scale, memory-intensive problems due to its ability to avoid explicit matrix factorizations. In contrast, Dual Simplex is better suited for smaller to medium-sized problems, where it can provide high-quality basic solutions.\nUsers can customize operations based on specific problem characteristics and desired solution qualities, thanks to a wide array of available options. GAMS/Examiner further aids users by providing an unbiased assessment of solution quality, verifying the validity of a solver\u0026rsquo;s reported optimal solution by checking for primal feasibility, dual feasibility, and optimality.\nGAMS/Examiner is used \u0026ldquo;like a solver\u0026rdquo; and can be parameterized to use any solver available with GAMS and GAMSPy as a subsolver.\nRunning Examiner with cuOpt as subsolver The Examiner options subsolver and subsolveropt allow to specify a subsolver and activate its corresponding option file. For the Examiner example we utilize cuOpt options method 1 and crossover 0.\nThe following code snippets illustrate one way to set these solver options in both GAMS and GAMSPy.\nGAMS [...] 
* Write GAMS/Examiner option file examiner.opt and instruct to use cuOpt as subsolver with a GAMS/cuOpt option file* file opt_examiner / examiner.opt /; putclose opt_examiner \u0026#39;subsolver cuopt\u0026#39; / \u0026#39;subsolveropt 1\u0026#39; ; * Write GAMS/cuOpt option file cuopt.opt and instruct to use* * PDLP without crossover* file opt_cuopt / cuopt.opt /; putclose opt_cuopt \u0026#39;method 1\u0026#39; / \u0026#39;crossover 0\u0026#39; ; option solver =examiner, optFile=1; solve myModel maximizing myObjective using lp; [...] GAMSPy [...] #Write cuOpt option file in the container\u0026#39;s working directory myContainer.writeSolverOptions(\u0026#34;cuopt\u0026#34;, {\u0026#34;method\u0026#34;: 1, \u0026#34;crossover\u0026#34;: 0}) #set examiner as solver and define examiner options myModel.solve(solver=\u0026#34;examiner\u0026#34;, solver_options={\u0026#34;subsolver\u0026#34;: \u0026#34;cuopt\u0026#34;, \u0026#34;subsolveropt\u0026#34;: \u0026#34;1\u0026#34;}) [...] When solving the indus89 model from the GAMS model library using Examiner and cuOpt, Examiner analyzes the solver\u0026rsquo;s solution and generates the following report. Note that we asked cuOpt to solve this problem with the default tolerance settings of 1e-4, which are likely to generate constraint violations due to the accuracy level resulting from this tolerance value. To reduce or completely eliminate these violations, we will also discuss additional steps and features provided by cuOpt below.\n[...] Status: Optimal Objective: 1.15093789e+05 Iterations: 673080 Time: 42.146s Subsolver cuopt returns modstat 1, solstat 1. SolvPoint - solver-provided levels \u0026amp; marginals: Scale range (observed but not applied): [1,999999] Maximum element: row=objn, col=artwater(nwfp,fresh,jan): Aij = 999999 Primal variable bounds satisfied (tol = 1e-06) Dual variable bounds satisfied (tol = 1e-06) Primal infeasible w.r.t. constraints (tol = 1e-06): Max violation: watalcz.l(srws,saline,sep): 0 \u0026lt;= 1.70757 \u0026lt;= 0 2-Norm of violation: 5.54794 Dual constraints satisfied (tol = 1e-06) Primal CS is nonzero (tol = 1e-07): Max violation: canaldiv.l(44-ful,apr): 0.1021 \u0026lt;= 0.106234 \u0026lt;= 0.894 canaldiv.m(44-ful,apr): -INF \u0026lt;= 2.561 \u0026lt;= +INF Dual CS is nonzero (tol = 1e-07): Max violation: watalcz.m(srws,saline,sep): -INF \u0026lt;= 0.00933685 \u0026lt;= +INF watalcz.l(srws,saline,sep): 0 \u0026lt;= 1.70757 \u0026lt;= 0 Model attributes OK (tol = 1e-06) As expected, the returned primal solution reveals significant constraint violations (max violation 1.70757 and 2-Norm of violations 5.54794). If this is unacceptable, mitigation strategies could for example be to reduce the relative primal tolerance by setting GAMS/cuOpt option \u0026lsquo;relative_primal_tolerance 1e-6\u0026rsquo; (default 1e-4) or to enable crossover by setting GAMS/cuOpt option \u0026lsquo;crossover 1\u0026rsquo;.\nWith relative_primal_tolerance 1e-6 Examiner returns\n[...] Status: Optimal Objective: 1.14889661e+05 Iterations: 898280 Time: 61.183s Subsolver cuopt returns modstat 1, solstat 1. SolvPoint - solver-provided levels \u0026amp; marginals: Scale range (observed but not applied): [1,999999] Maximum element: row=objn, col=artwater(nwfp,fresh,jan): Aij = 999999 Primal variable bounds satisfied (tol = 1e-06) Dual variable bounds satisfied (tol = 1e-06) Primal infeasible w.r.t. 
constraints (tol = 1e-06): Max violation: demnat.l(prw,buff-milk): 0 \u0026lt;= -0.057 \u0026lt;= 1e+299 2-Norm of violation: 0.0958988 Dual constraints satisfied (tol = 1e-06) Primal CS is nonzero (tol = 1e-07): Max violation: x.l(psw,saline,sc-mill,bullock,standard,standard): 0 \u0026lt;= 216.779 \u0026lt;= +INF x.m(psw,saline,sc-mill,bullock,standard,standard): -INF \u0026lt;= -0.158928 \u0026lt;= 0 Dual CS is nonzero (tol = 1e-07): Max violation: demnat.m(prw,basmati): -INF \u0026lt;= -2.4166 \u0026lt;= 0 demnat.l(prw,basmati): 0 \u0026lt;= 0.0619937 \u0026lt;= +INF Model attributes OK (tol = 1e-06) As expected, we can observe that a tighter tolerance results in a smaller violation but also in a higher run time.\nWith \u0026lsquo;crossover 1\u0026rsquo; all violations are gone and Examiner returns\n[...] Crossover time 0.36 seconds Total time 42.69 seconds Crossover status Optimal Subsolver cuopt returns modstat 1, solstat 1. SolvPoint - solver-provided levels \u0026amp; marginals: Scale range (observed but not applied): [1,999999] Maximum element: row=objn, col=artwater(nwfp,fresh,jan): Aij = 999999 Primal variable bounds satisfied (tol = 1e-06) Dual variable bounds satisfied (tol = 1e-06) Primal constraints satisfied (tol = 1e-06) Dual constraints satisfied (tol = 1e-06) Primal CS is zero (tol = 1e-07) Dual CS is zero (tol = 1e-07) Model attributes OK (tol = 1e-06) Note: All experiments were conducted on an AWS EC g6.4xlarge instance, featuring an NVIDIA L4 Tensor Core GPU (even though this is not officially supported). Other instances supporting NVIDIA GPUs V100, A100, H100, and H200 are available upon request. This post aims to guide users on leveraging GAMS/cuOpt for their own experiments, rather than offering an exhaustive performance benchmark.\nConclusion The new GAMS/cuOpt link empowers users to integrate cuOpt directly into their existing GAMS and GAMSPy models. Whether through local installation or the scalable GAMS Engine SaaS cloud solution, users can leverage cutting-edge GPU performance without the burden of major on-premise GPU investments.\nCrucially, GAMS\u0026rsquo; robust diagnostic tools—such as Examiner—are integral to this process, providing essential checks on solution feasibility and quality. This ensures that the benefits of GPU acceleration are complemented by the precision required for practical applications.\nWe invite you to explore the potential of cuOpt with GAMS and GAMSPy for your most challenging optimization tasks. To get started with cuOpt (or any other solver) on GAMS Engine SaaS, request a free test account by contacting sales@gams.com .\nYour feedback and experiences are vital as we continue advancing large-scale decision-making. Please reach out to support@gams.com or join forum.gams.com with any questions, issues, or success stories you’d like to share.\nThe cuOpt VRP Solver is not available through GAMS and GAMSPy\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nRun gamspy show base to identify the GAMS system directory for GAMSPy.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nFor GAMS versions prior to 51.2.0, please add a “dummy” solverConfig section to the gamsconfig.yaml file:\n[...] solverConfig: - cuopt: minVersion: 49 scriptName: dummy executableName: dummy modelTypes: - LP - MIP - RMIP \u0026#160;\u0026#x21a9;\u0026#xfe0e; Note that GAMS/cuOpt options omit the \u0026ldquo;CUOPT_\u0026rdquo; prefix from the original parameter name. 
For example, \u0026ldquo;CUOPT_METHOD\u0026rdquo; in cuOpt documentation becomes simply \u0026lsquo;method\u0026rsquo; in a GAMS/cuOpt solver options file.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","excerpt":"For a long time, GPUs were essential for various AI applications and other high-performance compute applications but had limited impact on mathematical optimization. That’s changing.","ref":"/blog/2025/09/gpu-accelerated-optimization-with-gams-and-nvidia-cuopt/","title":"GPU-Accelerated Optimization with GAMS and NVIDIA cuOpt"},{"body":" Fine Tuning with Custom Code In the first part of this tutorial we went from a GAMSPy model to a first basic GAMS MIRO application for this gallery example. In the second part we got familiar with the Configuration Mode. Nevertheless, sometimes we want to customize our application even more. MIRO supports this via custom code, specifically in R, which allows us to go beyond the standard visualizations.\nFine Tuning with Custom Code Custom renderer Renderer Structure Placeholder Function Rendering Function A more complex renderer Custom Dashboard Dashboard Comparison with Custom Code Custom Widget From Custom Renderer To Custom Widget Custom Import and Export: Streamlining Your Data Workflow Custom Importer Custom Exporter Deployment Key Takeaways Conclusion Reference Repository Custom renderer We will start by creating a simple renderer that shows the BESS storage level at each hour. Up to this point, we only see how much power is charged or discharged (battery_power). The storage level itself can be computed by taking the cumulative sum of battery_power. In R, this is easily done with cumsum() .\nNote that if your data transformation is a simple function (e.g., a single cumulative sum), you could (and should!) do it directly in Python by creating a new output parameter, eliminating the need for a custom renderer, and directly use the pivot tool again for visualization. Here we use this example mainly to introduce custom renderers in MIRO.\nRenderer Structure First, we need to understand what the general structure of a custom renderer is in MIRO. For this we will closely follow the documentation . MIRO leverages R Shiny under the hood, which follows a two-function approach:\nPlaceholder function (server output): Where we specify the UI elements (plots, tables, etc.) and where they will be rendered. Rendering function: Where we do the data manipulation, define the reactive logic, and produce the final display. For more background on Shiny, see R Shiny\u0026rsquo;s official website .\nA typical MIRO custom renderer follows this template (using battery_power as an example):\n# Placeholder function must end with \u0026#34;Output\u0026#34; mirorenderer_\u0026lt;lowercaseSymbolName\u0026gt;Output \u0026lt;- function(id, height = NULL, options = NULL, path = NULL){ ns \u0026lt;- NS(id) } # The actual rendering must be prefixed with the keyword \u0026#34;render\u0026#34; renderMirorenderer_\u0026lt;lowercaseSymbolName\u0026gt; \u0026lt;- function(input, output, session, data, options = NULL, path = NULL, rendererEnv = NULL, views = NULL, outputScalarsFull = NULL, ...){ } If you are not using the Configuration Mode, you must save these functions in a file named mirorenderer_\u0026lt;lowercaseSymbolName\u0026gt;.R inside the renderer_\u0026lt;model_name\u0026gt; directory. However, if you are using the Configuration Mode, you can add the custom renderer directly under the Graphs by setting its charting type to Custom renderer. 
The Configuration Mode will automatically create the folder structure and place your R code in the correct location when you save.\nPlaceholder Function The placeholder function creates the UI elements Shiny will render. Shiny requires each element to have a unique ID, managed via the NS() function, which appends a prefix to avoid naming conflicts.\nHere\u0026rsquo;s how it works in practice:\nDefine the prefix function: First, call NS() with the renderer\u0026rsquo;s ID to create a function that we will store in a variable ns. Use the prefix function on elements: Whenever you define a new input or output element, prefix its ID with ns(). This will give each element a unique prefixed ID. In our first example, we only want to draw a single plot of the BESS storage level. Hence, we define one UI element:\n# Placeholder function mirorenderer_battery_powerOutput \u0026lt;- function(id, height = NULL, options = NULL, path = NULL) { ns \u0026lt;- NS(id) plotOutput(ns(\u0026#34;cumsumPlot\u0026#34;)) } Note that instead of writing plotOutput(\u0026quot;cumsumPlot\u0026quot;, ...), we use plotOutput(ns(\u0026quot;cumsumPlot\u0026quot;), ...) to ensure that the cumsumPlot is uniquely identified throughout the application.\nWe only have one plot here, but you can create as many UI elements as you need. To get a better overview what is possible check the R Shiny documentation, e.g. their section on Arrange Elements .\nRendering Function Next, we implement the actual renderer, which handles data manipulation and visualization. We have defined an output with the output function plotOutput() . Now we need something to render inside. For this, we assign renderPlot() to an output object inside the rendering function, which is responsible for generating the plot. Here\u0026rsquo;s an overview:\nOutput functions: These functions determine how the data is displayed, such as plotOutput(). Rendering functions: These are functions in Shiny that transform your data into visual elements, such as plots, tables, or maps. For example, renderPlot() is a reactive plot suitable for assignment to an output slot. Now we need a connection between our placeholder and the renderer. To do this, we look at the arguments the rendering function gets\ninput: Access to Shiny inputs, i.e. elements that generate data, such as sliders, text input,\u0026hellip; (input$hour). output: Controls elements that visualize data, such as plots, maps, or tables (output$cumsumPlot). session: Contains user-specific information. data: The data for the visualization is specified as an R tibble . If you\u0026rsquo;ve specified multiple datasets in your MIRO application, the data will be a named list of tibbles. Each element in this list corresponds to a GAMS symbol (data$battery_power). For more information about the other options, see the documentation .\nWe will now return to the Configuration Mode and start building our first renderer. Hopefully you have already added plotOutput(ns(\u0026quot;cumsumPlot\u0026quot;)) to the placeholder function. To get a general idea of what we are working with, let us first take a look at the data by simply printing it (print(data)) inside the renderer. 
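If you want to see where that call goes, a minimal sketch of the rendering function at this stage (matching the template above, with no output produced yet) could look like this:
renderMirorenderer_battery_power <- function(input, output, session, data,
                                             options = NULL, path = NULL,
                                             rendererEnv = NULL, views = NULL,
                                             outputScalarsFull = NULL, ...) {
  # no UI element is filled yet; we only inspect the incoming data in the console
  print(data)
}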
If we now press Update, we still won't see anything, because no rendering has been done yet, but if we look at the console, we will see:
# A tibble: 24 x 6
   j      level marginal lower upper scale
   <chr>  <dbl>    <dbl> <dbl> <dbl> <dbl>
 1 hour00   -50        0  -Inf   Inf     1
 2 hour01   -80        0  -Inf   Inf     1
 3 hour02   -90        0  -Inf   Inf     1
 4 hour03  -100        0  -Inf   Inf     1
 5 hour04   -30        0  -Inf   Inf     1
 6 hour05   -10        0  -Inf   Inf     1
 7 hour06    10        0  -Inf   Inf     1
 8 hour07    40        0  -Inf   Inf     1
 9 hour08   -40        0  -Inf   Inf     1
10 hour09     0        0  -Inf   Inf     1
# i 14 more rows
Since we have not specified any additional datasets so far, data directly contains the variable battery_power, which is the GAMS symbol we put in the mirorenderer name. For our plot of the storage levels we now need the values from the level column, which we can access in R with data$level. More on subsetting tibbles can be found here .
Let's now finally make our first plot! First we need to calculate the data we want to plot, which we store in storage_level. The values in battery_power are from the city's perspective; negative means charging the BESS, positive means discharging. We therefore negate the cumulative sum to get the actual storage level. We use the standard R barplot() for visualization, but any plotting library can be used. Finally, we just need to pass this reactive plot to a render function and assign it to the appropriate output variable. The code should look like this:
storage_level <- -cumsum(data$level)
output$cumsumPlot <- renderPlot({ barplot(storage_level) })
If you press Update again, you should get this:
Now let's make this graph prettier. Aside from adding a title, labels, etc., take a look at the y-axis. As you can see, it doesn't go all the way to the top. To change this, we could set it to the maximum value of our data. But what might be more interesting is to compare the current storage value with the maximum possible. As you may remember, this maximum storage level is also part of our optimization. So now we need to add data from other model symbols to our renderer. First go to Advanced options; there, clicking on Additional datasets to communicate with the custom renderer lists all the symbols we can add to the renderer. Since we need the data from the scalar variable battery_storage, we add "_scalarsve_out". Going back to the Main tab, we now need to change how we access the data, since data is no longer a single tibble but a named list of tibbles. In the example below we use filter() and pull() to extract the desired data. Note that %>% is the pipe operator, which passes the result of an expression or function as the input to the next function in a sequence, improving the readability and flow of your code.
max_storage <- data[["_scalarsve_out"]] %>% filter(scalar == "battery_storage") %>% pull(level)
We will use battery_storage to add a horizontal line with abline() .
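As a small sketch (assuming storage_level and max_storage are computed as shown above), the reference line is drawn right after the bars; the same call appears in the full code below:
output$cumsumPlot <- renderPlot({
  barplot(storage_level)
  # horizontal reference line at the maximum storage capacity
  abline(h = max_storage, col = "red", lwd = 2, lty = 2)
})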
Adding some more layout settings leads us to:\nClick to see the full code of the renderer mirorenderer_battery_powerOutput \u0026lt;- function(id, height = NULL, options = NULL, path = NULL) { ns \u0026lt;- NS(id) plotOutput(ns(\u0026#34;cumsumPlot\u0026#34;)) } renderMirorenderer_battery_power \u0026lt;- function(input, output, session, data, options = NULL, path = NULL, rendererEnv = NULL, views = NULL, outputScalarsFull = NULL, ...) { battery_power \u0026lt;- data$battery_power$level storage_level \u0026lt;- -cumsum(battery_power) max_storage \u0026lt;- data[[\u0026#34;_scalarsve_out\u0026#34;]] %\u0026gt;% filter(scalar == \u0026#34;battery_storage\u0026#34;) %\u0026gt;% pull(level) output$cumsumPlot \u0026lt;- renderPlot({ barplot(storage_level, col = \u0026#34;lightblue\u0026#34;, ylab = \u0026#34;Energy Capacity in kWh\u0026#34;, names.arg = data$battery_power$j, las = 2, main = \u0026#34;Storage level of the BESS\u0026#34; ) grid() abline(h = max_storage, col = \u0026#34;red\u0026#34;, lwd = 2, lty = 2) }) } By clicking Save, the Configuration Mode generates the file structure and JSON configuration automatically. Again, if you are not using the Configuration Mode, you will need to add this manually. The template can be found in the documentation .\nCongratulations you created your first renderer!\nNow that we have created our first small custom renderer, we can start working on some more complex renderers.\nA more complex renderer We are going to make a simple Sankey diagram for our power flow. We will base this renderer on our report_output variable which contains the three power variables and the load demand. It will show the current power flow at a given hour. To change the hour we will add a slider. This results in the following placeholder function:\nmirorenderer_report_outputOutput \u0026lt;- function(id, height = NULL, options = NULL, path = NULL) { ns \u0026lt;- NS(id) tagList( sliderInput(ns(\u0026#34;hour\u0026#34;), \u0026#34;Hour:\u0026#34;, min = 0, max = 23, value = 0, step = 1, ), plotly::plotlyOutput(ns(\u0026#34;sankey\u0026#34;), height = \u0026#34;100%\u0026#34;) ) } Since we just want both elements on top of each other, we use a tagList() . First we have our slider, which we give an id, again using the ns() function to prefix it. We set min, max, initial value and stepsize. Second, we have a plot for which we use plotlyOutput() , since we will be using the plotly library to generate the Sankey plot. Because plotly is not part of MIRO\u0026rsquo;s core, we must add the package to our environment. This can be done in the same way as the additional data in the Advanced options menu. This also means that we need to specify the package name explicitly using the double colon operator. Again, if you are not using the Configuration Mode, follow the documentation .\nNow that we have some placeholders, we need to fill them. Let us begin to set up our Sankey diagram. First, we need to decide which nodes we need. We will add one for the BESS, the generators, the external grid, and the city. You need to remember the order so that you can assign the links correctly later.\nnode = list( label = c(\u0026#34;BESS\u0026#34;, \u0026#34;Generators\u0026#34;, \u0026#34;External Grid\u0026#34;, \u0026#34;City\u0026#34;), color = c(\u0026#34;blue\u0026#34;, \u0026#34;green\u0026#34;, \u0026#34;red\u0026#34;, \u0026#34;black\u0026#34;), pad = 15, thickness = 20, line = list( color = \u0026#34;black\u0026#34;, width = 0.5 ) ) With the nodes defined we need to set the links. 
Each link has a source, a target and a value. The possible sources and targets are defined by our given nodes. We will define lists for all three and fill them based on our data.\nlink = list( source = sankey_source, target = sankey_target, value = sankey_value ) To be able to display the power value of the correct time point we need to get the hour from our slider, which we get from our input parameter.\nhour_to_display \u0026lt;- sprintf(\u0026#34;hour%02d\u0026#34;, input$hour) Note that we use sprintf() to get the same string we use to represent the hour in our GAMS symbols, so that we can filter the data for the correct hour.\nHere we need to be careful: input is a reactive variable, it automatically updates the diagram when the slider is updated. This means we need to put it in a reactive context. For example, in R you can use observe() . However, since our rendering depends on only one input and only one output, we keep it simple and place all our calculations inside renderPlotly(). We can do this because rendering functions are also observers. If you want to learn more about R Shiny\u0026rsquo;s reactive expressions, you can find a more detailed tutorial here .\nWith that figured out, we need to extract the correct power values. First we need to select the correct power type, then the current hour and add it to the links if it is not zero. Because GAMS doesn\u0026rsquo;t store zeros, we need to check if a row exists for each hour-power combination. Here you see how to do it for the battery_power:\nbattery_to_display \u0026lt;- filter(data, power_output_header == \u0026#34;battery\u0026#34;) %\u0026gt;% filter(j == hour_to_display) Click to see the other two power sources gen_to_display \u0026lt;- filter(data, power_output_header == \u0026#34;generators\u0026#34;) %\u0026gt;% filter(j == hour_to_display) extern_to_display \u0026lt;- filter(data, power_output_header == \u0026#34;external_grid\u0026#34;) %\u0026gt;% filter(j == hour_to_display) Now that we have our values, we need to add them to our link list. But remember to make sure that the value exists (here using dim() ), and for the BESS we need to keep in mind that we can have positive and negative power flows, either from the city to the BESS or the other way around! Here is a way to add the BESS links:\n# go over each source and check if they exist and if so add the corresponding link if (dim(battery_to_display)[1] != 0) { # for the battery need to check if is charged, or discharged if (battery_to_display[[\u0026#34;value\u0026#34;]] \u0026gt; 0) { sankey_source \u0026lt;- c(sankey_source, 0) sankey_target \u0026lt;- c(sankey_target, 3) sankey_value \u0026lt;- c(sankey_value, battery_to_display[[\u0026#34;value\u0026#34;]]) } else { sankey_source \u0026lt;- c(sankey_source, 3) sankey_target \u0026lt;- c(sankey_target, 0) sankey_value \u0026lt;- c(sankey_value, -battery_to_display[[\u0026#34;value\u0026#34;]]) } } Add similar code snippets for the remaining two power sources.\nClick to see the other two power sources if (dim(gen_to_display)[1] != 0) { sankey_source \u0026lt;- c(sankey_source, 1) sankey_target \u0026lt;- c(sankey_target, 3) sankey_value \u0026lt;- c(sankey_value, gen_to_display[[\u0026#34;value\u0026#34;]]) } if (dim(extern_to_display)[1] != 0) { sankey_source \u0026lt;- c(sankey_source, 2) sankey_target \u0026lt;- c(sankey_target, 3) sankey_value \u0026lt;- c(sankey_value, extern_to_display[[\u0026#34;value\u0026#34;]]) } With this, we have all the necessary components to render the Sankey diagram. 
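To see how these pieces fit together, here is a condensed sketch of the rendering step, with the node styling and the existence checks omitted (the complete renderer is shown further below):
output$sankey <- plotly::renderPlotly({
  # sankey_source, sankey_target and sankey_value are filled as described above
  plotly::plot_ly(
    type = "sankey",
    orientation = "h",
    node = list(label = c("BESS", "Generators", "External Grid", "City")),
    link = list(source = sankey_source, target = sankey_target, value = sankey_value)
  )
})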
We add one more small feature. Sliders can be animated quite easily in R Shiny. All you need to do is add an animate option to the sliderInput() function:\nanimate = animationOptions( interval = 1000, loop = FALSE, playButton = actionButton(\u0026#34;play\u0026#34;, \u0026#34;Play\u0026#34;, icon = icon(\u0026#34;play\u0026#34;), style = \u0026#34;margin-top: 10px;\u0026#34;), pauseButton = actionButton(\u0026#34;pause\u0026#34;, \u0026#34;Pause\u0026#34;, icon = icon(\u0026#34;pause\u0026#34;), style = \u0026#34;margin-top: 10px;\u0026#34;) ) Now we can inspect the hourly power flow between generators, the external grid, BESS, and the city. The slider animates this flow over time.\nClick to see the code of the full renderer mirorenderer_report_outputOutput \u0026lt;- function(id, height = NULL, options = NULL, path = NULL) { ns \u0026lt;- NS(id) tagList( sliderInput(ns(\u0026#34;hour\u0026#34;), \u0026#34;Hour:\u0026#34;, min = 0, max = 23, value = 0, step = 1, animate = animationOptions( interval = 1000, loop = FALSE, playButton = actionButton(\u0026#34;play\u0026#34;, \u0026#34;Play\u0026#34;, icon = icon(\u0026#34;play\u0026#34;), style = \u0026#34;margin-top: 10px;\u0026#34;), pauseButton = actionButton(\u0026#34;pause\u0026#34;, \u0026#34;Pause\u0026#34;, icon = icon(\u0026#34;pause\u0026#34;), style = \u0026#34;margin-top: 10px;\u0026#34;) ) ), # since plotly is a custom package, it is not attached by MIRO to avoid name collisions # Thus, we have to prefix functions exported by plotly via the \u0026#34;double colon operator\u0026#34;: # plotly::renderPlotly plotly::plotlyOutput(ns(\u0026#34;sankey\u0026#34;), height = \u0026#34;100%\u0026#34;) ) } renderMirorenderer_report_output \u0026lt;- function(input, output, session, data, options = NULL, path = NULL, rendererEnv = NULL, views = NULL, outputScalarsFull = NULL, ...) 
{ # since renderPlotly (or any other render function) is also an observer we are already in an reactive context output$sankey \u0026lt;- plotly::renderPlotly({ hour_to_display \u0026lt;- sprintf(\u0026#34;hour%02d\u0026#34;, input$hour) # start with empty lists for the sankey links sankey_source \u0026lt;- list() sankey_target \u0026lt;- list() sankey_value \u0026lt;- list() # since the GAMS output is melted, first need to extract the different power sources battery_to_display \u0026lt;- filter(data, power_output_header == \u0026#34;battery\u0026#34;) %\u0026gt;% filter(j == hour_to_display) gen_to_display \u0026lt;- filter(data, power_output_header == \u0026#34;generators\u0026#34;) %\u0026gt;% filter(j == hour_to_display) extern_to_display \u0026lt;- filter(data, power_output_header == \u0026#34;external_grid\u0026#34;) %\u0026gt;% filter(j == hour_to_display) # go over each source and check if they exist and if so add the corresponding link if (dim(battery_to_display)[1] != 0) { # for the battery need to check if is charged, or discharged if (battery_to_display[[\u0026#34;value\u0026#34;]] \u0026gt; 0) { sankey_source \u0026lt;- c(sankey_source, 0) sankey_target \u0026lt;- c(sankey_target, 3) sankey_value \u0026lt;- c(sankey_value, battery_to_display[[\u0026#34;value\u0026#34;]]) } else { sankey_source \u0026lt;- c(sankey_source, 3) sankey_target \u0026lt;- c(sankey_target, 0) sankey_value \u0026lt;- c(sankey_value, -battery_to_display[[\u0026#34;value\u0026#34;]]) } } if (dim(gen_to_display)[1] != 0) { sankey_source \u0026lt;- c(sankey_source, 1) sankey_target \u0026lt;- c(sankey_target, 3) sankey_value \u0026lt;- c(sankey_value, gen_to_display[[\u0026#34;value\u0026#34;]]) } if (dim(extern_to_display)[1] != 0) { sankey_source \u0026lt;- c(sankey_source, 2) sankey_target \u0026lt;- c(sankey_target, 3) sankey_value \u0026lt;- c(sankey_value, extern_to_display[[\u0026#34;value\u0026#34;]]) } # finally generate the sankey diagram using plotly plotly::plot_ly( type = \u0026#34;sankey\u0026#34;, orientation = \u0026#34;h\u0026#34;, node = list( label = c(\u0026#34;BESS\u0026#34;, \u0026#34;Generators\u0026#34;, \u0026#34;External Grid\u0026#34;, \u0026#34;City\u0026#34;), color = c(\u0026#34;blue\u0026#34;, \u0026#34;green\u0026#34;, \u0026#34;red\u0026#34;, \u0026#34;black\u0026#34;), pad = 15, thickness = 20, line = list( color = \u0026#34;black\u0026#34;, width = 0.5 ) ), link = list( source = sankey_source, target = sankey_target, value = sankey_value ) ) }) } Hopefully you now have a better idea of what is possible with custom renderers and how to easily use the Configuration Mode to implement them.\nCustom Dashboard Now that we know so much more about custom renderers, let us embed custom code in our dashboard . We will add the simple renderer for the storage level of the BESS. We follow the documentation closely for this. To add custom code to the renderer, we no longer just use json, but we use the dashboard as a custom renderer. The dashboard renderer has been prepared to do this with minimal effort.\nDownload the latest dashboard renderer file from the GAMS MIRO repository on GitHub and put it with the other renderers in your renderer_\u0026lt;model_name\u0026gt; directory.\nIn the dashboard.R file, make the following changes:\n- dashboardOutput \u0026lt;- function(id, height = NULL, options = NULL, path = NULL) { + mirorenderer__scalarsve_outOutput \u0026lt;- function(id, height = NULL, options = NULL, path = NULL) { ns \u0026lt;- NS(id) ... 
} - renderDashboard \u0026lt;- function(id, data, options = NULL, path = NULL, rendererEnv = NULL, views = NULL, outputScalarsFull = NULL, ...) { + renderMirorenderer__scalarsve_out \u0026lt;- function(input, output, session, data, options = NULL, path = NULL, rendererEnv = NULL, views = NULL, outputScalarsFull = NULL, ...) { - moduleServer( - id, - function(input, output, session) { ns \u0026lt;- session$ns ... # These are the last three lines of code in the file - } -) } Remember that the dashboard is rendered for the symbol \u0026quot;_scalarsve_out\u0026quot;. As with the other renderers, be sure to replace it with the symbol name you want to render if you create a dashboard for a different symbol.\nIn the dataRendering section of the \u0026lt;model_name\u0026gt;.json file change the \u0026quot;outType\u0026quot; of the symbol to render from \u0026quot;dashboard\u0026quot; to \u0026quot;mirorenderer_\u0026lt;symbolname\u0026gt;\u0026quot; { \u0026#34;dataRendering\u0026#34;: { \u0026#34;_scalarsve_out\u0026#34;: { - \u0026#34;outType\u0026#34;: \u0026#34;dashboard\u0026#34;, + \u0026#34;outType\u0026#34;: \u0026#34;mirorenderer__scalarsve_out\u0026#34;, \u0026#34;additionalData\u0026#34;: [...], \u0026#34;options\u0026#34;: {...} } } } Now you can restart the application and have the same renderer as before, only now we can extend it with custom code!\nTo add custom code, we first need to decide where to put it. Here we will add it as a second element to the battery_power view. Note that the given title will be ignored by the custom code, so we will leave it empty.\n\u0026#34;dataViews\u0026#34;: { \u0026#34;battery_power\u0026#34;: [ {\u0026#34;BatteryTimeline\u0026#34;: \u0026#34;Charge/Discharge of the BESS\u0026#34;}, {\u0026#34;BatteryStorage\u0026#34;: \u0026#34;\u0026#34;} ], ... } In the corresponding \u0026quot;dataViewsConfig\u0026quot; section we now assign an arbitrary string, e.g. \u0026quot;BatteryStorage\u0026quot;: \u0026quot;customCode\u0026quot;, instead of a view configuration as before:\n\u0026#34;dataViewsConfig\u0026#34;: { \u0026#34;BatteryStorage\u0026#34;: \u0026#34;customCode\u0026#34;, ... } Finally, we can add the custom code. Recall that in our custom renderers, we always defined placeholders with unique IDs that were then assembled into the output variable. The view ID we just added (\u0026quot;BatteryStorage\u0026quot;) will also be added to the output variable. Now we just add our already implemented renderer to the render function (renderMirorenderer__scalarsve_out). The only thing we have to change is the output to which we assign the plot: output[[\u0026quot;BatteryStorage\u0026quot;]] \u0026lt;- renderUI(...). And remember that we are no longer in our renderer for the symbol battery_power, so battery_power is now additional data that we access with data$battery_power. However, since we have already added additional data to the renderer before, the code does not change. Just keep in mind that if the renderer you\u0026rsquo;re adding didn\u0026rsquo;t have additional data before, you\u0026rsquo;ll have to change how you access the data! To keep track, we add the new output assignment at the end of the dashboard renderer, but as long as it\u0026rsquo;s inside the renderer, the order doesn\u0026rsquo;t matter.\nrenderMirorenderer__scalarsve_out \u0026lt;- function(input, output, session, data, options = NULL, path = NULL, rendererEnv = NULL, views = NULL, outputScalarsFull = NULL, ...) { ... 
battery_power <- data$battery_power$level
storage_level <- -cumsum(battery_power)
max_storage <- data[["_scalarsve_out"]] %>% filter(scalar == "battery_storage") %>% pull(level)
# corresponding to the dataView "BatteryStorage"
output[["BatteryStorage"]] <- renderUI({
  tagList(
    renderPlot({
      barplot(storage_level, col = "lightblue", ylab = "Energy Capacity in kWh", names.arg = data$battery_power$j, las = 2, main = "Storage level of the BESS")
      grid()
    })
  )
})
}
In the same way, you can create a view that's entirely made up of custom code or include as many custom code elements as you like.
Dashboard Comparison with Custom Code You can also introduce custom renderers to the dashboard comparison. Since this is quite similar to what we just did, we won't go over it again here. If you want to include the custom renderer, simply follow the documentation . Just note that you cannot directly copy your old custom renderer; you'll need to adapt it to the new data structure, which now includes the scenario dimension!
Custom Widget Let's take a closer look at another aspect of MIRO customization - creating a custom widget. Until now, our custom renderers have been for data visualization only. But for input symbols, we can also use custom code that allows you to produce input data that is sent to your model. This means that the input data for your GAMS(Py) model can be generated by interactively modifying a chart, table, or other type of renderer.
In MIRO, each symbol tab provides both a tabular and a graphical data representation by default. If you have a custom renderer for an input symbol, you would typically switch to the graphical view to see it. However, modifying the actual data to be sent to the model requires using the tabular view. In the following example, we will write a custom input widget that replaces the default tabular view for an input symbol. Since we have complete control over what to display in this custom widget, we can include an editable table for data manipulation as well as a visualization that updates whenever the table data changes - providing a more seamless and interactive way to prepare input for your model.
Currently, the Configuration Mode does not offer direct support for implementing custom input widgets, but we can create them the same way we create a custom renderer and then make a few changes to convert it into a widget.
First, we develop a placeholder function that displays both a plot and a data table. For the table we will use the DT package, R's interface to the DataTables JavaScript library. To do this, you must first add DT to the additional packages and prefix the corresponding functions in the code. To define the output we use DT::DTOutput():
mirorenderer_timewise_load_demand_and_cost_external_grid_dataOutput <- function(id, height = NULL, options = NULL, path = NULL) {
  ns <- NS(id)
  fluidRow(
    column(width = 12, plotOutput(ns("timeline"))),
    column(width = 12, DT::DTOutput(ns("table")))
  )
}
Before we make it interactive, let's fill in our placeholders. 
For the table, we assign DT::renderDT(), in which we define the DT::datatable() and round our cost values with DT::formatRound():
output$table <- DT::renderDT({
  DT::datatable(data, editable = TRUE, rownames = FALSE, options = list(scrollX = TRUE)) %>%
    DT::formatRound(c("cost_external_grid"), digits = 2L)
})
Here, editable = TRUE is crucial - it allows users to modify the table entries. For the plot, we need something like this:
output$timeline <- renderPlot({ ... })
We have two variables measured in different units (load_demand in W and cost_external_grid in $), and we want to display them on the same plot. Take a look at the remaining code to see how it might be structured. One approach is to use par() and axis() to overlay two y-axes.
Click to see the code
renderMirorenderer_timewise_load_demand_and_cost_external_grid_data <- function(input, output, session, data, options = NULL, path = NULL, rendererEnv = NULL, views = NULL, outputScalarsFull = NULL, ...) {
  # return the render for the placeholder "table"
  output$table <- DT::renderDT({
    DT::datatable(data, editable = TRUE, rownames = FALSE, options = list(scrollX = TRUE)) %>%
      DT::formatRound(c("cost_external_grid"), digits = 2L)
  })
  # return the render for the placeholder "timeline"
  output$timeline <- renderPlot({
    # first extract all the needed information
    x <- data[["j"]]
    y1 <- data[["load_demand"]]
    y2 <- data[["cost_external_grid"]]
    # set the margin for the graph
    par(mar = c(5, 4, 4, 5))
    # first, plot the load demand
    plot(y1, type = "l", col = "green", ylab = "Load demand in W", lwd = 3, xlab = "", xaxt = "n", las = 2)
    points(y1, col = "green", pch = 16, cex = 1.5)
    grid()
    # add second plot on the same graph for the external cost
    par(new = TRUE) # overlay a new plot
    plot(y2, type = "l", col = "blue", axes = FALSE, xlab = "", ylab = "", lwd = 3)
    points(y2, col = "blue", pch = 16, cex = 1.5)
    # add a new y-axis on the right for the second line
    axis(side = 4, las = 2)
    mtext("External grid cost in $", side = 4, line = 3)
    grid()
    # add the x values to the axis
    axis(side = 1, at = 1:length(x), labels = x, las = 2)
    legend("topleft", legend = c("Load demand", "External grid cost"), col = c("green", "blue"), lty = 1, lwd = 2, pch = 16)
  })
}
Now you should see something like this:
At this point, any changes we make in the table are not reflected in the plot. To fix this, we need reactive expressions . We need to add them for each interaction that should result in an update.
First, we define a variable rv for our reactiveValues .
rv <- reactiveValues(timewise_input_data = NULL)
To set rv$timewise_input_data we observe() the initial data. If it changes, we set our reactive value to the data.
observe({ rv$timewise_input_data <- data })
To monitor edits to the table, we define a new observe() that will be triggered when input$table_cell_edit changes. We get the row and column index of the edited cell (input$table_cell_edit$row and input$table_cell_edit$col) and update the corresponding value in rv$timewise_input_data. 
The isolate() function ensures that changes to rv do not trigger this observe() function.\n# observe if the table is edited observe({ input$table_cell_edit row \u0026lt;- input$table_cell_edit$row # need to add one since the first column is the index clmn \u0026lt;- input$table_cell_edit$col + 1 isolate({ rv$timewise_input_data[row, clmn] \u0026lt;- input$table_cell_edit$value }) }) If the new value of the entry would be empty (\u0026quot;\u0026quot;), we want to reset the table. To do this, we set up a dataTableProxy() to efficiently update the table. Our resetTable() function is defined to dynamically replace the table data using the current state of rv$timewise_input_data. The function DT::replaceData() allows the table to be updated without resetting sorting, filtering, and pagination. We need to isolate() the data again so that the function is not called if rv changes!\ntableProxy \u0026lt;- DT::dataTableProxy(\u0026#34;table\u0026#34;) resetTable \u0026lt;- function() { DT::replaceData(tableProxy, isolate(rv$timewise_input_data), resetPaging = FALSE, rownames = FALSE) } Finally, we now need to reference rv$timewise_input_data instead of data in the plot renderer, so that the plot is updated whenever a table cell changes.\nClick to see the full code of the current state renderMirowidget_timewise_load_demand_and_cost_external_grid_data \u0026lt;- function(input, output, session, data, options = NULL, path = NULL, rendererEnv = NULL, views = NULL, outputScalarsFull = NULL, ...) { # The whole code is run at the beginning, even though no actions are performed yet. # init is used to only perform action in observe() after this initial run. # Therefore, it is set to TRUE in the last occurring observe() init \u0026lt;- FALSE rv \u0026lt;- reactiveValues( timewise_input_data = NULL ) # set the initial data observe({ rv$timewise_input_data \u0026lt;- data }) tableProxy \u0026lt;- DT::dataTableProxy(\u0026#34;table\u0026#34;) resetTable \u0026lt;- function() { DT::replaceData(tableProxy, isolate(rv$timewise_input_data), resetPaging = FALSE, rownames = FALSE) } # observe if the table is edited observe({ input$table_cell_edit row \u0026lt;- input$table_cell_edit$row # need to add one since the first column is the index clmn \u0026lt;- input$table_cell_edit$col + 1 # if the new value is empty, restore the value from before if (input$table_cell_edit$value == \u0026#34;\u0026#34;) { resetTable() return() } # else, update the corresponding value in the reactiveValue isolate({ rv$timewise_input_data[row, clmn] \u0026lt;- input$table_cell_edit$value }) }) # return the render for the placeholder \u0026#34;table\u0026#34; output$table \u0026lt;- DT::renderDT({ DT::datatable(rv$timewise_input_data, editable = TRUE, rownames = FALSE, options = list(scrollX = TRUE)) %\u0026gt;% DT::formatRound(c(\u0026#34;cost_external_grid\u0026#34;), digits = 2L) }) # return the render for the placeholder \u0026#34;timeline\u0026#34; output$timeline \u0026lt;- renderPlot({ # first extract all the needed information x \u0026lt;- rv$timewise_input_data[[\u0026#34;j\u0026#34;]] y1 \u0026lt;- rv$timewise_input_data[[\u0026#34;load_demand\u0026#34;]] y2 \u0026lt;- rv$timewise_input_data[[\u0026#34;cost_external_grid\u0026#34;]] # set the margin for the graph par(mar = c(5, 4, 4, 5)) # first, plot the load demand plot(y1, type = \u0026#34;l\u0026#34;, col = \u0026#34;green\u0026#34;, ylab = \u0026#34;Load demand in W\u0026#34;, lwd = 3, xlab = \u0026#34;\u0026#34;, xaxt = \u0026#34;n\u0026#34;, las = 2 ) points(y1, col = 
\u0026#34;green\u0026#34;, pch = 16, cex = 1.5) grid() # add second plot on the same graph for the external cost par(new = TRUE) # overlay a new plot plot(y2, type = \u0026#34;l\u0026#34;, col = \u0026#34;blue\u0026#34;, axes = FALSE, xlab = \u0026#34;\u0026#34;, ylab = \u0026#34;\u0026#34;, lwd = 3 ) points(y2, col = \u0026#34;blue\u0026#34;, pch = 16, cex = 1.5) # add a new y-axis on the right for the second line axis(side = 4, las = 2) mtext(\u0026#34;External grid cost in $\u0026#34;, side = 4, line = 3) grid() # add the x values to the axis axis(side = 1, at = 1:length(x), labels = x, las = 2) legend(\u0026#34;topleft\u0026#34;, legend = c(\u0026#34;Load demand\u0026#34;, \u0026#34;External grid cost\u0026#34;), col = c(\u0026#34;green\u0026#34;, \u0026#34;blue\u0026#34;), lty = 1, lwd = 2, pch = 16 ) }) } After these changes, we have a reactive table-plot combination, but it still behaves like an output renderer. We need to take a few final steps to turn this into a custom input widget so that the new data can be used to solve!\nFrom Custom Renderer To Custom Widget To turn the renderer into a widget, we save our renderer in the Configuration Mode and go to the directory where it was saved. Here we first need to change the name of the file to \u0026ldquo;mirowidget_timewise_load_demand_and_cost_external_grid_data.R\u0026rdquo; Now we need to rename the functions:\n- mirorenderer_timewise_load_demand_and_cost_external_grid_dataOutput \u0026lt;- function(id, height = NULL, options = NULL, path = NULL){ + mirowidget_timewise_load_demand_and_cost_external_grid_dataOutput \u0026lt;- function(id, height = NULL, options = NULL, path = NULL) { ... } - renderMirorenderer_timewise_load_demand_and_cost_external_grid_data \u0026lt;- function(input, output, session, data, options = NULL, path = NULL, rendererEnv = NULL, views = NULL, outputScalarsFull = NULL, ...){ + renderMirowidget_timewise_load_demand_and_cost_external_grid_data \u0026lt;- function(input, output, session, data, options = NULL, path = NULL, rendererEnv = NULL, views = NULL, outputScalarsFull = NULL, ...) { ... } We need to make some small changes to our code. The data parameter is no longer a tibble, but a reactive expression (data()). Therefore, we need to call it to retrieve the current tibble with our input data. Whenever the data changes (for example, because the user uploaded a new CSV file), the reactive expression is triggered, which in turn causes our table to be re-rendered with the new data.\n# set the initial data observe({ - rv$timewise_input_data \u0026lt;- data + rv$timewise_input_data \u0026lt;- data() }) All code is executed when the application is started, even though no actions have been performed yet. The init is used to execute actions in observe() only after this initial execution. It ensures that the reactive logic is not executed until the application is fully initialized and all data is loaded. Since we only have one observe() we set it to True here, if we had more we would set init to True in the last observe() block.\nif (!init) { init \u0026lt;\u0026lt;- TRUE return() } Now, we need to return the input data to be passed to GAMS(Py). For this, we provide a reactive() wrapper around rv$timewise_input_data. It ensures that the current state of the data is available as a reactive output, allowing us to pass the new data to the model. 
Otherwise Solve model would still use the old data!\nreturn(reactive({ rv$timewise_input_data })) Click to see the full code renderMirowidget_timewise_load_demand_and_cost_external_grid_data \u0026lt;- function(input, output, session, data, options = NULL, path = NULL, rendererEnv = NULL, views = NULL, outputScalarsFull = NULL, ...) { # The whole code is run at the beginning, even though no actions are performed yet. # init is used to only perform action in observe() after this initial run. # Therefore, it is set to TRUE in the last occurring observe() init \u0026lt;- FALSE rv \u0026lt;- reactiveValues( timewise_input_data = NULL ) # set the initial data observe({ rv$timewise_input_data \u0026lt;- data() }) tableProxy \u0026lt;- DT::dataTableProxy(\u0026#34;table\u0026#34;) resetTable \u0026lt;- function() { DT::replaceData(tableProxy, isolate(rv$timewise_input_data), resetPaging = FALSE, rownames = FALSE) } # observe if the table is edited observe({ input$table_cell_edit if (!init) { init \u0026lt;\u0026lt;- TRUE return() } row \u0026lt;- input$table_cell_edit$row # need to add one since the first column is the index clmn \u0026lt;- input$table_cell_edit$col + 1 # if the new value is empty, restore the value from before if (input$table_cell_edit$value == \u0026#34;\u0026#34;) { resetTable() return() } # else, update the corresponding value in the reactiveValue isolate({ rv$timewise_input_data[row, clmn] \u0026lt;- input$table_cell_edit$value }) }) # return the render for the placeholder \u0026#34;table\u0026#34; output$table \u0026lt;- DT::renderDT({ DT::datatable(rv$timewise_input_data, editable = TRUE, rownames = FALSE, options = list(scrollX = TRUE)) %\u0026gt;% DT::formatRound(c(\u0026#34;cost_external_grid\u0026#34;), digits = 2L) }) # return the render for the placeholder \u0026#34;timeline\u0026#34; output$timeline \u0026lt;- renderPlot({ # first extract all the needed information x \u0026lt;- rv$timewise_input_data[[\u0026#34;j\u0026#34;]] y1 \u0026lt;- rv$timewise_input_data[[\u0026#34;load_demand\u0026#34;]] y2 \u0026lt;- rv$timewise_input_data[[\u0026#34;cost_external_grid\u0026#34;]] # set the margin for the graph par(mar = c(5, 4, 4, 5)) # first, plot the load demand plot(y1, type = \u0026#34;l\u0026#34;, col = \u0026#34;green\u0026#34;, ylab = \u0026#34;Load demand in W\u0026#34;, lwd = 3, xlab = \u0026#34;\u0026#34;, xaxt = \u0026#34;n\u0026#34;, las = 2 ) points(y1, col = \u0026#34;green\u0026#34;, pch = 16, cex = 1.5) grid() # add second plot on the same graph for the external cost par(new = TRUE) # overlay a new plot plot(y2, type = \u0026#34;l\u0026#34;, col = \u0026#34;blue\u0026#34;, axes = FALSE, xlab = \u0026#34;\u0026#34;, ylab = \u0026#34;\u0026#34;, lwd = 3 ) points(y2, col = \u0026#34;blue\u0026#34;, pch = 16, cex = 1.5) # add a new y-axis on the right for the second line axis(side = 4, las = 2) mtext(\u0026#34;External grid cost in $\u0026#34;, side = 4, line = 3) grid() # add the x values to the axis axis(side = 1, at = 1:length(x), labels = x, las = 2) legend(\u0026#34;topleft\u0026#34;, legend = c(\u0026#34;Load demand\u0026#34;, \u0026#34;External grid cost\u0026#34;), col = c(\u0026#34;green\u0026#34;, \u0026#34;blue\u0026#34;), lty = 1, lwd = 2, pch = 16 ) }) # since this is an input, need to return the final data return(reactive({ rv$timewise_input_data })) } Finally, we need to remove the renderer in \u0026lt;model_name\u0026gt;.json and instead add \u0026quot;timewise_load_demand_and_cost_external_grid_data\u0026quot; to the 
\u0026quot;inputWidgets\u0026quot;:\n\u0026#34;inputWidgets\u0026#34;: { ... \u0026#34;timewise_load_demand_and_cost_external_grid_data\u0026#34;: { \u0026#34;alias\u0026#34;: \u0026#34;Timeline for load demand and cost of the external grid\u0026#34;, \u0026#34;apiVersion\u0026#34;: 2, \u0026#34;options\u0026#34;: { \u0026#34;isInput\u0026#34;: true }, \u0026#34;rendererName\u0026#34;: \u0026#34;mirowidget_timewise_load_demand_and_cost_external_grid_data\u0026#34;, \u0026#34;widgetType\u0026#34;: \u0026#34;custom\u0026#34; } } Congratulations - our new custom widget combines a table and a plot, with both updating interactively. At this point, Solve model will use our updated table whenever we change values and re-run the model.\nNow that you\u0026rsquo;ve mastered the basics of custom renderers in MIRO, you can explore more creative implementations. If you need more inspiration on what you can do with the custom renderer, take a look at the MIRO gallery , e.g. take a look at some applications with maps (TSP or VRPTW ).\nCustom Import and Export: Streamlining Your Data Workflow In any data-centric project, the ability to efficiently manage data movement is critical. While MIRO already provides a number of ways to import and export data - such as GDX, Excel, or CSV - there are many situations where you need more flexible solutions. For instance:\nYou might store data in a database and prefer not to export it to CSV first. You may gather data from multiple sources and need to reformat it so MIRO recognizes the correct symbol names. Custom import and export functions handle these scenarios by allowing you to:\nWork directly with databases or other file types. Perform pre- or post-processing steps within MIRO. Custom Importer Here, we will go over the basic concept to give you a good starting point for extending it to your needs. Again, we follow the documentation closely. First, let\u0026rsquo;s create a simple import function that gets the data for our generators. For ease of setup, we will just pretend to access a database and actually hardcode the data here.\nFor our custom importer, we need to create a new file in the renderer_\u0026lt;model_name\u0026gt; directory called miroimport.R. Here you can add several import functions, which should have the following signature:\nmiroimport_\u0026lt;importerName\u0026gt; \u0026lt;- function(symNames, localFile = NULL, views = NULL, attachments = NULL, metadata = NULL, customRendererDir = NULL, ...) { } Here we will only go over the parameters we will be using, for information on the others see the documentation . The \u0026quot;symNames\u0026quot; parameter is a character vector that specifies the names of the symbols for which data is to be retrieved. There is also an optional \u0026quot;localFile\u0026quot; parameter, which is a data frame containing one row for each uploaded file. What kind of data you can upload here is specified in \u0026lt;model_name\u0026gt;.json.\nWe also need to add the importer to the \u0026lt;model_name\u0026gt;.json, to do this we simply add a new key \u0026quot;customDataImport\u0026quot;:\n\u0026#34;customDataImport\u0026#34;: [ { \u0026#34;label\u0026#34;: \u0026#34;Gen specs import\u0026#34;, \u0026#34;functionName\u0026#34;: \u0026#34;miroimport_\u0026lt;importerName\u0026gt;\u0026#34;, \u0026#34;symNames\u0026#34;: [\u0026#34;generator_specifications\u0026#34;] } ] Where we simply specify the \u0026quot;label\u0026quot; the importer will have when you select it under Load data in the MIRO application. 
\u0026quot;functionName\u0026quot; specifies the name of our custom import function in miroimport.R. And \u0026quot;symNames\u0026quot; specifying which GAMS symbols the importer handles.\nIf you want to allow the user to upload files, you need to add \u0026quot;localFileInput\u0026quot;, which could look like this\n\u0026#34;customDataImport\u0026#34;: [ { ... \u0026#34;localFileInput\u0026#34;: { \u0026#34;label\u0026#34;: \u0026#34;Please upload your JSON file here\u0026#34;, \u0026#34;multiple\u0026#34;: false, \u0026#34;accept\u0026#34;: [\u0026#34;.json\u0026#34;, \u0026#34;application/json\u0026#34;] } } ] For more information on the available options, see the documentation .\nNow we can start our MIRO application and use the importer, but since we haven\u0026rsquo;t filled it with code yet, nothing happens. So let\u0026rsquo;s define miroimport_GenSpecs() to return a tibble with new generator specifications. This is done by returning a named list where the names correspond to the given \u0026quot;symbolNames\u0026quot;. Here we will simply hardcode it to return the same data as before, just changing the names to see that it actually imported the new data.\nmiroimport_GenSpecs \u0026lt;- function(symbolNames, localFile = NULL, views = NULL, attachments = NULL, metadata = NULL, customRendererDir = NULL, ...) { # Let\u0026#39;s say this is your result generator_specifications \u0026lt;- tibble( i = c(\u0026#34;gen3\u0026#34;, \u0026#34;gen4\u0026#34;, \u0026#34;gen5\u0026#34;), cost_per_unit = c(1.1, 1.3, 0.9), fixed_cost = c(220, 290, 200), min_power_output = c(50, 80, 10), max_power_output = c(100, 190, 70), min_up_time = c(4, 4, 4), min_down_time = c(2, 2, 2) ) # Now all you need to do is save the import symbols to a named list. import_data \u0026lt;- list(\u0026#34;generator_specifications\u0026#34; = generator_specifications) # And return the data to the MIRO application. return(import_data) } After saving, we can reload MIRO and select Gen specs import under Load data. The generator names will update accordingly, proving our custom code works. Although this example is hardcoded, the same framework can fetch data from any source, fix column names to fit MIRO\u0026rsquo;s symbols (stored in \u0026quot;symbolNames\u0026quot;), or perform more complicated transformations such as database queries.\nIn a real scenario with database queries, you\u0026rsquo;ll likely store credentials in a secure environment. MIRO allows you to specify environments; this is where we store our credentials. For MIRO Desktop , create a JSON file - e.g., miro-env.js - that looks like:\n{ \u0026#34;DB_USERNAME\u0026#34;: \u0026#34;User1\u0026#34;, \u0026#34;DB_PASSWORD\u0026#34;: \u0026#34;mySuperSecretPassword!\u0026#34; } Now in MIRO Desktop go to File and then to Preferences. Under Environment you can now upload the json file. You can access these credentials via Sys.getenv() inside your importer, for example:\nmiroimport_GenSpecs \u0026lt;- function(symbolNames, localFile = NULL, views = NULL, attachments = NULL, metadata = NULL, customRendererDir = NULL, ...) { # Where you get your data from depends on your data structures. # Let\u0026#39;s say we have a MySQL database that contains our generator specifications. # To gain access, we store our credentials in the environment. 
# Establish connection con \u0026lt;- dbConnect( RMySQL::MySQL(), dbname = \u0026#34;your_database_name\u0026#34;, host = \u0026#34;your_host_address\u0026#34;, port = 3306, user = Sys.getenv(\u0026#34;DB_USERNAME\u0026#34;), password = Sys.getenv(\u0026#34;DB_PASSWORD\u0026#34;) ) # Run a SQL query and fetch data into a data frame query_result \u0026lt;- dbGetQuery(con, \u0026#34;SELECT * FROM generator_specifications\u0026#34;) # Now all you need to do is save the import symbols to a named list. import_data \u0026lt;- list(\u0026#34;generator_specifications\u0026#34; = query_result) # And return the data to the MIRO application. return(import_data) } In the documentation you can find an example that also handles file uploads.\nBy now, you should be well-equipped to write your own custom importer that handles all the data collection and preprocessing your application requires.\nCustom Exporter A custom exporter works similarly. We need to add a miroexport.R to the renderer_\u0026lt;model_name\u0026gt; directory first, which should have the following signature:\nmiroexport_\u0026lt;exporterName\u0026gt; \u0026lt;- function(data, path = NULL, views = NULL, attachments = NULL, metadata = NULL, customRendererDir = NULL, ...) { } Where \u0026quot;data\u0026quot; is again a named list of tibbles containing all input and output symbols of the model and \u0026quot;path\u0026quot; is the path to the (temporary) file provided to the user for download (optional). This depends on how you specified it in the json file:\n{ \u0026#34;customDataExport\u0026#34;: [ { \u0026#34;label\u0026#34;: \u0026#34;Custom report export\u0026#34;, \u0026#34;functionName\u0026#34;: \u0026#34;miroexport_Markdown\u0026#34;, \u0026#34;localFileOutput\u0026#34;: { \u0026#34;filename\u0026#34;: \u0026#34;report.md\u0026#34;, \u0026#34;contentType\u0026#34;: \u0026#34;application/md\u0026#34; } } ] } Again, we need to link the \u0026quot;functionName\u0026quot;, and if we want to create an output file, we need to specify it in \u0026quot;localFileOutput\u0026quot;. Here, we\u0026rsquo;ve chosen to generate a markdown file.\nInside miroexport_Markdown(), we do whatever tasks we want, such as:\nWriting data back to a database (even the input data, since it may have changed due to the interactive nature of the application). Generating a downloadable file. Merging input parameters with output results in a custom format. Below is an example that writes a small Markdown report. Helpful functions in this case are \u0026quot;writeLines()\u0026quot; , \u0026quot;paste()\u0026quot; , \u0026quot;filter()\u0026quot; , \u0026quot;pull()\u0026quot; , \u0026quot;apply()\u0026quot; \u0026hellip;\nYour result could look something like this:\nOur final total cost is: 26635 $\nWith a battery power (delivery) rate of 130 kW and a battery energy (storage) rate of 420 kWh.\nWith the following generator specifications:\ni cost_per_unit fixed_cost min_power_output max_power_output min_up_time min_down_time gen0 1.1 220 50 100 4 2 gen1 1.3 290 80 190 4 2 gen2 0.9 200 10 70 4 2 Click to see the code for the custom exporter miroexport_Markdown \u0026lt;- function(data, path = NULL, views = NULL, attachments = NULL, metadata = NULL, customRendererDir = NULL, ...) { # First, extract the values you want to display. 
total_cost \u0026lt;- data[[\u0026#34;_scalars_out\u0026#34;]] %\u0026gt;% filter(scalar == \u0026#34;total_cost\u0026#34;) %\u0026gt;% pull(value) %\u0026gt;% as.numeric() %\u0026gt;% round(2) battery_delivery_rate \u0026lt;- data[[\u0026#34;_scalarsve_out\u0026#34;]] %\u0026gt;% filter(scalar == \u0026#34;battery_delivery_rate\u0026#34;) %\u0026gt;% pull(level) battery_storage \u0026lt;- data[[\u0026#34;_scalarsve_out\u0026#34;]] %\u0026gt;% filter(scalar == \u0026#34;battery_storage\u0026#34;) %\u0026gt;% pull(level) output_string \u0026lt;- paste( \u0026#34;Our final total cost is: \u0026#34;, total_cost, \u0026#34;$\\n\\nWith a battery power (delivery) rate of \u0026#34;, battery_delivery_rate, \u0026#34;kW and a battery energy (storage) rate of \u0026#34;, battery_storage, \u0026#34;kWh.\u0026#34; ) # Open a connection to the output file file_conn \u0026lt;- file(path, \u0026#34;w\u0026#34;) # Then write them to the output file. writeLines(output_string, file_conn) writeLines(\u0026#34;\\n\\n\u0026#34;, file_conn) # Let\u0026#39;s add the generator specifications used writeLines(\u0026#34;With the following generator specifications:\\n\\n\u0026#34;, file_conn) # Extract the table table \u0026lt;- data[[\u0026#34;generator_specifications\u0026#34;]] # Convert the table to a Markdown-style string # Create the header headers \u0026lt;- paste(names(table), collapse = \u0026#34; | \u0026#34;) separator \u0026lt;- paste(rep(\u0026#34;---\u0026#34;, length(table)), collapse = \u0026#34; | \u0026#34;) rows \u0026lt;- apply(table, 1, function(row) paste(row, collapse = \u0026#34; | \u0026#34;)) # Write the table to the file writeLines(paste(headers, separator, paste(rows, collapse = \u0026#34;\\n\u0026#34;), sep = \u0026#34;\\n\u0026#34;), file_conn) # Close the file connection close(file_conn) # If you also want to save the data to a database, # you can do that here as well, similar to the import function. } If your exporter uploads results back to a database, you can again use environment variables for credentials, just like in the importer.\nDeployment As a very last step, you will probably want to deploy your new shiny MIRO application. Covering deployment in detail would go beyond the scope of this tutorial, so we encourage you to read the documentation: Deployment . And when you are add it also check out GAMS MIRO Server if you are interested in running MIRO in the cloud.\nKey Takeaways Unlimited Customization: R-based renderers let you do anything from advanced plotting to building interactive features. Leverage Shiny Ecosystem: Shiny\u0026rsquo;s reactive expressions help you link user actions (sliders, clicks) with real-time graph updates. GAMS(Py) for Logic, R for Visuals: Use Python or GAMS to handle calculations; R custom renderers are perfect for specialized visual displays. Flexible Format Support: Whether CSV, Excel, JSON, or SQL queries, custom scripts can unify multiple sources or produce tailored outputs. Direct Database Access: Skip manual file conversions by pulling/pushing data straight to and from external DBs. Pre/Post Processing: Clean or transform your data automatically before it even reaches MIRO or after results are generated. Conclusion Throughout this tutorial, we have seen how MIRO empowers you to develop powerful, interactive optimization applications - from rapidly prototyping inputs and outputs to creating intuitive dashboards. 
We began by defining basic inputs and outputs, then explored how to use the Configuration Mode to effortlessly refine the user interface and data visualization. Going further, we looked at custom renderers to integrate additional functionality or visualization libraries in R Shiny, and even created a custom widget to give users instant feedback on their input changes.\nFinally, we addressed the importance of integrating MIRO within larger data ecosystems. By using custom import and custom export functions, you can directly connect to databases, perform preprocessing or postprocessing, and generate tailored output reports. With these tools at hand, MIRO is not merely an optimization front-end but a flexible, end-to-end platform for building and deploying sophisticated data-driven applications.\nUse these examples as a starting point for your own projects, adapting each feature - Configuration Mode, custom renderers, widgets, importers, and exporters - to suit your organization\u0026rsquo;s needs. By taking advantage of MIRO\u0026rsquo;s extensibility, you can streamline data workflows, create intuitive dashboards, and deliver robust analytical models to users across your organization.\nReference Repository If you\u0026rsquo;d like to see a fully operational version of this tutorial in action, head over to our Repository . It contains:\nA self-contained folder with the GAMSPy model setup JSON configuration files for MIRO customization Example R scripts for custom renderers, widgets, and data import/export Feel free to clone or fork the repo, adapt it for your organization\u0026rsquo;s workflows, and submit improvements via pull requests!\n","excerpt":"In the third part of our GAMS MIRO walkthrough, we will fine-tune our application with custom code. We\u0026rsquo;ll show you how to create a custom renderer, widget, and importer/exporter.","ref":"/blog/2025/07/gams-miro-walkthrough-part-3/","title":"GAMS MIRO Walkthrough Part 3"},{"body":"","excerpt":"","ref":"/authors/jhasselbring/","title":"Janina Hasselbring"},{"body":"","excerpt":"","ref":"/categories/miro/","title":"MIRO"},{"body":" Configuration Mode In the last part we went from a GAMSPy model to a first basic GAMS MIRO application for this gallery example. Now that we have a better understanding of our model and are confident that it satisfies the given constraints while providing a reasonable solution, we can begin to configure our application.\nConfiguration Mode General Settings Symbols Tables Input Widgets Graphs Scenario analysis Database management Dashboard Adding Additional Data Value Boxes Data Views Configuring Charts and Tables Dashboard Comparison Key Takeaways To do this, we will start our MIRO application in Configuration Mode .\ngamspy run miro --mode=\u0026#34;config\u0026#34; --path \u0026lt;path_to_your_MIRO_installation\u0026gt; --model \u0026lt;path_to_your_model\u0026gt; You should see the following:\nThe Configuration Mode gives us access to a wealth of out-of-the-box customization options, so we don\u0026rsquo;t need to write any code for now.\nGeneral Settings Let\u0026rsquo;s start by adjusting some general settings. We can give our application a title, add a logo, include a README, and enable loading the default scenario at startup. These are just a few of the available options. If your company has a specific CSS style, you could include it here as well. For the complete list of settings, see the General settings documentation.\nSymbols Next, we move to the Symbols section. 
First, we change our symbol aliases to something more intuitive. Then, assuming we might want to tweak scalar inputs often, we change the order in which the input symbols appear. Finally, in some cases, we need to mark variables or parameters as outputs only so we can use them in a custom renderer (we\u0026rsquo;ll introduce custom renderers in the next part). If such outputs are solely for backend use, we might hide them to avoid cluttering the output section.\nTables In the Tables section, we can customize the general configuration of input and output tables. In our example, this is optional - our current settings work well enough.\nInput Widgets Input widgets are all items that communicate input data with the model. We have several inputs and we will customize them in the Input Widgets section. Let\u0026rsquo;s take a look at our scalar inputs first. We can choose between sliders, drop down menus, checkboxes, or numeric inputs. Here, we\u0026rsquo;ll set them to sliders. If we don\u0026rsquo;t want to impose any restrictions on the value (minimum, maximum and increment), we would stay with numeric inputs. The best choice depends on the nature of the input data.\nFor our multidimensional inputs, tables are the only direct option in Configuration Mode. We can pick from three table types. Because our current datasets are relatively small and we don\u0026rsquo;t plan significant editing, we\u0026rsquo;ll stick with the default table. If we anticipate working with massive datasets, switching to the performance-optimized Big Data Table is wise. If you know you will be doing a lot of data slicing in your table, you should choose the Pivot Table. For more details on table types, see the documentation .\nIf these three table types aren\u0026rsquo;t sufficient for your needs, you can build a custom widget - a process we\u0026rsquo;ll see in the next part.\nGraphs Finally, let\u0026rsquo;s explore the Graphs . This is where we can experiment with data visualization. For every multidimensional symbol (input or output), we can define a default visualization. We can choose from the most common plot types or use the Pivot Table again, which we used during rapid prototyping. If we\u0026rsquo;ve already created useful views, we can now set them as defaults so that anyone opening the application immediately sees the relevant charts.\nWe won\u0026rsquo;t cover every possibility here because we looked at the Pivot tool in detail earlier. However, let\u0026rsquo;s check out a small example using value boxes for our output. First, we select a scenario (currently, only the default scenario is available). Then we pick the GAMS symbol _scalars_out: Output Scalars and choose the charting type Valuebox for scalar values. From there, we can specify the order of the value boxes, their colors, and units. After clicking Save, we launch the application in Base Mode and see something like this:\nWe can also add the views we set up in the previous section.\nIf you are looking for something specific, check out the documentation , which provides an extensive guide to all available plot types.\nEach change we make in the Configuration Mode is automatically saved to \u0026lt;model_name\u0026gt;.json. 
In the documentation you will find the corresponding JSON snippets you would need to add, but don\u0026rsquo;t worry - this is exactly what the Configuration Mode does when you save a graph!\nFinally, in the Charting Type drop down menu you will also find the Custom Renderer option, which we will talk about in the next part.\nScenario analysis MIRO has several built-in scenario comparison modes that allow you to compare the input and/or output data of different model runs. While most compare modes are available out of the box, you can enable a dashboard for scenario data comparison with some app-specific configuration. We will introduce this dashboard comparison mode later in this part, once the regular dashboard renderer has been introduced .\nDatabase management Finally, the Configuration Mode also allows you to back up, remove, or restore a database .\nSince all of these configurations take little time, the result could serve as a first draft to show your management. Now they can get an idea of what the final product might look like, and you can go deeper and add any further customizations you need. How to do this is explained in the next part.\nDashboard You may have already noticed the Dashboard option in the Graphs section of the MIRO documentation. If we have several saved views - perhaps some combined with Key Performance Indicators (KPIs) - a dashboard can provide an organized overview of our application output.\nCreating a dashboard is not directly possible from Configuration Mode. Instead, we need to edit our \u0026lt;model_name\u0026gt;.json file. To add a dashboard, we will follow the explanation in the documentation . Here we will only discuss the parts we use; for more information, check the documentation.\nBefore we modify the JSON file, we need to decide how we want the final dashboard to look. Specifically, we should choose:\nValue Boxes (Tiles): Which scalar values we want to highlight, and whether they serve as KPIs. Associated Views: Which views will be linked to each value box. Most likely, we can reuse the views we created earlier. We find our \u0026lt;model_name\u0026gt;.json file in the conf_\u0026lt;model_name\u0026gt; directory. Here, we look for the dataRendering key - or define it if it doesn\u0026rsquo;t exist (it won\u0026rsquo;t, if you followed this tutorial). We need to pick an output symbol to serve as our main parameter, but the choice isn\u0026rsquo;t critical - we can add other symbols later as needed. Note, however, that we can\u0026rsquo;t define another renderer for this specific symbol if we choose to have more output tabs than just the dashboard.\nFor this example, we\u0026rsquo;ll choose \u0026quot;_scalarsve_out\u0026quot;. This symbol contains all scalar output values of variables and equations. Because we probably won\u0026rsquo;t create an individual renderer for them, it\u0026rsquo;s a convenient symbol choice for our dashboard.\nGetting more specific, in bess.json we now need to configure three things:\nThe value boxes and whether they should display a scalar value (KPI). Which data view corresponds to which value box and which charts/tables it will contain. The individual charts/tables. 
Here\u0026rsquo;s the basic layout of our dashboard configuration for the symbol \u0026quot;_scalarsve_out\u0026quot;:\n{ \u0026#34;dataRendering\u0026#34;: { \u0026#34;_scalarsve_out\u0026#34;: { \u0026#34;outType\u0026#34;: \u0026#34;dashboard\u0026#34;, \u0026#34;additionalData\u0026#34;: [], \u0026#34;options\u0026#34;: { \u0026#34;valueBoxesTitle\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;valueBoxes\u0026#34;: { ... }, \u0026#34;dataViews\u0026#34;: { ... }, \u0026#34;dataViewsConfig\u0026#34;: { ... } } } }, } If we already had other renderers, they would appear under dataRendering as well; we\u0026rsquo;ll add ours in the next section.\nTo keep the code snippets concise, we will only look at the options we changed and provide the full JSON at the end.\nAdding Additional Data Usually, we don\u0026rsquo;t immediately know every dataset we need. In this tutorial, however, we already plan to use \u0026quot;report_output\u0026quot;, \u0026quot;gen_power\u0026quot;, \u0026quot;battery_power\u0026quot; and \u0026quot;external_grid_power\u0026quot;, since we already have an idea of which views we want to display. But of course you can add or remove symbols at any time. Furthermore, we will add the input symbol \u0026quot;generator_specifications\u0026quot; so we can easily check whether the generator characteristics are fulfilled. All needed symbols are added to \u0026quot;additionalData\u0026quot;:\n\u0026#34;additionalData\u0026#34;: [\u0026#34;report_output\u0026#34;, \u0026#34;gen_power\u0026#34;, \u0026#34;battery_power\u0026#34;, \u0026#34;external_grid_power\u0026#34;, \u0026#34;generator_specifications\u0026#34;] Value Boxes In the options we can first add a title for the value boxes.\n\u0026#34;valueBoxesTitle\u0026#34;: \u0026#34;Summary indicators\u0026#34;, Let\u0026rsquo;s create six value boxes in total, but we\u0026rsquo;ll only discuss the first two in detail. Try adding the others for the ids: \u0026quot;battery_power\u0026quot;, \u0026quot;external_grid_power\u0026quot;, \u0026quot;battery_delivery_rate\u0026quot; and \u0026quot;battery_storage\u0026quot;. Each value box needs:\nA unique id (to link it to a corresponding data view, if any). An optional scalar parameter as KPI. If you don\u0026rsquo;t have a matching KPI, but still want to have the view in the dashboard, just set it to null. Style parameters (see the value box documentation for more information). 
\u0026#34;valueBoxes\u0026#34;: { \u0026#34;color\u0026#34;: [\u0026#34;black\u0026#34;, \u0026#34;olive\u0026#34;], \u0026#34;decimals\u0026#34;: [2, 2], \u0026#34;icon\u0026#34;: [\u0026#34;chart-simple\u0026#34;, \u0026#34;chart-simple\u0026#34;], \u0026#34;id\u0026#34;: [\u0026#34;total_cost\u0026#34;, \u0026#34;gen_power\u0026#34;], \u0026#34;noColor\u0026#34;: [true, true], \u0026#34;postfix\u0026#34;: [\u0026#34;$\u0026#34;, \u0026#34;$\u0026#34;], \u0026#34;prefix\u0026#34;: [\u0026#34;\u0026#34;, \u0026#34;\u0026#34;], \u0026#34;redPositive\u0026#34;: [false, false], \u0026#34;title\u0026#34;: [\u0026#34;Total Cost\u0026#34;, \u0026#34;Generators\u0026#34;], \u0026#34;valueScalar\u0026#34;: [\u0026#34;total_cost\u0026#34;, \u0026#34;total_cost_gen\u0026#34;] } Click to see the code for all six boxes \u0026#34;valueBoxes\u0026#34;: { \u0026#34;color\u0026#34;: [\u0026#34;black\u0026#34;, \u0026#34;olive\u0026#34;, \u0026#34;blue\u0026#34;, \u0026#34;red\u0026#34;, \u0026#34;blue\u0026#34;, \u0026#34;blue\u0026#34;], \u0026#34;decimals\u0026#34;: [2, 2, 2, 2, 2, 2], \u0026#34;icon\u0026#34;: [\u0026#34;chart-simple\u0026#34;, \u0026#34;chart-simple\u0026#34;, \u0026#34;chart-line\u0026#34;, \u0026#34;chart-line\u0026#34;, \u0026#34;bolt\u0026#34;, \u0026#34;battery-full\u0026#34;], \u0026#34;id\u0026#34;: [\u0026#34;total_cost\u0026#34;, \u0026#34;gen_power\u0026#34;, \u0026#34;battery_power\u0026#34;, \u0026#34;external_grid_power\u0026#34;, \u0026#34;battery_delivery_rate\u0026#34;, \u0026#34;battery_storage\u0026#34;], \u0026#34;noColor\u0026#34;: [true, true, true, true, true, true], \u0026#34;postfix\u0026#34;: [ \u0026#34;$\u0026#34;, \u0026#34;$\u0026#34;, \u0026#34;$\u0026#34;, \u0026#34;$\u0026#34;, \u0026#34;kW\u0026#34;, \u0026#34;kWh\u0026#34;], \u0026#34;prefix\u0026#34;: [\u0026#34;\u0026#34;, \u0026#34;\u0026#34;, \u0026#34;\u0026#34;, \u0026#34;\u0026#34;, \u0026#34;\u0026#34;, \u0026#34;\u0026#34;], \u0026#34;redPositive\u0026#34;: [ false, false, false, false, false, false], \u0026#34;title\u0026#34;: [\u0026#34;Total Cost\u0026#34;, \u0026#34;Generators\u0026#34;, \u0026#34;BESS\u0026#34;, \u0026#34;External Grid\u0026#34;, \u0026#34;Power Capacity\u0026#34;, \u0026#34;Energy Capacity\u0026#34;], \u0026#34;valueScalar\u0026#34;: [\u0026#34;total_cost\u0026#34;, \u0026#34;total_cost_gen\u0026#34;, \u0026#34;total_cost_battery\u0026#34;, \u0026#34;total_cost_extern\u0026#34;, \u0026#34;battery_delivery_rate\u0026#34;, \u0026#34;battery_storage\u0026#34;] } Data Views Next, under \u0026quot;dataViews\u0026quot;, we define which charts or tables belong to each value box. A data view is displayed when the corresponding value box is clicked on in the dashboard. Multiple charts and tables can be displayed. We only connect data views to the first four value boxes, leaving the last two without any dedicated view. This is done by simply not specifying a data view for those id\u0026rsquo;s.\nThe key of a data view (e.g. \u0026quot;battery_power\u0026quot;) must match the id of a value box in \u0026quot;valueBoxes\u0026quot;. We start each data view with the id from the corresponding value box, then we assign a list of objects to it. Each object within the list has a key (e.g., \u0026quot;BatteryTimeline\u0026quot;) that references a chart or table we will define next in \u0026quot;dataViewsConfig\u0026quot;, and as value we assign the optional title that will be displayed above the view in the dashboard. 
If you want to have more than one chart/table in a view, just add a second element to the object, as is done for \u0026quot;gen_power\u0026quot;.\n\u0026#34;dataViews\u0026#34;: { \u0026#34;battery_power\u0026#34;: [ {\u0026#34;BatteryTimeline\u0026#34;: \u0026#34;Charge/Discharge of the BESS\u0026#34;} ], \u0026#34;external_grid_power\u0026#34;: [ {\u0026#34;ExternalTimeline\u0026#34;: \u0026#34;Power taken from the external grid\u0026#34;} ], \u0026#34;gen_power\u0026#34;: [ {\u0026#34;GeneratorTimeline\u0026#34;: \u0026#34;Generators Timeline\u0026#34;}, {\u0026#34;GeneratorSpec\u0026#34;: \u0026#34;\u0026#34;} ], \u0026#34;total_cost\u0026#34;: [ {\u0026#34;Balance\u0026#34;: \u0026#34;Load demand fulfillment over time\u0026#34;} ] } Configuring Charts and Tables The only thing left to do is to specify the actual charts/tables to be displayed. This is also explained in detail in the documentation . The easiest way to add charts/tables is:\nCreate views in the application via the pivot tool. Save these views. Download the JSON configuration for the views (via Scenario (top right corner of the application) -\u0026gt; Edit metadata -\u0026gt; View). Copy the JSON configuration to the \u0026quot;dataViewsConfig\u0026quot; section. Most of the configuration can be copied directly. We just need to change the way we define which symbol the view is based on. It is no longer defined outside, but we will add \u0026quot;data: \u0026quot;report_output\u0026quot; to specify the symbol, otherwise MIRO will base the view on \u0026quot;_scalarsve_out\u0026quot; since that is the variable the renderer is based on. { - \u0026#34;report_output\u0026#34;: { \u0026#34;Balance\u0026#34;: { ... + \u0026#34;data\u0026#34;: \u0026#34;report_output\u0026#34;, ... } - } } The complete configuration in \u0026quot;dataViewsConfig\u0026quot; looks like this:\nClick to see the code for all four views \u0026#34;dataViewsConfig\u0026#34;: { \u0026#34;Balance\u0026#34;: { \u0026#34;aggregationFunction\u0026#34;: \u0026#34;sum\u0026#34;, \u0026#34;chartOptions\u0026#34;: { \u0026#34;multiChartOptions\u0026#34;: { \u0026#34;multiChartRenderer\u0026#34;: \u0026#34;line\u0026#34;, \u0026#34;multiChartStepPlot\u0026#34;: false, \u0026#34;showMultiChartDataMarkers\u0026#34;: false, \u0026#34;stackMultiChartSeries\u0026#34;: \u0026#34;no\u0026#34; }, \u0026#34;multiChartSeries\u0026#34;: \u0026#34;load_demand\u0026#34;, \u0026#34;showXGrid\u0026#34;: true, \u0026#34;showYGrid\u0026#34;: true, \u0026#34;singleStack\u0026#34;: false, \u0026#34;yLogScale\u0026#34;: false, \u0026#34;yTitle\u0026#34;: \u0026#34;power\u0026#34; }, \u0026#34;cols\u0026#34;: { \u0026#34;power_output_header\u0026#34;: null }, \u0026#34;data\u0026#34;: \u0026#34;report_output\u0026#34;, \u0026#34;domainFilter\u0026#34;: { \u0026#34;default\u0026#34;: null }, \u0026#34;pivotRenderer\u0026#34;: \u0026#34;stackedbar\u0026#34;, \u0026#34;rows\u0026#34;: \u0026#34;j\u0026#34;, \u0026#34;tableSummarySettings\u0026#34;: { \u0026#34;colSummaryFunction\u0026#34;: \u0026#34;sum\u0026#34;, \u0026#34;enabled\u0026#34;: false, \u0026#34;rowSummaryFunction\u0026#34;: \u0026#34;sum\u0026#34; } }, \u0026#34;BatteryTimeline\u0026#34;: { \u0026#34;aggregationFunction\u0026#34;: \u0026#34;sum\u0026#34;, \u0026#34;chartOptions\u0026#34;: { \u0026#34;showDataMarkers\u0026#34;: true, \u0026#34;showXGrid\u0026#34;: true, \u0026#34;showYGrid\u0026#34;: true, \u0026#34;stepPlot\u0026#34;: false, \u0026#34;yLogScale\u0026#34;: false, \u0026#34;yTitle\u0026#34;: 
\u0026#34;power\u0026#34; }, \u0026#34;data\u0026#34;: \u0026#34;battery_power\u0026#34;, \u0026#34;domainFilter\u0026#34;: { \u0026#34;default\u0026#34;: null }, \u0026#34;filter\u0026#34;: { \u0026#34;Hdr\u0026#34;: \u0026#34;level\u0026#34; }, \u0026#34;pivotRenderer\u0026#34;: \u0026#34;line\u0026#34;, \u0026#34;rows\u0026#34;: \u0026#34;j\u0026#34;, \u0026#34;tableSummarySettings\u0026#34;: { \u0026#34;colEnabled\u0026#34;: false, \u0026#34;colSummaryFunction\u0026#34;: \u0026#34;sum\u0026#34;, \u0026#34;rowEnabled\u0026#34;: false, \u0026#34;rowSummaryFunction\u0026#34;: \u0026#34;sum\u0026#34; } }, \u0026#34;ExternalTimeline\u0026#34;: { \u0026#34;aggregationFunction\u0026#34;: \u0026#34;sum\u0026#34;, \u0026#34;chartOptions\u0026#34;: { \u0026#34;showDataMarkers\u0026#34;: true, \u0026#34;showXGrid\u0026#34;: true, \u0026#34;showYGrid\u0026#34;: true, \u0026#34;stepPlot\u0026#34;: false, \u0026#34;yLogScale\u0026#34;: false, \u0026#34;yTitle\u0026#34;: \u0026#34;power\u0026#34; }, \u0026#34;data\u0026#34;: \u0026#34;external_grid_power\u0026#34;, \u0026#34;domainFilter\u0026#34;: { \u0026#34;default\u0026#34;: null }, \u0026#34;filter\u0026#34;: { \u0026#34;Hdr\u0026#34;: \u0026#34;level\u0026#34; }, \u0026#34;pivotRenderer\u0026#34;: \u0026#34;line\u0026#34;, \u0026#34;rows\u0026#34;: \u0026#34;j\u0026#34;, \u0026#34;tableSummarySettings\u0026#34;: { \u0026#34;colEnabled\u0026#34;: false, \u0026#34;colSummaryFunction\u0026#34;: \u0026#34;sum\u0026#34;, \u0026#34;rowEnabled\u0026#34;: false, \u0026#34;rowSummaryFunction\u0026#34;: \u0026#34;sum\u0026#34; } }, \u0026#34;GeneratorSpec\u0026#34;: { \u0026#34;aggregationFunction\u0026#34;: \u0026#34;sum\u0026#34;, \u0026#34;pivotRenderer\u0026#34;: \u0026#34;table\u0026#34;, \u0026#34;domainFilter\u0026#34;: { \u0026#34;default\u0026#34;: null }, \u0026#34;tableSummarySettings\u0026#34;: { \u0026#34;rowEnabled\u0026#34;: false, \u0026#34;rowSummaryFunction\u0026#34;: \u0026#34;sum\u0026#34;, \u0026#34;colEnabled\u0026#34;: false, \u0026#34;colSummaryFunction\u0026#34;: \u0026#34;sum\u0026#34; }, \u0026#34;data\u0026#34;: \u0026#34;generator_specifications\u0026#34;, \u0026#34;rows\u0026#34;:\u0026#34;i\u0026#34;, \u0026#34;cols\u0026#34;: {\u0026#34;Hdr\u0026#34;: null} }, \u0026#34;GeneratorTimeline\u0026#34;: { \u0026#34;aggregationFunction\u0026#34;: \u0026#34;sum\u0026#34;, \u0026#34;chartOptions\u0026#34;: { \u0026#34;showXGrid\u0026#34;: true, \u0026#34;showYGrid\u0026#34;: true, \u0026#34;singleStack\u0026#34;: false, \u0026#34;yLogScale\u0026#34;: false, \u0026#34;yTitle\u0026#34;: \u0026#34;power\u0026#34; }, \u0026#34;cols\u0026#34;: { \u0026#34;i\u0026#34;: null }, \u0026#34;data\u0026#34;: \u0026#34;gen_power\u0026#34;, \u0026#34;domainFilter\u0026#34;: { \u0026#34;default\u0026#34;: null }, \u0026#34;filter\u0026#34;: { \u0026#34;Hdr\u0026#34;: \u0026#34;level\u0026#34; }, \u0026#34;pivotRenderer\u0026#34;: \u0026#34;stackedbar\u0026#34;, \u0026#34;rows\u0026#34;: \u0026#34;j\u0026#34;, \u0026#34;tableSummarySettings\u0026#34;: { \u0026#34;colEnabled\u0026#34;: false, \u0026#34;colSummaryFunction\u0026#34;: \u0026#34;sum\u0026#34;, \u0026#34;rowEnabled\u0026#34;: false, \u0026#34;rowSummaryFunction\u0026#34;: \u0026#34;sum\u0026#34; } } } Finally, we end up with this dashboard:\nNow that we\u0026rsquo;ve combined multiple outputs into a single dashboard, it makes sense to hide the tabs for the individual output symbols and rename the dashboard tab for clarity (in the config mode). 
Just a heads up: you should keep \u0026quot;report_output\u0026quot;, as we will add a custom renderer for it in the next part.\nIt is also possible to add custom code to the dashboard. However, since this requires a bit more effort and you need to know how to create a custom renderer in the first place, we will leave this for the next part.\nDashboard Comparison As mentioned before, MIRO provides three built-in scenario comparison modes, accessible under the Compare scenarios tab. The Split view comparison mode displays two scenarios side by side, showing all configured renderers for both input and output symbols - this includes the previously created dashboard. As an example, we will compare our default setting with a scenario where we set the cost of the BESS to zero:\nIf you need to compare more than two scenarios, you can use the Tab view comparison mode, which organizes any number of scenarios (and their renderers) into separate tabs. Finally, the Pivot view comparison mode merges all scenario data into one pivot table for each symbol. It is the same pivot tool, with all its possibilities, that we have already used extensively.\nIn addition to these ready-to-use comparison modes and custom compare modules , there is one more - the dashboard comparison mode - which must be configured specifically for the app before it can be used. We will do this next.\nThe configuration of our regular dashboard renderer can largely be reused; we just need to make a few small adjustments:\nWe just configured the dashboard in the dataRendering section of the \u0026lt;model_name\u0026gt;.json file. For scenario comparison, the configuration should be placed in a separate section called compareModules.\nWhile a regular dashboard configuration applies to a single symbol, a scenario comparison is symbol-unspecific. This means that the scenario comparison has access to all input and output symbol data by default. As a result, you don\u0026rsquo;t need to manually list each symbol under additionalData. This also means that the symbol data to be used for a chart/table must be specified in each view in \u0026quot;dataViewsConfig\u0026quot; (\u0026quot;data\u0026quot; property). However, if you have followed the tutorial, this was already done for all views.\nInstead of the \u0026quot;outType\u0026quot; in the dashboard configuration, here we have a \u0026quot;type\u0026quot;: \u0026quot;dashboard\u0026quot;.\nWe also need to assign a label that will be displayed when the scenario comparison mode is selected. This label appears next to the options Split view, Tab view and Pivot view.\n{ \u0026#34;dataRendering\u0026#34;: { \u0026#34;\u0026lt;lowercase_symbolname\u0026gt;\u0026#34;: { - \u0026#34;outType\u0026#34;: \u0026#34;dashboard\u0026#34;, - \u0026#34;additionalData\u0026#34;: [], \u0026#34;options\u0026#34;: { \u0026#34;valueBoxesTitle\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;valueBoxes\u0026#34;: { ... }, \u0026#34;dataViews\u0026#34;: { ... }, \u0026#34;dataViewsConfig\u0026#34;: { ... } } } }, \u0026#34;compareModules\u0026#34;: [ { + \u0026#34;type\u0026#34;: \u0026#34;dashboard\u0026#34;, + \u0026#34;label\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;options\u0026#34;: { \u0026#34;valueBoxesTitle\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;valueBoxes\u0026#34;: { ... }, \u0026#34;dataViews\u0026#34;: { ... }, \u0026#34;dataViewsConfig\u0026#34;: { ... 
} } } ] } While we can copy \u0026quot;valueBoxes\u0026quot; and \u0026quot;dataViews\u0026quot; directly, we need to take a closer look at \u0026quot;dataViewsConfig\u0026quot;! As mentioned above, we need to specify what \u0026quot;data\u0026quot; the view is based on. Also, your data displayed in tables and graphs now has an additional dimension, the scenario dimension, where the scenarios to be compared are identified by name. This additional \u0026quot;_scenName\u0026quot; dimension must be added in the views under \u0026quot;dataViewsConfig\u0026quot;. If you put that dimension into the \u0026quot;cols\u0026quot; section and do not want to pre-select a scenario (but show all selected scenarios instead), leave the value at null.\n\u0026#34;dataViewsConfig\u0026#34;: { \u0026#34;SomeView\u0026#34; :{ ... \u0026#34;cols\u0026#34;: { \u0026#34;_scenName\u0026#34;: null }, ... } } The additional scenario dimension also changes the appearance of the graphs. Some visualizations that were suitable for normal output may no longer be suitable for displaying multiple scenarios. In such cases, the view configuration (distribution of dimensions in rows/cols/aggregation, etc.) can be adjusted as needed. The Pivot view comparison mode can help prepare the views, just as we prepared the views for the dashboard.\nIn the dashboard, we used stacked bar charts. If you start Compare scenarios in the Pivot view for the \u0026quot;report_output\u0026quot; symbol, it will look like this:\nAs you can see, the values for both scenarios are stacked on top of each other, so it\u0026rsquo;s no longer easy to see if the load is fulfilled. Comparing the scenarios becomes difficult. To fix this, click the icon to add a new view (or the edit button to edit an existing one). In the view settings dialog that opens, find \u0026ldquo;Group stacks by dimension\u0026rdquo; and add the scenario dimension. This will group the stacked bars by scenario.\nWe can also adjust the coloring so that the value for, e.g., \u0026quot;generators\u0026quot;, is the same across all scenarios. The \u0026ldquo;Series Styling\u0026rdquo; tab in the view menu allows to assign custom colors to individual series. So you could assign the same color to each series containing \u0026quot;generators\u0026quot;. Keep in mind that this approach is not generic as the scenario name is part of the dimensions. A generic, scenario-independent approach is to define a color pattern for all series that contain \u0026quot;generators\u0026quot;. 
This can be done in the JSON file itself (read more about this here ).\nThe \u0026quot;Balance\u0026quot; view could look like this:\n\u0026#34;Balance\u0026#34;: { \u0026#34;aggregationFunction\u0026#34;: \u0026#34;sum\u0026#34;, \u0026#34;chartOptions\u0026#34;: { \u0026#34;customChartColors\u0026#34;: { \u0026#34;battery\u0026#34;: [ \u0026#34;#a6cee3\u0026#34;, \u0026#34;#558FA8\u0026#34; ], \u0026#34;external_grid\u0026#34;: [ \u0026#34;#b2df8a\u0026#34;, \u0026#34;#699C26\u0026#34; ], \u0026#34;generators\u0026#34;: [ \u0026#34;#fb9a99\u0026#34;, \u0026#34;#D64A47\u0026#34; ], \u0026#34;load_demand\u0026#34;: [ \u0026#34;#fdbf6f\u0026#34;, \u0026#34;#B77E06\u0026#34; ] }, \u0026#34;groupDimension\u0026#34;: \u0026#34;_scenName\u0026#34;, \u0026#34;multiChartOptions\u0026#34;: { \u0026#34;multiChartRenderer\u0026#34;: \u0026#34;line\u0026#34;, \u0026#34;multiChartStepPlot\u0026#34;: false, \u0026#34;showMultiChartDataMarkers\u0026#34;: false, \u0026#34;stackMultiChartSeries\u0026#34;: \u0026#34;no\u0026#34; }, \u0026#34;multiChartSeries\u0026#34;: \u0026#34;load_demand\u0026#34;, \u0026#34;showXGrid\u0026#34;: true, \u0026#34;showYGrid\u0026#34;: true, \u0026#34;singleStack\u0026#34;: false, \u0026#34;yLogScale\u0026#34;: false, \u0026#34;yTitle\u0026#34;: \u0026#34;power\u0026#34; }, \u0026#34;cols\u0026#34;: { \u0026#34;_scenName\u0026#34;: null, \u0026#34;power_output_header\u0026#34;: null }, \u0026#34;data\u0026#34;: \u0026#34;report_output\u0026#34;, \u0026#34;domainFilter\u0026#34;: { \u0026#34;default\u0026#34;: null }, \u0026#34;pivotRenderer\u0026#34;: \u0026#34;stackedbar\u0026#34;, \u0026#34;rows\u0026#34;: \u0026#34;j\u0026#34;, \u0026#34;tableSummarySettings\u0026#34;: { \u0026#34;colSummaryFunction\u0026#34;: \u0026#34;sum\u0026#34;, \u0026#34;enabled\u0026#34;: false, \u0026#34;rowSummaryFunction\u0026#34;: \u0026#34;sum\u0026#34; }, \u0026#34;userFilter\u0026#34;: \u0026#34;_scenName\u0026#34; } The scenario comparison dashboard is ready! It now displays the data of all selected scenarios in the dashboard we are familiar with. The value boxes are empty by default. You can use a drop down menu above them to select a scenario from which the corresponding values are displayed. Now you can see directly how the costs of the BESS affect the use of the generators etc.\nKey Takeaways Simple Customization: Change chart defaults, rename symbols, and customize input widgets directly from the Configuration mode. Presentation-Ready: Save preferred views so end users see the best visualizations right away. Comprehensive Overview: Although configuring the dashboard requires some effort, it provides a unified view of all scenarios. Easy Comparison: Quickly compare multiple scenarios within a single dashboard for better insights. After exploring all the out-of-the-box customizations for our application, the next step is to dive into the custom code extensions that MIRO offers. This will be the focus of our third and final part, where we will demonstrate how to write custom renderers, widgets, and importer/exporter functions in R. Don\u0026rsquo;t worry if you\u0026rsquo;ve never worked with R before-we\u0026rsquo;ll introduce you to all the necessary R functions.\n","excerpt":"In the second part of our GAMS MIRO walkthrough, we will explore all the configuration options available, from changing general settings to defining default views. 
We\u0026rsquo;ll also demonstrate how to add an interactive dashboard to display our output.","ref":"/blog/2025/07/gams-miro-walkthrough-part-2/","title":"GAMS MIRO Walkthrough Part 2"},{"body":" From GAMSPy Model to GAMS MIRO App In this tutorial, we will explore the powerful features of GAMS MIRO to generate an application tailored to your optimization problem. Step by step, we will build the MIRO application for this gallery example.\nTo be able to follow this tutorial, we assume that you have already worked with GAMS or GAMSPy, as we will start with a given GAMSPy model. The content of the first section is GAMSPy specific, everything after that applies to both GAMS and GAMSPy. So if you are working with a GAMS model, you can check the documentation for the syntax, still it might be helpful to follow the tutorial for additional explanations. Otherwise, you just need to have MIRO installed (this tutorial is based on version 2.12.0, if you are using an older version, some of the features we will go through may be missing), and some R knowledge might help in the third part of the tutorial, but is not required. All necessary R functions will be explained, so if you have worked with a similar language before, you are good to go!\nAs already mentioned, you can start with either a GAMS or a GAMSPy implementation; we\u0026rsquo;ll be working with a GAMSPy model. Our first step will be to define the application\u0026rsquo;s inputs and outputs - this is the only part of the process that differs depending on whether you are using GAMS or GAMSPy. After that, the configuration process is the same for both.\nWe\u0026rsquo;ll start by showing you how to specify inputs and outputs in your GAMSPy model. Then we will see how to visualize data in MIRO using only these definitions. This step can be extremely helpful during model development: it allows you to quickly plot and inspect the output data to make sure your results make sense. If something doesn\u0026rsquo;t look right, you have a clear starting point for investigating potential errors in you model implementation.\nAfter we\u0026rsquo;ve covered the basics of visualization, in the second part of this tutorial we\u0026rsquo;ll move on to the Configuration Mode. Here you can configure many default settings without editing any code, making it easy to customize your application for different needs. Since built-in options are sometimes not enough, the third part of this tutorial will show you how to add custom renderers and widgets to give you maximum control over the user interface. Finally, we\u0026rsquo;ll examine advanced customization tips and tricks that can make your MIRO application even more powerful and tailored to your needs.\nFrom GAMSPy Model to GAMS MIRO App Implement the Model Model Input Model Output Effective Data Validation Using Log Files Basic Application - Rapid Prototyping Input Output Key Takeaways Implement the Model The starting point for building your MIRO application is the implementation of your model using either GAMS or GAMSPy. As mentioned, we will be using a GAMSPy model here. If you would like to see how the necessary code modifications would look in GAMS, please refer to the documentation .\nOur example model is a “Battery Energy Storage System (BESS) sizing problem,” based on an example from NAG , available on their GitHub (BESS.ipynb ). 
The goal is to optimize a city\u0026rsquo;s hourly energy schedule by identifying the most cost-effective combination of energy sources, which includes leveraging a BESS to store low-cost energy during off-peak hours and release it when demand is high. By assessing different storage capacities and discharge rates, the model pinpoints the configuration that minimizes overall energy costs while ensuring demand is consistently met.\nBefore diving in, we recommend that you review the mathematical description in the introduction to the finished application provided in the gallery . We will be referring directly to the variable names introduced there.\nGAMSPy model code import pandas as pd import sys from gamspy import ( Container, Alias, Equation, Model, Parameter, Sense, Set, Sum, Variable, Ord, Options, ModelStatus, SolveStatus, ) def main(): m = Container() # Generator parameters generator_specifications_input = pd.DataFrame( [ [\u0026#34;gen0\u0026#34;, 1.1, 220, 50, 100, 4, 2], [\u0026#34;gen1\u0026#34;, 1.3, 290, 80, 190, 4, 2], [\u0026#34;gen2\u0026#34;, 0.9, 200, 10, 70, 4, 2], ], columns=[ \u0026#34;i\u0026#34;, \u0026#34;cost_per_unit\u0026#34;, \u0026#34;fixed_cost\u0026#34;, \u0026#34;min_power_output\u0026#34;, \u0026#34;max_power_output\u0026#34;, \u0026#34;min_up_time\u0026#34;, \u0026#34;min_down_time\u0026#34;, ], ) # Load demand to be fulfilled by the energy management system # combine with cost external grid, to have one source of truth for the hours (Set j) timewise_load_demand_and_cost_external_grid_input = pd.DataFrame( [ [\u0026#34;hour00\u0026#34;, 200, 1.5], [\u0026#34;hour01\u0026#34;, 180, 1.0], [\u0026#34;hour02\u0026#34;, 170, 1.0], [\u0026#34;hour03\u0026#34;, 160, 1.0], [\u0026#34;hour04\u0026#34;, 150, 1.0], [\u0026#34;hour05\u0026#34;, 170, 1.0], [\u0026#34;hour06\u0026#34;, 190, 1.2], [\u0026#34;hour07\u0026#34;, 210, 1.8], [\u0026#34;hour08\u0026#34;, 290, 2.1], [\u0026#34;hour09\u0026#34;, 360, 1.9], [\u0026#34;hour10\u0026#34;, 370, 1.8], [\u0026#34;hour11\u0026#34;, 350, 1.6], [\u0026#34;hour12\u0026#34;, 310, 1.6], [\u0026#34;hour13\u0026#34;, 340, 1.6], [\u0026#34;hour14\u0026#34;, 390, 1.8], [\u0026#34;hour15\u0026#34;, 400, 1.9], [\u0026#34;hour16\u0026#34;, 420, 2.1], [\u0026#34;hour17\u0026#34;, 500, 3.0], [\u0026#34;hour18\u0026#34;, 440, 2.1], [\u0026#34;hour19\u0026#34;, 430, 1.9], [\u0026#34;hour20\u0026#34;, 420, 1.8], [\u0026#34;hour21\u0026#34;, 380, 1.6], [\u0026#34;hour22\u0026#34;, 340, 1.2], [\u0026#34;hour23\u0026#34;, 320, 1.2], ], columns=[\u0026#34;j\u0026#34;, \u0026#34;load_demand\u0026#34;, \u0026#34;cost_external_grid\u0026#34;], ) # Set i = Set( m, name=\u0026#34;i\u0026#34;, records=generator_specifications_input[\u0026#34;i\u0026#34;], description=\u0026#34;generators\u0026#34;, ) j = Set( m, name=\u0026#34;j\u0026#34;, records=timewise_load_demand_and_cost_external_grid_input[\u0026#34;j\u0026#34;], description=\u0026#34;hours\u0026#34;, ) t = Alias(m, name=\u0026#34;t\u0026#34;, alias_with=j) # Data # Generator parameters gen_cost_per_unit = Parameter( m, name=\u0026#34;gen_cost_per_unit\u0026#34;, domain=[i], records=generator_specifications_input[[\u0026#34;i\u0026#34;, \u0026#34;cost_per_unit\u0026#34;]], description=\u0026#34;cost per unit of generator i\u0026#34;, ) gen_fixed_cost = Parameter( m, name=\u0026#34;gen_fixed_cost\u0026#34;, domain=[i], records=generator_specifications_input[[\u0026#34;i\u0026#34;, \u0026#34;fixed_cost\u0026#34;]], description=\u0026#34;fixed cost of generator i\u0026#34;, ) 
gen_min_power_output = Parameter( m, name=\u0026#34;gen_min_power_output\u0026#34;, domain=[i], records=generator_specifications_input[[\u0026#34;i\u0026#34;, \u0026#34;min_power_output\u0026#34;]], description=\u0026#34;minimal power output of generator i\u0026#34;, ) gen_max_power_output = Parameter( m, name=\u0026#34;gen_max_power_output\u0026#34;, domain=[i], records=generator_specifications_input[[\u0026#34;i\u0026#34;, \u0026#34;max_power_output\u0026#34;]], description=\u0026#34;maximal power output of generator i\u0026#34;, ) gen_min_up_time = Parameter( m, name=\u0026#34;gen_min_up_time\u0026#34;, domain=[i], records=generator_specifications_input[[\u0026#34;i\u0026#34;, \u0026#34;min_up_time\u0026#34;]], description=\u0026#34;minimal up time of generator i\u0026#34;, ) gen_min_down_time = Parameter( m, name=\u0026#34;gen_min_down_time\u0026#34;, domain=[i], records=generator_specifications_input[[\u0026#34;i\u0026#34;, \u0026#34;min_down_time\u0026#34;]], description=\u0026#34;minimal down time of generator i\u0026#34;, ) # Battery parameters cost_bat_power = Parameter(m, \u0026#34;cost_bat_power\u0026#34;, records=1, is_miro_input=True) cost_bat_energy = Parameter(m, \u0026#34;cost_bat_energy\u0026#34;, records=2, is_miro_input=True) # Load demand and external grid load_demand = Parameter( m, name=\u0026#34;load_demand\u0026#34;, domain=[j], description=\u0026#34;load demand at hour j\u0026#34; ) cost_external_grid = Parameter( m, name=\u0026#34;cost_external_grid\u0026#34;, domain=[j], description=\u0026#34;cost of the external grid at hour j\u0026#34;, ) max_input_external_grid = Parameter( m, name=\u0026#34;max_input_external_grid\u0026#34;, records=10, description=\u0026#34;maximal power that can be imported from the external grid every hour\u0026#34;, ) # Variable # Generator gen_power = Variable( m, name=\u0026#34;gen_power\u0026#34;, type=\u0026#34;positive\u0026#34;, domain=[i, j], description=\u0026#34;Dispatched power from generator i at hour j\u0026#34;, ) gen_active = Variable( m, name=\u0026#34;gen_active\u0026#34;, type=\u0026#34;binary\u0026#34;, domain=[i, j], description=\u0026#34;is generator i active at hour j\u0026#34;, ) # Battery battery_power = Variable( m, name=\u0026#34;battery_power\u0026#34;, domain=[j], description=\u0026#34;power charged or discharged from the battery at hour j\u0026#34;, ) battery_delivery_rate = Variable( m, name=\u0026#34;battery_delivery_rate\u0026#34;, description=\u0026#34;power (delivery) rate of the battery energy system\u0026#34;, ) battery_storage = Variable( m, name=\u0026#34;battery_storage\u0026#34;, description=\u0026#34;energy (storage) rate of the battery energy system\u0026#34;, ) # External grid external_grid_power = Variable( m, name=\u0026#34;external_grid_power\u0026#34;, type=\u0026#34;positive\u0026#34;, domain=[j], description=\u0026#34;power imported from the external grid at hour j\u0026#34;, ) # Equation fulfill_load = Equation( m, name=\u0026#34;fulfill_load\u0026#34;, domain=[j], description=\u0026#34;load balance needs to be met very hour j\u0026#34;, ) gen_above_min_power = Equation( m, name=\u0026#34;gen_above_min_power\u0026#34;, domain=[i, j], description=\u0026#34;generators power should be above the minimal output\u0026#34;, ) gen_below_max_power = Equation( m, name=\u0026#34;gen_below_max_power\u0026#34;, domain=[i, j], description=\u0026#34;generators power should be below the maximal output\u0026#34;, ) gen_above_min_down_time = Equation( m, name=\u0026#34;gen_above_min_down_time\u0026#34;, 
domain=[i, j], description=\u0026#34;generators down time should be above the minimal down time\u0026#34;, ) gen_above_min_up_time = Equation( m, name=\u0026#34;gen_above_min_up_time\u0026#34;, domain=[i, j], description=\u0026#34;generators up time should be above the minimal up time\u0026#34;, ) battery_above_min_delivery = Equation( m, name=\u0026#34;battery_above_min_delivery\u0026#34;, domain=[j], description=\u0026#34;battery delivery rate (charge rate) above min power rate\u0026#34;, ) battery_below_max_delivery = Equation( m, name=\u0026#34;battery_below_max_delivery\u0026#34;, domain=[j], description=\u0026#34;battery delivery rate below max power rate\u0026#34;, ) battery_above_min_storage = Equation( m, name=\u0026#34;battery_above_min_storage\u0026#34;, domain=[t], description=\u0026#34;battery storage above negative energy rate (since negative power charges the battery)\u0026#34;, ) battery_below_max_storage = Equation( m, name=\u0026#34;battery_below_max_storage\u0026#34;, domain=[t], description=\u0026#34;sum over battery delivery below zero (cant deliver energy that is not stored)\u0026#34;, ) external_power_upper_limit = Equation( m, name=\u0026#34;external_power_upper_limit\u0026#34;, domain=[j], description=\u0026#34; input from the external grid is limited\u0026#34;, ) fulfill_load[j] = ( Sum(i, gen_power[i, j]) + battery_power[j] + external_grid_power[j] == load_demand[j] ) gen_above_min_power[i, j] = ( gen_min_power_output[i] * gen_active[i, j] \u0026lt;= gen_power[i, j] ) gen_below_max_power[i, j] = ( gen_power[i, j] \u0026lt;= gen_max_power_output[i] * gen_active[i, j] ) # if j=0 -\u0026gt; j.lag(1) = 0 which doesn\u0026#39;t brake the equation, # since generator is of at start, resulting in negative right side, therefore the sum is always above gen_above_min_down_time[i, j] = Sum( t.where[(Ord(t) \u0026gt;= Ord(j)) \u0026amp; (Ord(t) \u0026lt;= (Ord(j) + gen_min_down_time[i] - 1))], 1 - gen_active[i, t], ) \u0026gt;= gen_min_down_time[i] * (gen_active[i, j.lag(1)] - gen_active[i, j]) # and for up it correctly starts the check that if its turned on in the first step # it has to stay on for the min up time gen_above_min_up_time[i, j] = Sum( t.where[(Ord(t) \u0026gt;= Ord(j)) \u0026amp; (Ord(t) \u0026lt;= (Ord(j) + gen_min_up_time[i] - 1))], gen_active[i, t], ) \u0026gt;= gen_min_up_time[i] * (gen_active[i, j] - gen_active[i, j.lag(1)]) battery_above_min_delivery[j] = -battery_delivery_rate \u0026lt;= battery_power[j] battery_below_max_delivery[j] = battery_power[j] \u0026lt;= battery_delivery_rate battery_above_min_storage[t] = -battery_storage \u0026lt;= Sum( j.where[Ord(j) \u0026lt;= Ord(t)], battery_power[j] ) battery_below_max_storage[t] = Sum(j.where[Ord(j) \u0026lt;= Ord(t)], battery_power[j]) \u0026lt;= 0 external_power_upper_limit[j] = external_grid_power[j] \u0026lt;= max_input_external_grid obj = ( Sum( j, Sum(i, gen_cost_per_unit[i] * gen_power[i, j] + gen_fixed_cost[i]) + cost_external_grid[j] * external_grid_power[j], ) + cost_bat_power * battery_delivery_rate + cost_bat_energy * battery_storage ) # Solve bess = Model( m, name=\u0026#34;bess\u0026#34;, equations=m.getEquations(), problem=\u0026#34;MIP\u0026#34;, sense=Sense.MIN, objective=obj, ) bess.solve( solver=\u0026#34;CPLEX\u0026#34;, output=sys.stdout, options=Options(equation_listing_limit=1, relative_optimality_gap=0), ) if bess.solve_status not in [ SolveStatus.NormalCompletion, SolveStatus.TerminatedBySolver, ] or bess.status not in [ModelStatus.OptimalGlobal, ModelStatus.Integer]: 
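# If the solver did not return an optimal (or integer-feasible) solution, report it in the log and stop.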
print(\u0026#34;No solution exists for your input data.\\n\u0026#34;) raise Exception(\u0026#34;Infeasible.\u0026#34;) if __name__ == \u0026#34;__main__\u0026#34;: main() Model Input Let\u0026rsquo;s start by defining some basic inputs. You can see that we begin with three scalar parameters, each of which has the additional is_miro_input=True option in the definition:\n# Battery parameters cost_bat_power = Parameter(m, \u0026#34;cost_bat_power\u0026#34;, records=1, is_miro_input=True) cost_bat_energy = Parameter(m, \u0026#34;cost_bat_energy\u0026#34;, records=2, is_miro_input=True) # Load demand and external grid max_input_external_grid = Parameter( m, name=\u0026#34;max_input_external_grid\u0026#34;, records=10, is_miro_input=True, description=\u0026#34;maximal power that can be imported from the external grid every hour\u0026#34;, ) For the generator specifications and schedule inputs, there are a few extra steps. The model relies on two sets: one for possible generators and another for hours in which load demand must be met. Since these sets are not fixed but should be part of the input, we use Domain Forwarding - an approach where the set is implicitly defined by one parameter.\nBecause multiple parameters rely on these sets and we want a single source of truth, we need to combine them into a single table in our MIRO application (one for generator specifications, another for the schedule). To achieve this, we define an additional set for the column headers:\ngenerator_spec_header = Set( m, name=\u0026#34;generator_spec_header\u0026#34;, records=[ \u0026#34;cost_per_unit\u0026#34;, \u0026#34;fixed_cost\u0026#34;, \u0026#34;min_power_output\u0026#34;, \u0026#34;max_power_output\u0026#34;, \u0026#34;min_up_time\u0026#34;, \u0026#34;min_down_time\u0026#34;, ], ) We then create a parameter to hold all the relevant information:\ngenerator_specifications = Parameter( m, name=\u0026#34;generator_specifications\u0026#34;, domain=[i, generator_spec_header], domain_forwarding=[True, False], records=generator_specifications_input.melt( id_vars=\u0026#34;i\u0026#34;, var_name=\u0026#34;generator_spec_header\u0026#34; ), is_miro_input=True, is_miro_table=True, description=\u0026#34;specifications of each generator\u0026#34;, ) Notice that is_miro_input=True makes the parameter an input to the MIRO application, while is_miro_table=True displays the data in table format . The key detail is domain_forwarding=[True, False], which ensures that set elements for generators come from the MIRO application (the header names remain fixed, hence False). We still use our initial data to populate these specifications, but we transform it using melt() so that it matches the new format of only two columns: \u0026quot;i\u0026quot; and \u0026quot;generator_spec_header\u0026quot;.\nSince we are now forwarding the domain of set i through this table, we no longer specify its records. The same goes for any parameters that rely on i (e.g., gen_cost_per_unit). 
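As a quick aside (this snippet is not part of the model, just a plain pandas illustration), here is roughly what that melt() call produces: the wide generator table turns into a long table with the two domain columns i and generator_spec_header plus a value column, which is the long-format record layout a two-dimensional GAMSPy parameter expects.
import pandas as pd

# Same wide-format generator data as above
generator_specifications_input = pd.DataFrame(
    [
        ["gen0", 1.1, 220, 50, 100, 4, 2],
        ["gen1", 1.3, 290, 80, 190, 4, 2],
        ["gen2", 0.9, 200, 10, 70, 4, 2],
    ],
    columns=["i", "cost_per_unit", "fixed_cost", "min_power_output",
             "max_power_output", "min_up_time", "min_down_time"],
)

# Reshape: one row per (i, generator_spec_header) pair plus a value column
records = generator_specifications_input.melt(id_vars="i", var_name="generator_spec_header")
print(records.head())
# Output (approximately):
#       i generator_spec_header  value
# 0  gen0         cost_per_unit    1.1
# 1  gen1         cost_per_unit    1.3
# 2  gen2         cost_per_unit    0.9
# 3  gen0            fixed_cost  220.0
# 4  gen1            fixed_cost  290.0
Back to the model: the records of i and of the i-dependent parameters are no longer given directly.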
Instead, we assign them by referencing the new combined parameter:\ni = Set( m, name=\u0026#34;i\u0026#34;, - records=generator_specifications_input[\u0026#34;i\u0026#34;], description=\u0026#34;generators\u0026#34;, ) gen_cost_per_unit = Parameter( m, name=\u0026#34;gen_cost_per_unit\u0026#34;, domain=[i], - records=generator_specifications_input[[\u0026#34;i\u0026#34;, \u0026#34;cost_per_unit\u0026#34;]], description=\u0026#34;cost per unit of generator i\u0026#34;, ) + gen_cost_per_unit[i] = generator_specifications[i, \u0026#34;cost_per_unit\u0026#34;] We apply the same pattern to other parameters that depend on i. Likewise, for hour-dependent parameters (like load_demand and cost_external_grid), we create a single source of truth for the hour set by combining them into one parameter and making the same modifications.\nGiven the input, we move on to the output.\nModel Output When implementing the model, it can be helpful to flag variables as outputs by adding is_miro_output=True. After solving, we can then view the calculated variable values right away, making it easier to spot any remaining model errors.\ngen_power = Variable( m, name=\u0026#34;gen_power\u0026#34;, type=\u0026#34;positive\u0026#34;, domain=[i, j], description=\u0026#34;dispatched power from generator i at hour j\u0026#34;, is_miro_output=True, ) In general, we can designate any variable or parameter as an MIRO output. When implementing the model, it makes sense to simply define all variables as output, so you can easily visualize the results. Sometimes it makes sense to define parameters as outputs that depend on the variables. A straightforward example in our model is to create dedicated parameters for the three cost components, allowing us to display these values directly in the MIRO application:\ntotal_cost_gen = Parameter( m, \u0026#34;total_cost_gen\u0026#34;, is_miro_output=True, description=\u0026#34;total cost of the generators\u0026#34;, ) total_cost_gen[...] = Sum( j, Sum(i, gen_cost_per_unit[i] * gen_power.l[i, j] + gen_fixed_cost[i]) ) We apply this same approach for the other power sources and combine them:\nCosts for the other power sources total_cost_battery = Parameter( m, \u0026#34;total_cost_battery\u0026#34;, is_miro_output=True, description=\u0026#34;total cost of the BESS\u0026#34;, ) total_cost_battery[...] = ( cost_bat_power * battery_delivery_rate.l + cost_bat_energy * battery_storage.l ) total_cost_extern = Parameter( m, \u0026#34;total_cost_extern\u0026#34;, is_miro_output=True, description=\u0026#34;total cost for the imported power\u0026#34;, ) total_cost_extern[...] = Sum( j, cost_external_grid[j] * external_grid_power.l[j], ) total_cost = Parameter( m, \u0026#34;total_cost\u0026#34;, is_miro_output=True, description=\u0026#34;total cost to fulfill the load demand\u0026#34;, ) total_cost[...] 
= total_cost_gen + total_cost_battery + total_cost_extern We also combine our power variables with the load demand input into a single output parameter to later show how the sum of all power flows meets the load demand:\n# Power output power_output_header = Set( m, name=\u0026#34;power_output_header\u0026#34;, records=[\u0026#34;battery\u0026#34;, \u0026#34;external_grid\u0026#34;, \u0026#34;generators\u0026#34;, \u0026#34;load_demand\u0026#34;], ) report_output = Parameter( m, name=\u0026#34;report_output\u0026#34;, domain=[j, power_output_header], description=\u0026#34;optimal combination of incoming power flows\u0026#34;, is_miro_output=True, ) report_output[j, \u0026#34;generators\u0026#34;] = Sum(i, gen_power.l[i, j]) report_output[j, \u0026#34;battery\u0026#34;] = battery_power.l[j] report_output[j, \u0026#34;external_grid\u0026#34;] = external_grid_power.l[j] report_output[j, \u0026#34;load_demand\u0026#34;] = load_demand[j] Now, we can launch MIRO to see our first fully interactive modeling application!\ngamspy run miro --path \u0026lt;path_to_your_MIRO_installation\u0026gt; --model \u0026lt;path_to_your_model\u0026gt; After starting MIRO, the application should look like this:\nEffective Data Validation Using Log Files Finally, we will briefly discuss data validation. This is critical to ensuring the accuracy and reliability of optimization models. Log files are key to checking the consistency of input data, and generating reports on inconsistencies helps prevent errors and user frustration. Here we will only verify that our input values are all non-negative. While finding effective validation checks can be challenging, clearly identifying the constraints or values causing infeasibility can significantly improve the user experience.\nIn MIRO, you have the option to create a custom log file. However, since we are using GAMSPy, we can also directly write to stdout and log there. And if we here follow the specified MIRO log syntax , here any invalid data can be highlighted directly above the corresponding input data sheet in MIRO.\nThe syntax that must be used for MIRO to jump directly to the table with the incorrect data is as follows:\nsymbolname:: Error message Try for yourself how a simple verification of the sign of the input values might look. Keep in mind that you should validate the data before attempting to solve the model. 
If the validation fails, specify which value caused the failure and raise an exception, as there\u0026rsquo;s no need to solve the model in this case.\nA possible data validation no_negative_gen_spec = generator_specifications.records[generator_specifications.records[\u0026#34;value\u0026#34;] \u0026lt; 0] no_negative_load = load_demand.records[load_demand.records[\u0026#34;value\u0026#34;] \u0026lt; 0] no_negative_cost = cost_external_grid.records[ cost_external_grid.records[\u0026#34;value\u0026#34;] \u0026lt; 0 ] print( \u0026#34;\u0026#34;\u0026#34;------------------------------------\\n Validating data\\n------------------------------------\\n\u0026#34;\u0026#34;\u0026#34; ) errors = False if not no_negative_gen_spec.empty: print( \u0026#34;generator_specifications:: No negative values for the generator specifications allowed!\\n\u0026#34; ) for _, row in no_negative_gen_spec.iterrows(): print(f\u0026#39;{row[\u0026#34;i\u0026#34;]} has a negative value.\\n\u0026#39;) errors = True if not no_negative_load.empty: print( \u0026#34;timewise_load_demand_and_cost_external_grid_data:: No negative load demand allowed!\\n\u0026#34; ) for _, row in no_negative_load.iterrows(): print(f\u0026#39;{row[\u0026#34;j\u0026#34;]} has negative load demand.\\n\u0026#39;) errors = True if not no_negative_cost.empty: print( \u0026#34;timewise_load_demand_and_cost_external_grid_data:: No negative cost allowed!\\n\u0026#34; ) for _, row in no_negative_cost.iterrows(): print(f\u0026#39;{row[\u0026#34;j\u0026#34;]} has negative external grid cost.\\n\u0026#39;) errors = True if errors: raise Exception(\u0026#34;Data errors detected\u0026#34;) print(\u0026#34;Data ok\\n\u0026#34;) Full updated GAMSPy model import pandas as pd import sys from gamspy import ( Container, Alias, Equation, Model, Parameter, Sense, Set, Sum, Variable, Ord, Options, ModelStatus, SolveStatus, ) def main(): m = Container() # Generator parameters generator_specifications_input = pd.DataFrame( [ [\u0026#34;gen0\u0026#34;, 1.1, 220, 50, 100, 4, 2], [\u0026#34;gen1\u0026#34;, 1.3, 290, 80, 190, 4, 2], [\u0026#34;gen2\u0026#34;, 0.9, 200, 10, 70, 4, 2], ], columns=[ \u0026#34;i\u0026#34;, \u0026#34;cost_per_unit\u0026#34;, \u0026#34;fixed_cost\u0026#34;, \u0026#34;min_power_output\u0026#34;, \u0026#34;max_power_output\u0026#34;, \u0026#34;min_up_time\u0026#34;, \u0026#34;min_down_time\u0026#34;, ], ) # Load demand to be fulfilled by the energy management system # combine with cost external grid, to have one source of truth for the hours (Set j) timewise_load_demand_and_cost_external_grid_input = pd.DataFrame( [ [\u0026#34;hour00\u0026#34;, 200, 1.5], [\u0026#34;hour01\u0026#34;, 180, 1.0], [\u0026#34;hour02\u0026#34;, 170, 1.0], [\u0026#34;hour03\u0026#34;, 160, 1.0], [\u0026#34;hour04\u0026#34;, 150, 1.0], [\u0026#34;hour05\u0026#34;, 170, 1.0], [\u0026#34;hour06\u0026#34;, 190, 1.2], [\u0026#34;hour07\u0026#34;, 210, 1.8], [\u0026#34;hour08\u0026#34;, 290, 2.1], [\u0026#34;hour09\u0026#34;, 360, 1.9], [\u0026#34;hour10\u0026#34;, 370, 1.8], [\u0026#34;hour11\u0026#34;, 350, 1.6], [\u0026#34;hour12\u0026#34;, 310, 1.6], [\u0026#34;hour13\u0026#34;, 340, 1.6], [\u0026#34;hour14\u0026#34;, 390, 1.8], [\u0026#34;hour15\u0026#34;, 400, 1.9], [\u0026#34;hour16\u0026#34;, 420, 2.1], [\u0026#34;hour17\u0026#34;, 500, 3.0], [\u0026#34;hour18\u0026#34;, 440, 2.1], [\u0026#34;hour19\u0026#34;, 430, 1.9], [\u0026#34;hour20\u0026#34;, 420, 1.8], [\u0026#34;hour21\u0026#34;, 380, 1.6], [\u0026#34;hour22\u0026#34;, 340, 1.2], 
[\u0026#34;hour23\u0026#34;, 320, 1.2], ], columns=[\u0026#34;j\u0026#34;, \u0026#34;load_demand\u0026#34;, \u0026#34;cost_external_grid\u0026#34;], ) # Set i = Set( m, name=\u0026#34;i\u0026#34;, description=\u0026#34;generators\u0026#34;, ) j = Set( m, name=\u0026#34;j\u0026#34;, description=\u0026#34;hours\u0026#34;, ) t = Alias(m, name=\u0026#34;t\u0026#34;, alias_with=j) generator_spec_header = Set( m, name=\u0026#34;generator_spec_header\u0026#34;, records=[ \u0026#34;cost_per_unit\u0026#34;, \u0026#34;fixed_cost\u0026#34;, \u0026#34;min_power_output\u0026#34;, \u0026#34;max_power_output\u0026#34;, \u0026#34;min_up_time\u0026#34;, \u0026#34;min_down_time\u0026#34;, ], ) timewise_header = Set( m, name=\u0026#34;timewise_header\u0026#34;, records=[\u0026#34;load_demand\u0026#34;, \u0026#34;cost_external_grid\u0026#34;] ) # Data # Generator parameters generator_specifications = Parameter( m, name=\u0026#34;generator_specifications\u0026#34;, domain=[i, generator_spec_header], domain_forwarding=[True, False], records=generator_specifications_input.melt( id_vars=\u0026#34;i\u0026#34;, var_name=\u0026#34;generator_spec_header\u0026#34; ), is_miro_input=True, is_miro_table=True, description=\u0026#34;Specifications of each generator\u0026#34;, ) # To improve readability of the equations we extract the individual columns. # Since we want a single source of truth we combine them for MIRO. gen_cost_per_unit = Parameter( m, name=\u0026#34;gen_cost_per_unit\u0026#34;, domain=[i], description=\u0026#34;cost per unit of generator i\u0026#34;, ) gen_fixed_cost = Parameter( m, name=\u0026#34;gen_fixed_cost\u0026#34;, domain=[i], description=\u0026#34;fixed cost of generator i\u0026#34; ) gen_min_power_output = Parameter( m, name=\u0026#34;gen_min_power_output\u0026#34;, domain=[i], description=\u0026#34;minimal power output of generator i\u0026#34;, ) gen_max_power_output = Parameter( m, name=\u0026#34;gen_max_power_output\u0026#34;, domain=[i], description=\u0026#34;maximal power output of generator i\u0026#34;, ) gen_min_up_time = Parameter( m, name=\u0026#34;gen_min_up_time\u0026#34;, domain=[i], description=\u0026#34;minimal up time of generator i\u0026#34;, ) gen_min_down_time = Parameter( m, name=\u0026#34;gen_min_down_time\u0026#34;, domain=[i], description=\u0026#34;minimal down time of generator i\u0026#34;, ) gen_cost_per_unit[i] = generator_specifications[i, \u0026#34;cost_per_unit\u0026#34;] gen_fixed_cost[i] = generator_specifications[i, \u0026#34;fixed_cost\u0026#34;] gen_min_power_output[i] = generator_specifications[i, \u0026#34;min_power_output\u0026#34;] gen_max_power_output[i] = generator_specifications[i, \u0026#34;max_power_output\u0026#34;] gen_min_up_time[i] = generator_specifications[i, \u0026#34;min_up_time\u0026#34;] gen_min_down_time[i] = generator_specifications[i, \u0026#34;min_down_time\u0026#34;] # Battery parameters cost_bat_power = Parameter(m, \u0026#34;cost_bat_power\u0026#34;, records=1, is_miro_input=True) cost_bat_energy = Parameter(m, \u0026#34;cost_bat_energy\u0026#34;, records=2, is_miro_input=True) # Load demand and external grid timewise_load_demand_and_cost_external_grid_data = Parameter( m, name=\u0026#34;timewise_load_demand_and_cost_external_grid_data\u0026#34;, domain=[j, timewise_header], domain_forwarding=[True, False], records=timewise_load_demand_and_cost_external_grid_input.melt( id_vars=\u0026#34;j\u0026#34;, var_name=\u0026#34;timewise_header\u0026#34; ), is_miro_input=True, is_miro_table=True, description=\u0026#34;Timeline for load demand and 
cost of the external grid.\u0026#34;, ) load_demand = Parameter( m, name=\u0026#34;load_demand\u0026#34;, domain=[j], description=\u0026#34;load demand at hour j\u0026#34; ) cost_external_grid = Parameter( m, name=\u0026#34;cost_external_grid\u0026#34;, domain=[j], description=\u0026#34;cost of the external grid at hour j\u0026#34;, ) load_demand[j] = timewise_load_demand_and_cost_external_grid_data[j, \u0026#34;load_demand\u0026#34;] cost_external_grid[j] = timewise_load_demand_and_cost_external_grid_data[ j, \u0026#34;cost_external_grid\u0026#34; ] max_input_external_grid = Parameter( m, name=\u0026#34;max_input_external_grid\u0026#34;, records=10, is_miro_input=True, description=\u0026#34;maximal power that can be imported from the external grid every hour\u0026#34;, ) no_negative_gen_spec = generator_specifications.records[ generator_specifications.records[\u0026#34;value\u0026#34;] \u0026lt; 0 ] no_negative_load = load_demand.records[load_demand.records[\u0026#34;value\u0026#34;] \u0026lt; 0] no_negative_cost = cost_external_grid.records[ cost_external_grid.records[\u0026#34;value\u0026#34;] \u0026lt; 0 ] print( \u0026#34;\u0026#34;\u0026#34;------------------------------------\\n Validating data\\n------------------------------------\\n\u0026#34;\u0026#34;\u0026#34; ) errors = False if not no_negative_gen_spec.empty: print( \u0026#34;generator_specifications:: No negative values for the generator specifications allowed!\\n\u0026#34; ) for _, row in no_negative_gen_spec.iterrows(): print(f\u0026#39;{row[\u0026#34;i\u0026#34;]} has a negative value.\\n\u0026#39;) errors = True if not no_negative_load.empty: print( \u0026#34;timewise_load_demand_and_cost_external_grid_data:: No negative load demand allowed!\\n\u0026#34; ) for _, row in no_negative_load.iterrows(): print(f\u0026#39;{row[\u0026#34;j\u0026#34;]} has negative load demand.\\n\u0026#39;) errors = True if not no_negative_cost.empty: print( \u0026#34;timewise_load_demand_and_cost_external_grid_data:: No negative cost allowed!\\n\u0026#34; ) for _, row in no_negative_cost.iterrows(): print(f\u0026#39;{row[\u0026#34;j\u0026#34;]} has negative external grid cost.\\n\u0026#39;) errors = True if errors: raise Exception(\u0026#34;Data errors detected\u0026#34;) print(\u0026#34;Data ok\\n\u0026#34;) # Variable # Generator gen_power = Variable( m, name=\u0026#34;gen_power\u0026#34;, type=\u0026#34;positive\u0026#34;, domain=[i, j], description=\u0026#34;Dispatched power from generator i at hour j\u0026#34;, is_miro_output=True, ) gen_active = Variable( m, name=\u0026#34;gen_active\u0026#34;, type=\u0026#34;binary\u0026#34;, domain=[i, j], description=\u0026#34;is generator i active at hour j\u0026#34;, ) # Battery battery_power = Variable( m, name=\u0026#34;battery_power\u0026#34;, domain=[j], description=\u0026#34;power charged or discharged from the battery at hour j\u0026#34;, is_miro_output=True, ) battery_delivery_rate = Variable( m, name=\u0026#34;battery_delivery_rate\u0026#34;, description=\u0026#34;power (delivery) rate of the battery energy system\u0026#34;, is_miro_output=True, ) battery_storage = Variable( m, name=\u0026#34;battery_storage\u0026#34;, description=\u0026#34;energy (storage) rate of the battery energy system\u0026#34;, is_miro_output=True, ) # External grid external_grid_power = Variable( m, name=\u0026#34;external_grid_power\u0026#34;, type=\u0026#34;positive\u0026#34;, domain=[j], description=\u0026#34;power imported from the external grid at hour j\u0026#34;, is_miro_output=True, ) # Equation fulfill_load = 
Equation( m, name=\u0026#34;fulfill_load\u0026#34;, domain=[j], description=\u0026#34;load balance needs to be met very hour j\u0026#34;, ) gen_above_min_power = Equation( m, name=\u0026#34;gen_above_min_power\u0026#34;, domain=[i, j], description=\u0026#34;generators power should be above the minimal output\u0026#34;, ) gen_below_max_power = Equation( m, name=\u0026#34;gen_below_max_power\u0026#34;, domain=[i, j], description=\u0026#34;generators power should be below the maximal output\u0026#34;, ) gen_above_min_down_time = Equation( m, name=\u0026#34;gen_above_min_down_time\u0026#34;, domain=[i, j], description=\u0026#34;generators down time should be above the minimal down time\u0026#34;, ) gen_above_min_up_time = Equation( m, name=\u0026#34;gen_above_min_up_time\u0026#34;, domain=[i, j], description=\u0026#34;generators up time should be above the minimal up time\u0026#34;, ) battery_above_min_delivery = Equation( m, name=\u0026#34;battery_above_min_delivery\u0026#34;, domain=[j], description=\u0026#34;battery delivery rate (charge rate) above min power rate\u0026#34;, ) battery_below_max_delivery = Equation( m, name=\u0026#34;battery_below_max_delivery\u0026#34;, domain=[j], description=\u0026#34;battery delivery rate below max power rate\u0026#34;, ) battery_above_min_storage = Equation( m, name=\u0026#34;battery_above_min_storage\u0026#34;, domain=[t], description=\u0026#34;battery storage above negative energy rate (since negative power charges the battery)\u0026#34;, ) battery_below_max_storage = Equation( m, name=\u0026#34;battery_below_max_storage\u0026#34;, domain=[t], description=\u0026#34;sum over battery delivery below zero (cant deliver energy that is not stored)\u0026#34;, ) external_power_upper_limit = Equation( m, name=\u0026#34;external_power_upper_limit\u0026#34;, domain=[j], description=\u0026#34; input from the external grid is limited\u0026#34;, ) fulfill_load[j] = ( Sum(i, gen_power[i, j]) + battery_power[j] + external_grid_power[j] == load_demand[j] ) gen_above_min_power[i, j] = ( gen_min_power_output[i] * gen_active[i, j] \u0026lt;= gen_power[i, j] ) gen_below_max_power[i, j] = ( gen_power[i, j] \u0026lt;= gen_max_power_output[i] * gen_active[i, j] ) # if j=0 -\u0026gt; j.lag(1) = 0 which doesn\u0026#39;t brake the equation, # since generator is of at start, resulting in negative right side, therefore the sum is always above gen_above_min_down_time[i, j] = Sum( t.where[(Ord(t) \u0026gt;= Ord(j)) \u0026amp; (Ord(t) \u0026lt;= (Ord(j) + gen_min_down_time[i] - 1))], 1 - gen_active[i, t], ) \u0026gt;= gen_min_down_time[i] * (gen_active[i, j.lag(1)] - gen_active[i, j]) # and for up it correctly starts the check that if its turned on in the first step # it has to stay on for the min up time gen_above_min_up_time[i, j] = Sum( t.where[(Ord(t) \u0026gt;= Ord(j)) \u0026amp; (Ord(t) \u0026lt;= (Ord(j) + gen_min_up_time[i] - 1))], gen_active[i, t], ) \u0026gt;= gen_min_up_time[i] * (gen_active[i, j] - gen_active[i, j.lag(1)]) battery_above_min_delivery[j] = -battery_delivery_rate \u0026lt;= battery_power[j] battery_below_max_delivery[j] = battery_power[j] \u0026lt;= battery_delivery_rate battery_above_min_storage[t] = -battery_storage \u0026lt;= Sum( j.where[Ord(j) \u0026lt;= Ord(t)], battery_power[j] ) battery_below_max_storage[t] = Sum(j.where[Ord(j) \u0026lt;= Ord(t)], battery_power[j]) \u0026lt;= 0 external_power_upper_limit[j] = external_grid_power[j] \u0026lt;= max_input_external_grid obj = ( Sum( j, Sum(i, gen_cost_per_unit[i] * gen_power[i, j] + gen_fixed_cost[i]) + 
cost_external_grid[j] * external_grid_power[j], ) + cost_bat_power * battery_delivery_rate + cost_bat_energy * battery_storage ) # Solve bess = Model( m, name=\u0026#34;bess\u0026#34;, equations=m.getEquations(), problem=\u0026#34;MIP\u0026#34;, sense=Sense.MIN, objective=obj, ) bess.solve( solver=\u0026#34;CPLEX\u0026#34;, output=sys.stdout, options=Options(equation_listing_limit=1, relative_optimality_gap=0), ) if bess.solve_status not in [ SolveStatus.NormalCompletion, SolveStatus.TerminatedBySolver, ] or bess.status not in [ModelStatus.OptimalGlobal, ModelStatus.Integer]: print(\u0026#34;No solution exists for your input data.\\n\u0026#34;) raise Exception(\u0026#34;Infeasible.\u0026#34;) # Extract the output data # Power output power_output_header = Set( m, name=\u0026#34;power_output_header\u0026#34;, records=[\u0026#34;battery\u0026#34;, \u0026#34;external_grid\u0026#34;, \u0026#34;generators\u0026#34;, \u0026#34;load_demand\u0026#34;], ) report_output = Parameter( m, name=\u0026#34;report_output\u0026#34;, domain=[j, power_output_header], description=\u0026#34;Optimal combination of incoming power flows\u0026#34;, is_miro_output=True, ) report_output[j, \u0026#34;generators\u0026#34;] = Sum(i, gen_power.l[i, j]) report_output[j, \u0026#34;battery\u0026#34;] = battery_power.l[j] report_output[j, \u0026#34;external_grid\u0026#34;] = external_grid_power.l[j] report_output[j, \u0026#34;load_demand\u0026#34;] = load_demand[j] # Costs total_cost_gen = Parameter( m, \u0026#34;total_cost_gen\u0026#34;, is_miro_output=True, description=\u0026#34;Total cost of the generators\u0026#34;, ) total_cost_gen[...] = Sum( j, Sum(i, gen_cost_per_unit[i] * gen_power.l[i, j] + gen_fixed_cost[i]) ) total_cost_battery = Parameter( m, \u0026#34;total_cost_battery\u0026#34;, is_miro_output=True, description=\u0026#34;Total cost of the BESS\u0026#34;, ) total_cost_battery[...] = ( cost_bat_power * battery_delivery_rate.l + cost_bat_energy * battery_storage.l ) total_cost_extern = Parameter( m, \u0026#34;total_cost_extern\u0026#34;, is_miro_output=True, description=\u0026#34;Total cost for the imported power\u0026#34;, ) total_cost_extern[...] = Sum( j, cost_external_grid[j] * external_grid_power.l[j], ) total_cost = Parameter( m, \u0026#34;total_cost\u0026#34;, is_miro_output=True, description=\u0026#34;Total cost to fulfill the load demand\u0026#34;, ) total_cost[...] = total_cost_gen + total_cost_battery + total_cost_extern if __name__ == \u0026#34;__main__\u0026#34;: main() Basic Application - Rapid Prototyping Now that we have our first MIRO application, let\u0026rsquo;s explore the types of interaction we get right out of the box.\nInput At first the input parameters are empty. By clicking on Load data, we can load the default values defined by the records option in our GAMSPy code.\nIf our input parameters are correctly set up, we can modify them and then click Solve model to compute solutions for new input values.\nEven before solving, it can sometimes be useful to visualize the data to catch inconsistencies - such as negative load demand (which shouldn\u0026rsquo;t happen) or cost values that don\u0026rsquo;t align with expectations throughout the day. To view this data graphically, we can toggle the chart view in the top-right corner by clicking the icon. Here, we can filter, aggregate, and pivot the data. We can also use different chart types directly through the Pivot Table .\nIn our example, we pivoted the headers and selected line graphs. 
Because the dimensions of load_demand and cost_external_grid differ, it initially looks as though cost_external_grid is zero, even though it isn\u0026rsquo;t. To clarify this, we add a second y-axis with a different scale:\nSwitch the display type to Line Chart. Click the icon to add a new view. In the Second Axis tab, pick which series should use the additional y-axis. (Optional) Add a title and label for the axis. Save the view. Press the icon to enable Presentation Mode. You should end up with something like this:\nOutput When implementing the model, the output is often more interesting than the input, so let\u0026rsquo;s see what we can do here.\nMIRO separates scalar outputs into scalar parameters and scalar variables/equations:\nAs you can see, for scalar variables it contains not only the value of the scalar (level), but also marginal, lower, upper and scale. And since scalar parameters don\u0026rsquo;t have these attributes, they are treated separately.\nFor multi-dimensional output variables, we can again use the Pivot tool. For example, suppose we want to see how much power each generator is supplying at any given time. We can open the output variable containing the power values of the generators, pivot by generator, and filter by the \u0026#39;level\u0026#39; value. Next, we select the Stacked Bar Chart option, which gives us this view:\nWe can see that gen1 is the most expensive generator. It is used a bit at the beginning, then it is turned off after its minimum up time of four hours. And after another four hours it is turned on again, which also fulfills the minimum down time. As you can see, gen0 is the cheapest in both unit and fixed costs, so it is always at full power. All in all, we see that the minimum uptime and downtime constraints are met, and that each active generator stays within its power limits. If any of these constraints were violated, we would know exactly which part of the model to revisit.\nLet\u0026rsquo;s look at another example. Recall that we combined all power values with the given load demand into a single parameter so we could verify that the load demand is indeed met and how each source contributes at each hour. If we choose a Stacked Bar Chart, we cannot easily compare the load demand with the sum of the power sources. Instead, we:\nSelect Stacked Bar Chart. Click the icon to add a new view. In the Combo Chart tab, specify that the load demand should be shown as a Line and excluded from the stacked bars. Save the view. The result should look like this:\nHere, we can immediately confirm that the load demand is always satisfied - except when the BESS is being charged, which is shown by the negative part of the blue bar. This is another good indication that our constraints are working correctly.\nWe can create similar visualizations for battery power or external grid power to ensure their constraints are also satisfied. By now, you should have a better grasp of the powerful pivot tool in MIRO and how to use it to check your model implementation on the fly.\nKey Takeaways Interactive Inputs and Outputs: Marking parameters as is_miro_input or is_miro_output enables dynamic fields for data input and real-time feedback, enhancing flexibility and debugging. Rapid Prototyping: Define output parameters based on variables to summarize important information such as cost. Then visually inspect the output to catch problems early! 
Data Validation and Error Reporting: Ensuring input consistency through log files and custom error messages (via MIRO syntax) helps catch errors early and improves user experience by highlighting inconsistencies directly in the input data sheets. Visual Validation: Pivot tables and charts in MIRO allow you to quickly verify your constraints. Logical Insights: For example, use stacked bar or line graphs to show whether demand is being met, or which generator combination is the cheapest. Now that we have our first MIRO application and a better understanding of our optimization model, in the next part we will look at the Configuration Mode, where you can customize your application without writing any code!\n","excerpt":"In the first part of our GAMS MIRO walkthrough, we will guide you through converting a GAMSPy model into a fully functional web application. We\u0026rsquo;ll also show you how to set up MIRO for rapid prototyping and introduce you to the powerful pivot tool.","ref":"/blog/2025/07/gams-miro-walkthrough-part-1/","title":"GAMS MIRO Walkthrough Part 1"},{"body":"GAMS participated in YAEM 2025 in Ankara, one of Turkey’s major operations research and industrial engineering conferences. Our team was on the ground to present talks, join a panel discussion, and meet with researchers, students, and professionals across academia and industry.\nWe contributed two technical presentations, “GAMS Engine SaaS: A Cloud-Based Solution for Large-Scale Optimization Problems” by Merve Demirci and “Embedding Trained Neural Networks in GAMSPy” by Hamdi Burak Usul. Additionally, we hosted a 45-minute expert panel. Together, these sessions drew strong interest from both academic and industry participants, leading to vibrant dialogue on Python integration, real-world applications, and the next generation of optimization tools.\nOur booth became a lively meeting point, drawing interest from students, professors, and industry representatives.\nVisitors were excited to learn about our free academic licenses, student project opportunities, and how GAMSPy bridges the gap between Python and GAMS. Interest was especially strong around Engine’s potential in university-industry collaborations and MIRO’s ability to bring optimization models to a wider, non-technical audience.\nWe’re excited about the connections made in Ankara and look forward to deepening our presence and collaborations in Turkey.\nEmbedding Trained Neural Networks in GAMSPy By Hamdi Burak Usul\nGAMSPy is a powerful mathematical optimization package that combines Python’s flexibility with GAMS’s modeling performance. GAMSPy enables previously challenging applications in the area of combining machine learning (ML) and mathematical modeling. To support these ML applications, our work introduces essential ML operations into GAMSPy, such as matrix multiplication, transposition, and norm calculations. Building on this foundation, we introduce GAMSPy \u0026ldquo;formulations\u0026rdquo;, a straightforward way to model common neural network constructs like linear (dense) layers, convolutional layers, and activation functions (ReLU, tanh and so on). Where there are several good ways to formulate a construct, we implement multiple variants and let users decide which fits their use case. In addition to neural network constructs, we also introduce formulations for classical ML constructs such as regression trees. 
In this talk, we demonstrate these enhancements by generating adversarial images for the German Traffic Sign Recognition Benchmark (GTSRB) using GAMSPy. We selected GTSRB because it requires a neural network that is significantly larger than typical toy example networks, such as those trained on MNIST.\nGAMS Engine SaaS: A Cloud-Based Solution for Large-Scale Optimization Problems By Merve Demirci\nGAMS Engine SaaS is a cloud-based service that allows users to run GAMS jobs on a scalable and flexible infrastructure, currently provided by Amazon Web Services (AWS). It was launched in early 2022 and has since attracted a variety of customers who benefit from its features, such as horizontal auto-scaling, instance sizing, zero maintenance, and simplified license handling. GAMS Engine SaaS is especially suitable for workloads that require large amounts of compute power and can be adapted to many different scenarios. In this presentation, we show a case study of a large international consulting firm that uses GAMS Engine SaaS to run Monte-Carlo simulations of a large energy system model in response to varying climate change scenarios. We describe how they leverage the GAMS Engine API to submit and monitor their jobs, how they select the appropriate instance type for each job, and how they can use custom non-GAMS code on Engine SaaS. We also discuss the challenges and benefits of using GAMS Engine SaaS for this type of application, and provide some insights into the future development of the service.\n","excerpt":"From 25 to 27 June we attended YAEM 2025 in Ankara, one of Turkey’s major operations research and industrial engineering conferences. Our team was on the ground to present talks and to meet with researchers, students, and professionals across academia and industry.","ref":"/blog/2025/06/gams-at-the-yaem-2025-ankara/","title":"GAMS at the YAEM 2025 Ankara"},{"body":"The EURO 2025 conference brought us to Leeds this year, where the GAMS team once again had the opportunity to connect with the European Operations Research community. From Sunday to Wednesday, our team — Justine Broihan, Andre Schnabel, Muhammet Soytürk, and Frederik Proske — represented GAMS at our booth and through a series of presentations. In addition, our colleagues Stefan Vigerske and Stephen Mayer also travelled to Leeds to give two talks during the conference, contributing valuable insights from their areas.\nOur booth was buzzing — so much so that we ran out of flyers and merchandise by the end of day one.\nConnections, Conversations \u0026amp; Community The Sunday evening social event — conveniently hosted near our booth — was a great chance to mingle in a more relaxed setting. While day two brought fewer people than day one, the conversations were more in-depth and often very promising. As expected, day three saw a quieter crowd, but still valuable interactions.\nIt was surprising to notice how, in contrast to the 2024 edition, this year’s EURO conference featured significantly less participation from the enterprise sector. As a result, the event leaned much more heavily toward academic discussions, giving it a distinctly research-focused atmosphere.\nTalks, Recognition \u0026amp; Global Reach Each GAMS team member gave a talk during the conference, all of which were well received. 
In fact, GAMS even made its way into other presentations — highlighting the growing visibility of our tools within the OR community.\nEURO 2025 was another milestone for GAMS, confirming the growing traction of GAMSPy and our broader ecosystem within the academic and OR communities. A huge thank-you goes to Andre, Muhammet, Frederik, and Justine for their energy, dedication, and teamwork.\nWe’re already looking forward to the next one—see you at EURO 2026! 🌍\nSign up for our general information newsletter to stay up-to-date! Our Abstracts From Chaos to Clarity: Consulting Lessons from Optimizing Alpro’s Energy and Production Scheduling By Justine Broihan\nEvery optimization model in production tells a deeper story—of translating theory into practice through close collaboration and iterative problem-solving. In this talk, we share the consulting journey behind designing and implementing a GAMS-based decision-support system at Alpro—a leader in plant-based food production. Far beyond just a technical deployment, this project was a masterclass in managing complexity, aligning stakeholders, and building trust.\nWe’ll explore the real-world challenges faced by our consulting team: translating on-site planner needs into mathematical logic, managing the messiness of live operational data, and navigating the cultural shift toward automation and dynamic market participation. Through a structured but adaptive process—iterative prototyping and continuous feedback loops—we helped Alpro optimize energy production and conditional bidding on the day-ahead electricity market.\nWhether you\u0026rsquo;re an academic researcher, operations leader, or consultant, this talk gives a candid look at what it takes to bridge the gap between sophisticated models and messy real-world implementation.\nEmbedding neural networks into optimization models with GAMSPy By Andre Schnabel\nGAMSPy is a powerful mathematical optimization package which integrates Python’s flexibility with GAMS’s modeling performance. Python features many widely used packages to specify, train, and use machine learning (ML) models like neural networks. GAMSPy bridges the gap between ML and conventional mathematical modeling by providing helper classes for many commonly used neural network layer formulations and activation functions. These allow a compact description of the network architecture that gets automatically reformulated into model expressions for the GAMSPy model.\nGAMS Engine SaaS: A Cloud-Based Solution for Large-Scale Optimization Problems By Frederik Proske\nGAMS Engine SaaS is a cloud-based service that allows users to run GAMS jobs on a scalable and flexible infrastructure, currently provided by Amazon Web Services (AWS). It was launched in early 2022 and has since attracted a variety of customers who benefit from its features, such as horizontal auto-scaling, instance sizing, zero maintenance, and simplified license handling. GAMS Engine SaaS is especially suitable for workloads that require large amounts of compute power and can be adapted to many different scenarios. In this presentation, we show a case study of a large international consulting firm that uses GAMS Engine SaaS to run Monte-Carlo simulations of a large energy system model in response to varying climate change scenarios. We describe how they leverage the GAMS Engine API to submit and monitor their jobs, how they select the appropriate instance type for each job, and how they can use custom non-GAMS code on Engine SaaS. 
We also discuss the challenges and benefits of using GAMS Engine SaaS for this type of application, and provide some insights into the future development of the service.\nGAMSPy - A Glue Between High Performance Optimization and Convenience By Muhammet Soytürk\nA typical optimization pipeline consists of many tasks such as mathematical modeling, data processing, and data visualization. While GAMS has been providing tools with great performance for mathematical modeling, Python and its giant ecosystem provide packages for data gathering, pre/post-processing of the data, the visualization of the data and developing necessary algorithms by utilizing existing ones. In this talk, we present GAMSPy, a “glue” package that aims to combine these two environments to leverage the best of both worlds.\nA parallelisation framework for solving challenging integrated long-haul and local vehicle routing problems By Stephen Mayer\nThe integrated long-haul and local vehicle routing problem with an adaptive transportation network is a very challenging optimisation problem. The adaptive nature of the transportation network means that the resulting optimisation problem is extremely large and difficult to solve directly using general purpose solvers. As such, the best approach for finding high quality solutions is to use heuristics combined with a branch-and-bound algorithm. Our research has developed a parallelisation framework that concurrently executes heuristic and exact approaches to find high-quality solutions to the integrated long-haul and local vehicle routing problem. Within the parallelisation framework we have attempted to solve the complete problem directly using a MIP solver and by applying Benders’ decomposition. The results will show that the use of parallelisation and applying Benders’ decomposition increases the scale of problems that can be solved and improves the upper and lower bounds that can be achieved.\nThe SCIP Optimization Suite 10 By Stefan Vigerske\nThis year, the SCIP Optimization Suite reaches its first double-digit major version number. Starting with an algebraic modeling language, a simplex solver, and a constraint integer programming framework, containing the world’s best non-commercial mixed-integer programming solver, it has evolved over the last 20+ years into a Swiss Army knife for anything where relaxations are subdivided, trimmed, generated dynamically, and eventually solved, be it on embedded, ordinary, or super-computers. The newest iteration brings major updates for the presolving library PaPILO, the generic decomposition solver GCG, and the branch-cut-and-price framework SCIP itself. In this talk, we will give a short overview of the current SCIP Optimization Suite ecosystem and catch a glimpse of the new features contributed by over 15 developers in the newest major release.\n","excerpt":"This June we attended the EURO 2025 Conference in Leeds, UK to meet with colleagues, discuss new mathematical solutions to business problems, and share our latest advancements of GAMS, GAMSPy and more.","ref":"/blog/2025/06/gams-at-the-euro-2025-in-leeds-uk/","title":"GAMS at the EURO 2025 in Leeds UK"},{"body":"","excerpt":"","ref":"/authors/aschnabel/","title":"André Schnabel"},{"body":"Summary This down-to-the-detail article deals with the long-term effort at GAMS to translate major chunks of its historically grown codebase into modern C++. 
The General Algebraic Modeling System (GAMS), initially developed in Pascal and later Delphi (due to its academic popularity), transitioned to an in-house Pascal-to-C transpiler (p3c) to address performance and portability limitations of Delphi compilers. With the stagnation of p3c maintenance, GAMS undertook a migration to C++17 to leverage improved tooling, a larger pool of programmers, and enhanced performance. This article discusses GAMS\u0026rsquo;s journey from Delphi to C++17.\nThe shift introduces challenges like longer compilation times due to C++\u0026rsquo;s separate compilation model, differences in language features (e.g. nested functions, array indexing, variant parts in records, \u0026ldquo;with\u0026rdquo; statements), and the need of keeping tailored handwritten data structures over the generic one-size-fits-all C++ STL containers from the standard library.\nMajor programs like CMEX (CoMpilation and EXecution system - the program behind the \u0026ldquo;gams\u0026rdquo;-call) and the GDX utilities are being translated. The GDX library and utilities are already fully translated into C++17 and available on GitHub as open source software. The compiler part of the C++17 translation of CMEX is at feature parity with the Delphi version, but the execution system is still a work in progress. The C++ compiler preview can be activated via the CompilerPreview GAMS option for those feeling brave.\nHistorical background The General Algebraic Modeling System was first conceived in the early 1970s by a team working at the World Bank. A first notable presentation of the work in progress system was at the International Symposium on Mathematical Programming in Budapest in 1976 . A later publication in 1982 from Johannes Bisschop and Alexander Meeraus titled \u0026ldquo;On the development of a general algebraic modeling system in a strategic planning environment\u0026rdquo; already shows core concepts of the GAMS language like tabular data definition and the syntax inspired by algebraic notation with sets, parameters, equations, and variables. The first implementations of the GAMS modeling language were written in Fortran at the World Bank. During the late 1970s and early 1980s, the educational language Pascal from Niklaus Wirth reached high levels of popularity in the academic community and slowly began to overtake Fortran in terms of adoption. This explains why the main implementation of GAMS migrated over to Pascal and the vast majority of code for GAMS itself and supporting utility programs were formulated in Pascal or later in its object-oriented successor Delphi which is also known as Object Pascal. Additional information regarding the history of GAMS can be found in this blog post which commemorates the passing of David Kendrick, who was also a significant contributor to GAMS.\nThe p3c transpiler as intermediate step Since the proprietary and free Pascal and Delphi compilers from Borland and its buyer companies during the years (until it eventually ended up at Embarcadero) lacked the performance in the generated machine code and portability to various platforms in comparison with modern C compilers like GCC and later Clang, Søren S. Nielsen from the Technical University of Denmark developed a Pascal to C transpilation utility called p3c (which is independent of the GNU project p2c transpiler from Dave Gillespie ). 
This allowed GAMS to continue maintaining its historically grown Pascal codebase while still benefiting from the advances in C tooling. The transpiler was not meant to produce code for human consumption but instead focused on generating C code that the C compiler can swiftly transform into efficient machine code. Besides the work on the transpilation tool itself and also extending it for the more recently available object-oriented Delphi language features, Søren also wrote a partial re-implementation of the Pascal and Delphi standard library functions. While some calls can be directly translated into C standard library function invocations with a similar function name, some elementary functions like rounding behave fundamentally differently in C and Pascal. Even the compiler and base libraries (such as the math library) can lead to subtle differences, e.g. when computing logarithms: the logarithm functions of the Microsoft and Intel C standard libraries return slightly different results for the argument 0.15.\nThe transpiler is very impressive as it supports a large subset of the Pascal and Delphi language standard even up to Delphi versions from the 2000s. It also has good runtime performance, helpful error messages, and produces C code that can be compiled into efficient machine code while still being somewhat readable.\nUnfortunately, the original author and main contributor of the transpiler, Søren S. Nielsen, unexpectedly passed away at a young age. Besides p3c, Søren was a talented researcher and co-authored the accompanying book for the \u0026ldquo;Practical Financial Optimization Models\u0026rdquo; library which is included with GAMS. His co-authors credit him for the GAMS implementation of many \u0026ldquo;finlib\u0026rdquo; models. His loss deeply affected GAMS, and as a result development of p3c slowed in the years that followed. It received another spark of activity when veteran GAMS developer and current president Steven Dirkse did a major rework of the p3c internals in order to make exceptions thread-safe (which included a switch from C to C++ as the target language). Through a combination of limited developer need, being dependent on robust support for legacy Delphi code (e.g. in the GAMS IDE), and the intent of keeping p3c lean, the introduction of post-Delphi 7 language features like generics, closures, and foreach-loops never materialized.\nA less seamless debugging experience is another drawback of the two-stage build approach that first transpiles the source Delphi into C before a C compiler finally produces runnable machine code. While most problems can be directly debugged using an interactive Delphi debugger like the Lazarus IDE, some more recent features like multithreading and socket communication are not fully implemented in raw Delphi and are only supported via inline C and C++ code when using the transpiler. These features require debugging the intermediate C code in a debugger like gdb or lldb.\nFuture-proofing the codebase via manual translation into modern C++ Why C++17? Delphi has been losing popularity since the early 2000s, which makes it hard to hire experienced Delphi programmers; together with the transpiler receiving less and less attention, this increased the pressure on GAMS to finally migrate its full codebase from Delphi to C++. C++ was chosen as it is a popular systems programming language that offers modern abstractions with a limited performance penalty in comparison to other more modern languages. 
Its compatibility with C also seems very beneficial, as GAMS already has libraries written in C and the tools previously written in Delphi often communicate with dynamic libraries via a C API. The language version C++17 seems sensible, due to it being fully supported by all major compilers (MSVC, GCC, Clang, AppleClang, and Intel\u0026rsquo;s C++ compiler). Compared to raw C, C++ offers additional abstractions that improve maintainability like object orientation, lambdas, templates, and a more extensive standard library.\nIn comparison to Delphi, C++ is also a step forward in terms of available tooling. There is a bigger number of good integrated development environments for C++ (good examples being Visual Studio, VSCode, CLion, QtCreator) plus many static analyzers, compilers, debuggers, and profilers. C++ compilers can target many platforms and operating systems. In addition, there are more third-party libraries available for C++ and the language itself evolved more rapidly than Delphi in recent years.\nPotential alternative choices for the new target language could\u0026rsquo;ve been Rust, Nim, Zig, or Go but they either lack in terms of seamless C interoperability (Rust), raw performance (Go), availability of tooling (Nim, Zig), or likelihood of being still relevant in 20-30 years in the future as they are quite young (see Lindy effect ). Hence using C++ looks like a safe bet despite its shortcomings due to backwards compatibility to itself (C++98) and its C roots.\nCompilation speed regression Speaking of C++ suffering from backwards compatibility: One major \u0026ldquo;downgrade\u0026rdquo; so to speak when translating software from a Pascal derived language to a C derived language is the time required for a clean rebuild. Unlike Delphi, where compilation units include both interface (e.g. function signatures) and implementation (e.g. actual function code) together, the C++ programming language shares the historically emerged separate compilation model from C. The separation of interface and implementation into different files with the preprocessor-based inclusion directives leads to significantly longer build times in comparison to the Pascal/Delphi compilation model with self-contained units and smarter \u0026ldquo;uses\u0026rdquo; dependency tracking. C and C++ compilers often must process header files many times when building a program, as the preprocessor operations like macros and defines can affect their precise contents based on the order of inclusion. The long build times in C and C++ can be slightly improved by using forward declarations where possible, reducing unnecessary inclusions (e.g. with the help of tools like iwyu ), and using caching mechanisms like precompiled headers and ccache (or sccache ). The first two require extra manual effort whereas the caching is limited in its effectiveness by cache misses due to modified compiler arguments and code in header files that depend on the setting of preprocessor defines that can differ in different parts of a project. 
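To make the forward-declaration remedy above concrete, here is a minimal header sketch; the file and class names are hypothetical and not taken from the GAMS codebase. Because the class below only stores a pointer to the forward-declared type, the expensive include can be deferred to the single .cpp file that actually needs the full definition:

// widget.h - illustrative sketch only
#pragma once
#include <memory>

class Renderer;  // forward declaration instead of #include "renderer.h"

class Widget {
public:
    explicit Widget(std::shared_ptr<Renderer> renderer);
    void draw() const;

private:
    // A pointer member only needs the type's name, not its full definition,
    // so files that include widget.h never have to parse renderer.h.
    std::shared_ptr<Renderer> renderer_;
};

Only the corresponding widget.cpp would then include renderer.h, so a change to that header no longer forces a rebuild of everything that merely uses Widget.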
While C++20 introduced modules as a solution and GAMS internal first experiments were already promising in 2021, currently in 2025 the compilers used for building the GAMS distributions (MSVC, Clang, GCC, Intel C++) do not offer a portable and full implementation of the C++20 modules standard just yet, hence GAMS is forced to still use the somewhat outdated C++17 standard with separate header and compilation units.\nDelphi and C++ feature sets differ While the main programming paradigms followed in both C and Pascal and descendants are matching, the designs of both languages follow different philosophies in the details. Additionally, during their evolution, Pascal and its successor Delphi received features that do not have a direct counterpart in C or C++. For example, unlike C, Pascal allows array indices to be more flexible with custom bounds, non-contiguous indices, and it even defaults to 1-based indexing (where C and the many influenced languages start counting at 0 inspired by address offsets).\nPascal also offers subrange and set types that are not available in C, but this can be imitated (mostly) with templated utility class Bounded\u0026lt;T,lb,ub\u0026gt;. Similarly, C++ doesn\u0026rsquo;t have the object properties from Delphi, but those can be simply replaced with getters and setters methods. The switch from Delphi to C++ allowed using smart pointers for secure memory management (instead of manual Type.Create and Type.Destroy calls widespread in Delphi object-oriented code). Instead of nested functions and procedures, C++17 has more flexible lambda expressions that can mimic nested functions but are also more powerful.\nFixed-size strings (character buffers) like the ShortString often used in Delphi have performance advantages over more flexible variable length strings like std::string in C++ and can be easily replicated in C++ via a custom class built on top of std::array\u0026lt;char, 256\u0026gt; with convenient methods for conversion into std::string and C-style pointers to null terminated character buffers.\nAnother language feature that requires heightened attention when translating is variant parts in records in Delphi. In C++ they can be approximated by having a struct with union fields plus a variable storing which alternative part of the union is active. Even with this approximation, the syntax of accessing fields of the struct varies slightly from the corresponding Delphi code.\nAnother miniscule but very important difference between C/C++ and Pascal/Delphi is the default zero initialization of variables and fields. In the majority of cases, they behave exactly identical, but for example when constructing an object of a class or struct on heap, its fields with built-in types (like e.g. int) are not zero initialized automatically in C++ but in Delphi they are.\nTailored data structures are more efficient than generic ones A major performance observation after migrating to C++17 was the inadequacy of C++ STL containers as replacement for handwritten dynamic array and hashmap data structures from the GAMS codebase. Using a RB-tree based std::map and std::vector as naive counterparts significantly slowed down execution. This can be partly explained due to C++ standard library containers being more general and flexible than GAMS\u0026rsquo;s handwritten data structures that for example only allow growing (element insertion) but not shrinking (deletion). 
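To give an idea of what such a tailored, grow-only container looks like in spirit, here is a small illustrative sketch; it is not the actual GAMS implementation, just a minimal example of the design constraint described above. The container supports only appending and indexed access, with none of the erase, mid-insertion, or allocator machinery that a general-purpose std::vector or std::map has to carry:

// Illustrative grow-only dynamic array: append and indexed access only.
// Assumes T is default-constructible and copy-assignable, as plain record types typically are.
#include <cstddef>
#include <memory>

template <typename T>
class GrowOnlyArray {
public:
    void push_back(const T &value) {
        if (size_ == capacity_) grow();   // grow geometrically, never shrink
        data_[size_++] = value;
    }
    T &operator[](std::size_t i) { return data_[i]; }
    const T &operator[](std::size_t i) const { return data_[i]; }
    std::size_t size() const { return size_; }

private:
    void grow() {
        const std::size_t newCapacity = capacity_ ? capacity_ * 2 : 16;
        std::unique_ptr<T[]> newData(new T[newCapacity]);
        for (std::size_t i = 0; i < size_; ++i) newData[i] = data_[i];
        data_ = std::move(newData);
        capacity_ = newCapacity;
    }

    std::unique_ptr<T[]> data_;
    std::size_t size_ = 0;
    std::size_t capacity_ = 0;
};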
Doing a deep translation of these custom collection classes closed the initial performance gap caused by adoption of std::* containers as simpler replacements.\n\u0026ldquo;with\u0026rdquo;-statement: resolution can be tricky Amongst the Delphi features not found in C++, the \u0026ldquo;with\u0026rdquo;-statement is a frequent source of headaches in the translation process. Essentially, it is meant to reduce repetition when having many instructions affecting one object, e.g. method calls or reading and writing object fields and properties. So, for example \u0026ldquo;o.x(); o.a := 2;\u0026rdquo; can be shortened to \u0026ldquo;with o begin x; a := 2; end\u0026rdquo;. In short code snippets this seems like a wise way to reduce redundancy and typing effort. But when this feature is used excessively and in a nested way in huge functions, it can get difficult to determine which object a certain line of code refers to. While the language specification clearly defines that the last object with a \u0026ldquo;with binding\u0026rdquo; takes precedence, it can still involve lots of scrolling in the codebase to identify the relevant object which has the field or method that is being used somewhere later in the code. Hence a more pragmatic way to resolve a with statement is to look at it in the debugger of the Lazarus IDE (a graphical IDE for FreePascal), which displays the referenced object as a tooltip over the line. Yet, the \u0026ldquo;with\u0026rdquo;-statement is widely known to be problematic and is nowadays only used with care by modern Delphi programmers.\nFine grained memory management Another fine detail where Delphi and C++ differ is in memory alignment, packing, and sizing minutiae. One example is the default width of variables with enum type, which in turn can affect the memory layout of structs and introduce unintended padding that causes increased memory usage. Just naively declaring an enum causes variable instances of it to have the full width of an int (4 bytes on x64). The workaround is to explicitly declare it as \u0026ldquo;enum name : uint8_t { … }\u0026rdquo; to force it to be stored in a single byte, which is possible for enumerations with values in the range 0..255.\nWhile structs/records and class objects must be stored on the heap in Delphi, they can sometimes be placed on the stack in C++. In cases where that is not feasible, e.g. due to late initialization, smart pointers like std::unique_ptr\u0026lt;T\u0026gt; can be used to at least make their destructor calls and freeing automatic. So, C++ allows more data to be stored on the \u0026ldquo;faster\u0026rdquo; stack instead of being forced to use the heap like in Delphi.\nCurrent status and outlook Major GAMS programs that were translated or are in the process of translation are the Compiler and Execution System (CMEX) and the library and utilities for GAMS Data eXchange files (GDX). The utilities are most notably gdxdump (print GDX contents), gdxmerge (merge two GDX files), and gdxdiff (compare two GDX files). While the GAMS compiler in C++17 (internally called CPPMEX) is pretty much at feature (and performance) parity with its original Delphi implementation, the execution system is still a work in progress and currently only handles about two dozen simple GAMS Model Library models. But work progresses swiftly, with the goal of having the full GAMS CMEX translated into C++ soon. 
If you want to experiment with the C++ translation of the compiler, you can manually opt into activating it by using the CompilerPreview GAMS option.\n","excerpt":"This slightly nerdy article explores GAMS’s transition from Delphi to modern C++17, detailing the shift from an in-house Pascal-to-C transpiler (p3c) to C++ for better performance, portability, tooling, and access to a broader developer community.","ref":"/blog/2025/06/gams-under-the-hood-the-delphi-to-c-transition/","title":"GAMS under the Hood - The Delphi to C++ transition"},{"body":"","excerpt":"","ref":"/authors/mhorstmann/","title":"Michael Horstmann"},{"body":"","excerpt":"","ref":"/categories/under-the-hood/","title":"Under the Hood"},{"body":"","excerpt":"To support their carbon neutrality ambitions, the global energy company TotalEnergies developed a complex optimization model for Carbon Capture and Storage (CCS). This highly sophisticated model, capable of managing the complex CO2 logistics, struggled with slow runtimes and complex code that limited its practical use. GAMS Consulting stepped in to streamline the model, reducing simulation runtimes from hours to minutes and enabling long-term scenario analysis. Also, we developed a user-friendly Python and Excel interface that allowed engineers to run and analyze the model without the need for deep programming expertise. This collaboration has transformed the model into a robust decision-support tool, helping TotalEnergies scale up CCS projects for a net-zero future.","ref":"/consulting/totalenergies/","title":"Accelerating Carbon Storage Optimization"},{"body":" *TotalEnergies is a registered trademark of Group TotalEnergies and it is reproduced with its owner's authorization. Area: Energy\nProblem class: MINLP Technologies: GAMS\nAccelerating Carbon Storage Optimization with TotalEnergies Introduction TotalEnergies is a global integrated energy company that produces and markets energies: oil and biofuels, natural gas, biogas and low-carbon hydrogen, renewables and electricity. Its more than 100,000 employees are committed to providing as many people as possible with energy that is more reliable, more affordable and more sustainable. Active in about 120 countries, TotalEnergies places sustainability at the heart of its strategy, its projects and its operations.\nAs part of its ambition of carbon neutrality by 2050, together with society, TotalEnergies develops Carbon Capture Storage (CCS) solutions – capturing CO2 emissions from industrial sites, transporting them, and permanently storing them beneath the seabed.\nTo make this work technically and economically feasible, TotalEnergies developed a sophisticated GAMS optimization model to manage the complex operations of CO2 logistics: buffering in tanks, transport, and injection into subsurface reservoirs. This large-scale MINLP model was built upon the work of, and in collaboration with, Professor Grossmann and several students from Carnegie Mellon University. However, taking the academic model and turning it into a practical industrial model presented significant challenges, particularly due to its long-term planning horizon and detailed daily operational decisions.\nGAMS Consulting addressed the model’s bottlenecks and made several breakthroughs. Through targeted improvements, GAMS helped TotalEnergies achieve a performance gain that took the model from running a field development strategy covering 1-2 years in about an hour to covering 20 years in a few minutes. 
Additionally, we developed a user-friendly Python interface that technical staff without expert domain knowledge can easily use and interact with.\nThe Problem The CCS optimization model developed by TotalEnergies was designed to operate through an important part of the carbon value chain –from CO2 arrival at port terminals to final injection in off-shore reservoirs. However, what made the model so powerful also made it impractical to use. Capturing the operational nuances of well pressure behavior, injection dynamics, and tank buffering across multiple time scales meant incorporating highly non-linear constraints and a large number of binary decisions. The result was a model with tens of thousands of variables and constraints that grew exponentially with each additional month of planning.\nThis complexity was exacerbated by the need to make decisions at daily intervals, while planning over a horizon of years or decades. Basically, the long-term vision collided with short-term computational feasibility: the model took too long to solve, especially when simulating realistic scenarios.\nThese delays made carrying out studies time-consuming and cumbersome. It was simply inefficient to run tests that took over an hour each. Additionally, the highly technical structure of the model made it difficult for non-specialists to interact with, which limited the accessibility and practical value for the engineers who needed it most.\nFigure 1. Superstructure of the CCS problem (1) GAMS Consulting’s Solution Our team approached this problem with a strategy that blended technical expertise with a deep domain understanding and developed a solution in multiple steps. Clearing the Path: Tackling the Model’s Deepest Bottlenecks As a first step, we conducted a thorough audit of the existing model to identify computational and structural bottlenecks. Then, the team refactored the model’s code using modern GAMS features to improve maintainability and streamline logic; constraints and time structures were reformulated by leveraging problem-specific knowledge, reducing the overall model’s complexity; and decomposition algorithms used to solve the model were improved, resulting in a significantly more efficient optimization process.\nThese efforts culminated in a dramatic performance boost. For a 25-year horizon model, the runtime went from hours to a few minutes. This improvement in runtime was most noticeable during the solve phase, which made it feasible to run detailed, realistic scenarios in minutes. Plus, this meant that the model could be improved even further in complexity –for example, allowing for uncertainties, which just wasn’t a possibility before. Bridging the Gap: From Complex Code to Practical Application Recognizing that the model would be used primarily by reservoir engineers –professionals more focused on operational insight than programming– our team redesigned the user interaction experience. We developed a Python-based interface that allows users to configure model runs, manage input and output data, and view results in a structured and intuitive way. Then, in order to bridge the gap between advanced optimization and user-friendly operation, this interface was paired with an Excel-compatible frontend. 
This allowed engineers to interact with the model using familiar tools, removing the need to understand GAMS syntax or Python code, and thus enabling a more seamless workflow between the engineers and the optimization platform.\nTo support long-term sustainability and encourage adoption, we also prioritized training and knowledge transfer. The consulting team delivered detailed documentation covering the model’s architecture, usage of advanced GAMS features, and run configuration. We also produced a simplified user manual with examples, operational guidelines, and troubleshooting tips, ensuring that TotalEnergies’ employees could independently maintain and adapt the model for future needs.\nConclusion This partnership showcases the real-world impact of combining advanced optimization with expert consulting. TotalEnergies was faced with a highly complex CCS model that was difficult to maintain and underperformed in terms of computational time, and through targeted model refactoring and algorithmic improvements, the GAMS team reduced runtimes significantly –enabling realistic scenario planning and faster decision-making. This speedup also allows TotalEnergies to add even more complexity to their model, relaxing the existing assumptions and getting the model as close to reality as possible.\nEqually important was the redesign of the user interface, which made the model accessible to engineers without deep programming expertise. By integrating a Python backend with an Excel-compatible frontend, GAMS ensured that the tool could be used directly by those closest to the operational decisions.\nThe result was the transformation of a technically powerful but cumbersome model into a streamlined, strategic tool. Our team not only accelerated the computations, but also gave the engineers at TotalEnergies a solution fast enough to use day-to-day, changing how they approach optimization to support their CCS operations.\nToday, the model works as a robust, user-friendly decision-support system that plays a key role in advancing TotalEnergies\u0026rsquo; net-zero strategy. This collaboration is a clear example of how the right expertise can turn complexity into capability, and vision into action.\nReferences 1) Mixed-Integer Nonlinear Programming Model for Optimal Field Management for Carbon Capture and Storage Ambrish Abhijnan, Kathan Desai, Jiaqi Wang, Alejandro Rodríguez-Martínez, Nouha Dkhili, Raymond Jellema, and Ignacio E. Grossmann\nIndustrial \u0026amp; Engineering Chemistry Research 2024 63 (27), 12053-12063\nDOI: 10.1021/acs.iecr.4c00390 ","excerpt":"To support their carbon neutrality ambitions, the global energy company TotalEnergies developed a complex optimization model for Carbon Capture and Storage (CCS). This highly sophisticated model, capable of managing the complex CO2 logistics, struggled with slow runtimes and complex code that limited its practical use. GAMS Consulting stepped in to streamline the model, reducing simulation runtimes from hours to minutes and enabling long-term scenario analysis. Also, we developed a user-friendly Python and Excel interface that allowed engineers to run and analyze the model without the need for deep programming expertise. 
This collaboration has transformed the model into a robust decision-support tool, helping TotalEnergies scale up CCS projects for a net-zero future.","ref":"/stories/totalenergies/","title":"Accelerating Carbon Storage Optimization with TotalEnergies"},{"body":"","excerpt":"","ref":"/campaign/","title":"Campaigns"},{"body":"CyBio Scheduler CyBio, merged into Analytik Jena AG in 2009, and the Max Planck Institute Magdeburg developed optimization methods involving GAMS to increase the throughput of robotic screening systems.\nCase Study – CyBio Scheduler Optimizing Carbon Capture Technologies Case Study – Optimizing Carbon Capture Technologies 50 Hertz Case Study - Optimizing Power Trading Auctions United States Military Academy Case Study - Scheduling at the United States Military Academy ","excerpt":"\u003ch2 id=\"cybio-scheduler\"\u003eCyBio Scheduler\u003c/h2\u003e\n\u003cp\u003eCyBio, merged into Analytik Jena AG in 2009, and the Max Planck Institute Magdeburg developed optimization methods involving GAMS to increase the throughput of robotic screening systems.\u003cbr\u003e\n\u003ca href=\"/stories/analytikjena/\"\u003eCase Study – CyBio Scheduler\u003c/a\u003e\n\u003c/p\u003e\n\u003ch2 id=\"optimizing-carbon-capture-technologies\"\u003eOptimizing Carbon Capture Technologies\u003c/h2\u003e\n\u003cp\u003e\u003ca href=\"/stories/doe/\"\u003eCase Study – Optimizing Carbon Capture Technologies\u003c/a\u003e\n\u003c/p\u003e\n\u003ch2 id=\"50-hertz\"\u003e50 Hertz\u003c/h2\u003e\n\u003cp\u003e\u003ca href=\"/stories/50hertz/\"\u003eCase Study - Optimizing Power Trading Auctions\u003c/a\u003e\n\u003c/p\u003e","ref":"/stories/","title":"STORIES"},{"body":" GAMS at the YAEM 2025 GAMS is at the 44th Operations Research and Industrial Engineering Congress in Ankara, Turkey — June 25–27, 2025. Discover our latest innovations, join exclusive sessions, and explore how we optimize real-world decisions. See our program About YAEM 2025 The main theme of this year’s congress has been determined as “YA/EM in the Age of Artificial Intelligence”. Our congress aims to examine the transformative impact of Artificial Intelligence, one of the most striking technological developments of our age. The revolutionary innovations offered by Artificial Intelligence technologies, ranging from the improvement of industrial processes to logistics and supply chain management and from production/service planning to resource allocation, will be addressed with multidisciplinary approaches.\nGAMS at the Conference Find us at our booth and during several of our presentations throughout the conference. Talk to our experts, get hands-on demos, and learn how we help companies and institutions solve their toughest problems. We are there for you. Where to find us: 📍 Exhibition Area – Booth 3 📅 June 25–27, 2025 👨‍💼 Visit our booth and meet our experts. Our experts on site: Burak Usul\nMerve Demirci\nTalks \u0026 Presentations Join our interactive sessions and discover real-world applications of mathematical optimization.\nEmbedding Trained Neural Networks in GAMSPy Presented by: Burak Usul\nGAMSPy is a powerful mathematical optimization package that combines Python’s flexibility with GAMS’s modeling performance. GAMSPy enables previously challenging applications in the area of combining machine learning (ML) and mathematical modeling. To support these ML applications, our work introduces essential ML operations into GAMSPy, such as matrix multiplication, transposition, and norm calculations. 
Building on this foundation, we introduce GAMSPy \"formulations\", a straightforward way to model common neural network constructs like linear (dense) layers, convolutional layers, and activation functions (ReLU, tanh, and so on). When there are several good ways to formulate a construct, we implement more than one and let the user decide which fits their use case. In addition to neural network constructs, we also introduce formulations for classical ML constructs such as regression trees. In this talk, we demonstrate these enhancements by generating adversarial images for the German Traffic Sign Recognition Benchmark (GTSRB) using GAMSPy. We selected GTSRB because it requires a neural network that is significantly larger than typical toy-example networks, such as the ones trained for MNIST. Recognizing the challenges of embedding large-scale neural networks, we also propose a method for incorporating large models via black-box embeddings. We highlight our new black-box formulation and its complementary formulations, emphasizing their potential in ML research and development.\nGAMS Engine SaaS: A Cloud-Based Solution for Large-Scale Optimization Problems Presented by: Merve Demirci\nGAMS Engine SaaS is a cloud-based service that allows users to run GAMS jobs on a scalable and flexible infrastructure, currently provided by Amazon Web Services (AWS). It was launched in early 2022 and has since attracted a variety of customers who benefit from its features, such as horizontal auto-scaling, instance sizing, zero maintenance, and simplified license handling. GAMS Engine SaaS is especially suitable for workloads that require large amounts of compute power and can be adapted to many different scenarios. In this presentation, we show a case study of a large international consulting agency that uses GAMS Engine SaaS to run Monte-Carlo simulations of a large energy system model in response to varying climate change scenarios. We describe how they leverage the GAMS Engine API to submit and monitor their jobs, how they select the appropriate instance type for each job, and how they can use custom non-GAMS code on Engine SaaS. We also discuss the challenges and benefits of using GAMS Engine SaaS for this type of application, and provide some insights into the future development of the service.\nPrevious Next Getting Started with GAMSPy Read the Installation and Licensing instructions in our documentation. Check out the Quickstart Guide to learn basic concepts. The GAMSPy open-source GitHub repository can be accessed here. Please use the GAMSPy section of our forum for questions and support. Leave Some Feedback and Receive a Free GAMS License We'd love to hear your thoughts about YAEM 2025! 
Fill out this short form and receive an unrestricted GAMS license for two months.\nLoading… ","excerpt":"\u003cstyle\u003e\n .jumbotron-campaign {\n position: relative;\n background-image: url('ankara.jpg'); /* Exchange picture for campaign */\n background-size: cover;\n background-position: center;\n height: 400px;\n z-index: 1;\n overflow: hidden;\n }\n \n .jumbotron-campaign .overlay {\n content: \"\";\n background: rgba(0, 0, 0, 0.4); /* Dunkles Overlay */\n position: absolute;\n top: 0;\n left: 0;\n height: 100%;\n width: 100%;\n z-index: 1;\n }\n \n .jumbotron-campaign .container {\n position: relative;\n z-index: 2;\n }\n \n .carousel .card {\n min-height: 450px; /* oder 500px, je nach Inhalt */\n }\n .carousel-item {\n display: flex;\n align-items: center;\n justify-content: center;\n padding-left: 100px;\n padding-right: 100px;\n }\n\n .carousel-control-prev,\n .carousel-control-next {\n background: transparent !important;\n border: none;\n }\n\n .carousel-control-prev-icon,\n .carousel-control-next-icon {\n filter: brightness(0) saturate(100%) invert(47%) sepia(95%) saturate(480%) hue-rotate(359deg) brightness(98%) contrast(101%);\n /* Das ergibt ein GAMS-oranges Icon */\n width: 30px;\n height: 30px;\n }\n \n\u003c/style\u003e\n\n\n\n\u003c!-- Website Content and Information --\u003e\n\n\u003csection\u003e\n \u003cdiv class=\"full-width\"\u003e\n \u003cdiv class=\"jumbotron jumbotron-fluid jumbotron-campaign position-relative\"\u003e\n \u003cdiv class=\"overlay position-absolute w-100 h-100\"\u003e\u003c/div\u003e\n \n \u003cdiv class=\"container position-relative text-white\"\u003e\n \u003ch1 class=\"display-1 mt-3 text-white\"\u003eGAMS at the YAEM 2025\u003c/h1\u003e\n \u003cp class=\"lead\"\u003e\n GAMS is at the 44th Operations Research and Industrial Engineering Congress in Ankara, Turkey — June 25–27, 2025. \n \u003cbr\u003e\n Discover our latest innovations, join exclusive sessions, and explore how we optimize real-world decisions. \n \u003cbr\u003e \n \u003c/p\u003e","ref":"/campaign/yaem_2025/","title":"YAEM 2025 in Ankara"},{"body":"","excerpt":"Alpro, a leading producer of plant-based food and beverages, faced the challenge of optimizing the complex energy management system at one of its production plants. Our consulting team at GAMS developed two custom models, along with a web-based graphical user interface (GUI), to streamline their daily operations—from data gathering to energy trading and consumption. As a result, process times were reduced by 75%, and costs were cut by 25%, helping Alpro achieve both its financial and environmental goals.","ref":"/consulting/apg/","title":"Optimizing Austria's Energy Flow"},{"body":" Area: Energy\nProblem class: LP\nTechnologies: GAMS\nAPG: Optimizing Austria’s Energy Flow Introduction Austrian Power Grid (APG) has the statutory and therefore social mandate to provide Austria with a secure supply of electricity and is thus responsible for the country\u0026rsquo;s security of supply. With its highly qualified staff, APG has been ensuring a secure electricity supply across the country, from Lake Neusiedl to Lake Constance, for decades. 
The stable electricity supply in Austria is based on a strong, high-capacity transmission grid, a mix of power plants comprising many wind, hydro, and pumped storage plants, additional reserve power plants, and APG’s intensive, ongoing coordination with national grid operators and European transmission system operators.\nAs the national Transmission System Operator, APG is dedicated to ensuring a reliable and sustainable energy supply by strategically planning and developing its grid assets. This involves coordinating both the refurbishment of existing infrastructure and the construction of new assets in close collaboration with national initiatives and international processes such as the Ten-Year Network Development Plan (TYNDP). Through this coordinated approach, APG aims to integrate advanced technologies and innovative grid solutions, promote energy market harmonization, and support the broader objectives of the European energy transition while maintaining system resilience and operational excellence.\nTo achieve these objectives, APG has developed an advanced GAMS model that optimizes energy flow—including distribution, storage, and trade—over the span of an entire year. Based on extensive data sets, the model is structured as a large-scale linear program that is decomposed into smaller, weekly sub-models using a rolling-horizon approach. This dynamic methodology enables APG to manage evolving scenarios by breaking the problem into smaller, overlapping intervals and solving them iteratively.\nDespite these advancements, challenges remained. Model generation and solver times exceeded desired limits, and some instances resulted in infeasibility, preventing the establishment of a consistent annual solution. Consequently, APG’s System Development Unit faced time-consuming efforts to identify and resolve these inconsistencies, leading to missed opportunities for energy optimization.\nEnhancing the model with GAMS Consulting Our team of experts collaborated with APG to conduct a comprehensive review and refactoring of their market optimization pipeline. Focusing on improving model generation time, solver efficiency, and stability, the results were truly impressive. Following our consulting project, APG achieved significant advances in every metric, and a substantial impact in both the efficiency and reliability of their model.\nPrior to partnering with GAMS, several model instances were infeasible, which limited APG’s ability to solve all scenarios to optimality. A thorough analysis revealed that numerical issues—specifically, inconsistencies in the rounding of input data—prevented the solver from finding optimal solutions within the default tolerances. Once these issues were identified, our GAMS experts achieved complete stability across all model instances, ensuring every scenario can now be solved optimally.\nOur team also realized a significant reduction in total model-solving time; some instances are now solvable in just 225 seconds, compared to over 900 seconds previously. These improvements stemmed from two key areas: model generation and solver time. Model generation times were reduced by up to 30%, while solver times decreased by as much as 70%.\nFigure 1. Comparison of each model instance’s total time, before and after GAMS’ consulting work. Figure 2. Comparison of mean total time, model generation time, and solver time, before and after GAMS’ consulting work. 
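To illustrate the rolling-horizon mechanics described above, here is a deliberately simplified GAMSPy sketch (purely illustrative: APG's actual model is a far larger LP with detailed grid, storage, and trading constraints, and its windows overlap, while the ones below do not):

```python
# Toy rolling-horizon loop: solve one weekly window at a time and pass the
# final storage level on as the starting state of the next window.
# All data and bounds are made up; this is not APG's model.
import numpy as np
from gamspy import Container, Set, Parameter, Variable, Equation, Model, Sum, Sense

hours = [str(h) for h in range(168)]  # one week at hourly resolution
rng = np.random.default_rng(42)

m = Container()
t = Set(m, name="t", records=hours)
demand = Parameter(m, name="demand", domain=t)
start_level = Parameter(m, name="start_level", records=50.0)  # state carried between windows

gen = Variable(m, name="gen", domain=t, type="positive")    # generation per hour
end_level = Variable(m, name="end_level", type="positive")  # storage level at end of window

balance = Equation(m, name="balance")
balance[...] = start_level + Sum(t, gen[t]) == Sum(t, demand[t]) + end_level

dispatch = Model(m, name="dispatch", equations=[balance],
                 problem="LP", sense=Sense.MIN, objective=Sum(t, gen[t]))

for week in range(4):
    demand.setRecords([(h, rng.uniform(40, 80)) for h in hours])
    dispatch.solve()
    carried = float(end_level.records["level"].iloc[0])
    start_level.setRecords(carried)
    print(f"week {week}: generation {dispatch.objective_value:.1f}, carry-over {carried:.1f}")
```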
Lastly, the refactored model demonstrated considerably more efficient memory usage, with a significantly reduced high-water mark, i.e. peak memory requirement. Memory consumption was reduced by approximately 80%, from over 42 GB to under 9 GB. This enhancement not only boosted overall performance but also enabled the system to handle large-scale problems more reliably, particularly on shared computer resources.\nConclusion Our collaboration with APG is a data-driven success story that underscores the powerful impact of GAMS\u0026rsquo; targeted consulting services on transforming complex, large-scale optimization models. The partnership has led to substantial improvements in system performance and resilience, clearly demonstrating the considerable benefits of applying expert optimization strategies to modern energy challenges.\n“Bringing in GAMS’ team was the right decision”, said Valentin Wiedner, Teamlead of System Modelling at APG. “With their expert knowledge, they helped us bring our state-of-the-art model up to the best industry standards. The before and after numbers showed an impressive difference. Thanks to their work, we now have a smooth pipeline, and a reliable and fast-performing model. Their proactive approach and commitment significantly contributed to the timely and high-quality completion of our project. We are highly pleased with the results and greatly appreciate the professional and collaborative partnership.”\nThroughout the project, GAMS provided deep technical insight and industry-leading expertise to address key performance barriers. By meticulously refining model generation processes and optimizing solver efficiency, GAMS ensured that the optimization pipeline not only met but exceeded top industry standards. This technical enhancement has resulted in a streamlined pipeline and a system that consistently delivers rapid, optimal solutions. The upgraded model now solves every instance within shorter time frames and without the computational inefficiencies that once hampered performance.\nFurthermore, this initiative has enhanced APG’s operational readiness, equipping them with a robust foundation to execute their regular system development processes and strategic planning with absolute confidence. The collaboration has not only improved immediate performance metrics but has also contributed to building a more resilient and future-proof system, positioning APG at the forefront of energy optimization and innovation.\n","excerpt":"The Austrian Power Grid (APG) partnered with our consulting team to optimize its electricity transmission model. As a result, model-solving time was slashed by over 60%, memory usage dropped by 80%, and various infeasibilities were resolved—ensuring APG’s model is now more stable, efficient, and fully capable of executing their energy strategy with enhanced reliability and optimal performance.","ref":"/stories/apg/","title":"APG: Optimizing Austria’s Energy Flow"},{"body":"","excerpt":"","ref":"/authors/mgallia/","title":"Mateo Gallia"},{"body":" Introduction Imagine trying to make the most of a single day at Disneyland Magic Kingdom. With 30+ attractions, long lines, walk times between rides, and the unpredictable closure of rides, it’s a classic example of a high-stakes optimization problem. This is exactly the challenge our 2024/2025 GAMSPy Student Competition winner, Yiheng Su, tackled. 
Combining real-world data, mathematical modeling, and the power of Python optimization with GAMSPy, he delivered a solution that perfectly balances fun with efficiency.\nThis year we launched our first-ever GAMSPy student competition, a contest where undergraduate and graduate students from all over the world put their modeling skills to work and test themselves against each other. Yiheng Su, a first-year PhD student in Computer Science from the University of Wisconsin-Madison earned the top prize.\nYiheng Su, winner of the 2024/2025 GAMSPy Student Competition Yiheng first found out about GAMSPy in an optimization class taught by Professor Michael Ferris. “We were required to learn and use it,” he says, “and while programming, I found that GAMSPy was very easy to use, especially for setting up variables, constraints, objectives, and using different solvers.” This intuitive first experience quickly gave way to inspiration. Shortly after, Su was planning a trip to Disneyland Magic Kingdom with his girlfriend, an idea that soon turned into a full-fledged optimization challenge. Making the best out of GAMSPy capabilities, he built a smart scheduling system that determines the most rewarding route through the park. Paired with smart data processing and useful visualizations, the project is the perfect example of what you can do by tackling the full pipeline of an optimization problem with GAMSPy.\nThe Model, In a Nutshell To build a tool capable of planning the “perfect day” at Disneyland, Yiheng started by collecting real-world data on attractions, walking distances, ride durations, and average wait times. Using Python libraries like pandas, numpy, and math, he processed and cleaned the data, transforming scattered online information into structured datasets that could be fed into a model. The preprocessing also involved mapping coordinates and calculating shortest walking paths between attractions in order to capture the real physical layout of the park.\nThen, at the core of his optimization model, Su used a mixed-integer programming formulation implemented in GAMSPy. The objective was to maximize the total ride value (a proxy for visitor satisfaction), while subjecting it to realistic constraints like park hours, total walking and waiting time, and ride-specific limitations. Binary decision variables determined whether or not a ride was visited, and sequencing constraints ensured feasible transitions from one attraction to another. Plus, Su also factored in potential ride closures, adding robustness to the final itinerary.\nRoute visualization of the optimized Disneyland itinerary. To make the outputs digestible and engaging, Yiheng leaned on matplotlib, folium and googlemaps to generate visual \u0026ldquo;summaries\u0026rdquo;. These include detailed schedules breaking down time spent walking, waiting, or enjoying attractions, and route maps through the park clearly visualized in Google Maps. The result is a planning tool that doesn\u0026rsquo;t just compute an optimal solution, it communicates it clearly and interactively.\nView the full project on GitHub Conclusion Yiheng Su’s project is more than just a clever use of GAMSPy, it’s a brilliant example of how mathematical modeling can transform even everyday experiences. 
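For readers curious what such a formulation looks like in code, here is a deliberately stripped-down GAMSPy sketch: a pure value-versus-time selection model with invented data, leaving out the sequencing, walking-distance, and closure logic of Su's actual model.

```python
# Stripped-down sketch of the selection core of such a planner: choose rides to
# maximize total "ride value" within a daily time budget. All data is invented.
from gamspy import Container, Set, Parameter, Variable, Equation, Model, Sum, Sense

rides = ["space_mountain", "pirates", "haunted_mansion", "small_world", "jungle_cruise"]
value = [9, 8, 7, 4, 6]         # enjoyment score per ride
minutes = [75, 50, 60, 30, 55]  # expected wait + ride + walk time

m = Container()
r = Set(m, name="r", records=rides)
v = Parameter(m, name="v", domain=r, records=list(zip(rides, value)))
dur = Parameter(m, name="dur", domain=r, records=list(zip(rides, minutes)))

visit = Variable(m, name="visit", domain=r, type="binary")  # 1 if the ride is visited

time_budget = Equation(m, name="time_budget")
time_budget[...] = Sum(r, dur[r] * visit[r]) <= 240         # four hours in the park

plan = Model(m, name="plan", equations=[time_budget],
             problem="MIP", sense=Sense.MAX,
             objective=Sum(r, v[r] * visit[r]))
plan.solve()
print(visit.records)
```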
With real-world data, thoughtful assumptions, and rigorous optimization logic, this project not only showcased the versatility of GAMSPy and Python but also delivered a solution that’s both technically robust and very imaginative.\nMore than that, what makes this project shine is its completeness: from data crawling and cleaning, through visualization and distance modeling, all the way to implementing a smart solution using mixed-integer programming. Every step reflects a full-stack approach to decision-making under constraints, something we encounter in fields as varied as supply chain logistics, healthcare planning, and energy systems.\nThis is exactly the kind of innovation we hoped to inspire with our first GAMSPy student competition. Congratulations again to Yiheng, and a big shoutout to our talented runners-up, Allicia Moeller and Ahmad Heidari, for raising the bar so high. We’re excited to see how future participants will build on this year’s momentum and continue to push the boundaries of optimization with GAMS and GAMSPy.\n","excerpt":"We are happy to present the winning project of our 2024/2025 GAMSPy Student Competition. Yiheng Su, a PhD student from the University of Wisconsin-Madison, developed an optimization model to plan the most efficient and enjoyable day at Disneyland Magic Kingdom. Inspired by a personal trip, Su applied mixed-integer programming, data preprocessing, and visualization to create a smart route planner that balances time, ride popularity, and unpredictability. This project is the perfect example of a full-stack approach to modeling and decision-making under constraints, and showcases the practical power and accessibility of GAMSPy for solving real-life challenges.","ref":"/blog/2025/05/optimizing-the-perfect-day-at-disneyland-the-winning-gamspy-student-competition-project/","title":"Optimizing the Perfect Day at Disneyland: The Winning GAMSPy Student Competition Project"},{"body":"","excerpt":"","ref":"/categories/python/","title":"Python"},{"body":" Moving Towards a Green Future: GAMS, CGE Modeling, and Dual Carbon Goals On April 23, the academic event \u0026ldquo;Moving towards a Green Future: China\u0026rsquo;s Sustainable Development Path under the Dual Carbon Goals\u0026rdquo; was successfully held at Renmin University of China. 
Sponsored by the School of Ecology and Environment of Renmin University and co-organized by Beijing Youwan Information Technology Co., Ltd., the event brought together over 50 distinguished participants from universities, research institutions, and enterprises.\nWith a focus on \u0026ldquo;Technology Empowering Policies, Innovation-Driven Transformation,\u0026rdquo; the event showcased how GAMS and CGE modeling are playing pivotal roles in guiding China\u0026rsquo;s sustainable development and achieving the national dual carbon goals.\nAcademic Excellence: From Theory to Practical Application Lin Jie, a teacher jointly trained by China Agricultural University and Peking University, delivered an insightful lecture titled \u0026ldquo;CGE Model and GAMS Programming Basics.\u0026rdquo; His session broke down complex topics into three main modules:\nTheoretical Foundation: Lin Jie explained the significance of the Constant Elasticity of Substitution (CES) production function in CGE modeling, emphasizing its critical role in simulating technological substitutions under fluctuating energy prices, a key concern under dual carbon strategies.\nData Compilation and Management: He outlined the meticulous process of building a Social Accounting Matrix (SAM), integrating multi-source data from national statistics, energy bureaus, and industry reports to ensure robust model foundations.\nGAMS Programming in Action: Using real-world simulations like carbon tax impacts on the Beijing-Tianjin-Hebei region, Lin Jie demonstrated dynamic modeling in GAMS, showcasing variable definitions, equation scripting, and scenario-based solver execution.\nNext-Generation Policy Tools: GPSAS and Intelligent Analysis Systems Duan Meng, lecturer from the Institute of Quantitative and Technical Economics, Chinese Academy of Social Sciences, introduced cutting-edge advancements in policy simulation technology.\nGlobal Policy Simulation Analysis System (GPSAS): Integrating multi-source big data with machine learning algorithms, GPSAS offers dynamic policy effect predictions, enhancing traditional static analysis methods.\nIntelligent Analysis System: Duan showcased how natural language processing (NLP) enables automatic extraction of key policy parameters and simulates outcomes like carbon market dynamics with high precision and efficiency.\nThese innovations offer transformative tools for policymakers, enabling data-driven, real-time policy evaluations.\nA Platform for Cross-Disciplinary Collaboration The event featured an interdisciplinary dialogue among experts from:\nChinese Academy of Social Sciences Institute of Automation, Chinese Academy of Sciences China Electronics Technology Group Corporation Chinese Academy of Fiscal Sciences Discussions emphasized the critical need to integrate technical tools with policy design to achieve dual carbon goals. CGE models provide the necessary quantitative support for initiatives like carbon pricing, while systems like GPSAS help optimize market-based mechanisms such as carbon trading.\nParticipants praised the event as a \u0026ldquo;feast of intellectual exchange,\u0026rdquo; noting that it opened new academic horizons and inspired a fresh wave of interdisciplinary research initiatives.\nAs the official authorized distributor of GAMS and CGE VISUAL in China, Youwan Technology is at the forefront of supporting China\u0026rsquo;s green transformation. According to Ms. 
Xu Qingqing of Youwan Technology, the company offers a comprehensive service system including:\nSoftware Deployment: From local installations to cloud-based solutions. Customized Training: Tailored programs for universities, research institutes, and enterprises, covering everything from basic programming to complex policy simulations.\nProject Consulting: Expert advisory services providing quantitative support for dual-carbon policy formulation and energy-economic analysis.\nThis integrated approach ensures that academic research and practical policy applications are tightly interwoven, accelerating China\u0026rsquo;s progress toward a green and sustainable future.\nConclusion: A Clarion Call for a Greener Tomorrow The \u0026ldquo;Towards a Green Future\u0026rdquo; event at Renmin University showcased the synergistic power of technology and policy in tackling climate challenges. Through the application of CGE models, GAMS programming, and intelligent simulation tools, a robust technical roadmap is emerging to support China\u0026rsquo;s sustainable development ambitions.\nAs the clarion call for green transformation resounds, the integration of academic innovation, technological advancement, and policy design will continue to drive China towards a low-carbon, high-resilience future.\n","excerpt":"On April 23, the academic event \u0026ldquo;Moving towards a Green Future - China\u0026rsquo;s Sustainable Development Path under the Dual Carbon Goals\u0026rdquo; was successfully held at Renmin University of China. Sponsored by the School of Ecology and Environment of Renmin University and co-organized by Beijing Youwan Information Technology Co., Ltd., the event brought together over 50 distinguished participants from universities, research institutions, and enterprises.","ref":"/blog/2025/04/moving-towards-a-green-future-uone-tech-workshop/","title":"Moving Towards a Green Future - Uone Tech Workshop"},{"body":"","excerpt":"","ref":"/categories/workshop/","title":"Workshop"},{"body":"GAMS Goes Midwest: INFORMS Analytics+ 2025 in Indianapolis This April, the 2025 INFORMS Analytics+ Conference brought together the operations research and analytics community in the welcoming city of Indianapolis, Indiana. With more than 800 professionals in attendance, the event provided a great opportunity for connection, learning, and the exchange of fresh ideas.\nGAMS was pleased to be part of the event, share recent developments, and highlight our ongoing efforts to make optimization more approachable across both academia and industry.\nGAMS Exhibitor Workshop: Fast and Flexible Optimization in Python with GAMSPy Led by Adam Christensen and Steven Dirkse, our Exhibitor Workshop offered a practical introduction to GAMSPy, GAMS\u0026rsquo; new Python-native algebraic modeling tool. Participants explored how GAMSPy brings the speed and clarity of set-based algebraic modeling into Python’s flexible and familiar ecosystem.\nThe session walked attendees through the foundations of algebraic modeling, demonstrated real-world inspired workflows—from data ingestion and cleaning to model construction and solution—and showcased how GAMSPy overcomes the limitations of traditional object-oriented frameworks. 
A key highlight was the integration of machine learning models into optimization problems, offering a powerful way to build surrogate models for complex systems using Python tools and GAMS APIs.\nTechnology Showcase: Embedding Machine Learning Models with GAMSPy Also presented by Adam Christensen and Steven Dirkse, the Technology Showcase highlighted GAMSPy’s growing capabilities in embedding machine learning models directly into optimization workflows.\nThrough compelling examples, the session demonstrated how users can train a neural network, extract and verify the model, and embed it seamlessly into GAMSPy-based optimization models. The showcase also emphasized the value of using AMLs like GAMSPy for writing clearer, more maintainable code, and how it streamlines the develop–debug–deploy cycle by integrating with modern data pipelines and the broader GAMS toolset.\nBut the conference wasn’t all code and equations! Between sessions, the GAMS team had the chance to experience Indy’s rich cultural vibe—think iconic racing history, local eats, and some very passionate sports fans. The event proved once again that great things happen when optimization minds come together.\nAlready planning for next year? We are too. Make sure you don’t miss what’s next—subscribe to our newsletter and stay in the loop on all things GAMS!\n\u0026times; Previous Next Close ","excerpt":"At INFORMS Analytics+ 2025 in Indianapolis, GAMS showcased its latest advancements—including the Python-native modeling tool GAMSPy—through hands-on workshops and tech demos on embedding machine learning in optimization. The event offered an inspiring mix of technical insights and community connection, reaffirming the power of collaboration in the optimization space.","ref":"/blog/2025/04/gams-at-the-2025-informs-analytics-in-indianapolis/","title":"GAMS at the 2025 INFORMS Analytics+ in Indianapolis"},{"body":"","excerpt":"Alpro, a leading producer of plant-based food and beverages, faced the challenge of optimizing the complex energy management system at one of its production plants. Our consulting team at GAMS developed two custom models, along with a web-based graphical user interface (GUI), to streamline their daily operations—from data gathering to energy trading and consumption. As a result, process times were reduced by 75%, and costs were cut by 25%, helping Alpro achieve both its financial and environmental goals.","ref":"/consulting/alpro/","title":"Optimizing Energy Management at Alpro"},{"body":" GAMS at the EURO 2025 GAMS is at the 34th European Conference on Operational Research in Leeds, UK — June 22–25, 2025. Discover our latest innovations, join exclusive sessions, and explore how we optimize real-world decisions. See our program About EURO 2025 This year's event celebrates 50 years of the Association of European Operational Research Societies (EURO). On behalf of The Operational Research Society, the University of Leeds is honoured to be hosting the 34th European Conference on Operational Research in 2025. GAMS at the Conference Find us at our booth and during several highlight sessions throughout the conference. Talk to our experts, get hands-on demos, and learn how we help companies and institutions solve their toughest problems. We are there for you. Where to find us: 📍 Exhibition Area – Booth 6 📅 June 22–25, 2025 👨‍💼 Visit our booth and meet our experts. 
Our experts on site: André Schnabel\nJustine Broihan\nFrederik Proske\nMuhammet Soytürk\nTalks \u0026 Presentations Join our interactive sessions and discover real-world applications of mathematical optimization.\nSmart Production \u0026 Energy Bidding in Plant-Based Foods - An Alpro-GAMS Case Study Presented by: Justine Broihan, Frederik Fiand, Robin Schuchmann\nAs global demand for plant-based foods continues to rise, manufacturers like Alpro face growing complexity in managing production scheduling, coordinating distribution, and controlling energy costs. This talk provides a practical account of how Alpro uses a GAMS-based decision-support framework to streamline the daily operations of their energy production and to optimize conditional bidding on the day-ahead market.\nGAMSPy - A Glue Between High Performance Optimization and Convenience Presented by: Muhammet Abdullah Soyturk\nA typical optimization pipeline consists of many tasks such as mathematical modeling, data processing, and data visualization. While GAMS has been providing tools with great performance for mathematical modeling, Python and its vast ecosystem provide packages for data gathering, pre- and post-processing of the data, data visualization, and for developing necessary algorithms by building on existing ones. In this talk, we will present GAMSPy, a “glue” package that aims to combine these two environments to leverage the best of both worlds.\nEmbedding neural networks into optimization models with GAMSPy Presented by: André Schnabel, Hamdi Burak Usul\nGAMSPy is a powerful mathematical optimization package which integrates Python’s flexibility with GAMS’s modeling performance. Python features many widely used packages to specify, train, and use machine learning (ML) models like neural networks. GAMSPy bridges the gap between ML and conventional mathematical modeling by providing helper classes for many commonly used neural network layer formulations and activation functions. These allow a compact description of the network architecture that gets automatically reformulated into model expressions for the GAMSPy model. In this talk, we demonstrate how GAMSPy can seamlessly embed a pretrained neural network into an optimization model. We also explore the utility of GAMSPy’s automated reformulations for neural networks in various applications, such as adversarial input generation, model verification, customized training, and leveraging predictive capabilities within optimization models.\nGAMS Engine SaaS: A Cloud-Based Solution for Large-Scale Optimization Problems Presented by: Frederik Proske\nGAMS Engine SaaS is a cloud-based service that allows users to run GAMS jobs on a scalable and flexible infrastructure, currently provided by Amazon Web Services (AWS). It was launched in early 2022 and has since attracted a variety of customers who benefit from its features, such as horizontal auto-scaling, instance sizing, zero maintenance, and simplified license handling. GAMS Engine SaaS is especially suitable for workloads that require large amounts of compute power and can be adapted to many different scenarios. In this presentation, we show a case study of a large international consulting agency that uses GAMS Engine SaaS to run Monte-Carlo simulations of a large energy system model in response to varying climate change scenarios. 
We describe how they leverage the GAMS Engine API to submit and monitor their jobs, how they select the appropriate instance type for each job, and how they can use custom non-GAMS code on Engine SaaS. We also discuss the challenges and benefits of using GAMS Engine SaaS for this type of application, and provide some insights into the future development of the service.\nA parallelisation framework for solving challenging integrated long-haul and local vehicle routing problems Presented by: Stephen Maher\nThe integrated long-haul and local vehicle routing problem with an adaptive transportation network is a very challenging optimisation problem. The adaptive nature of the transportation network means that the resulting optimisation problem is extremely large and difficult to solve directly using general-purpose solvers. As such, the best approach for finding high-quality solutions is to use heuristics combined with a branch-and-bound algorithm. Our research has developed a parallelisation framework that concurrently executes heuristic and exact approaches to find high-quality solutions to the integrated long-haul and local vehicle routing problem. Within the parallelisation framework, we have attempted to solve the complete problem directly using a MIP solver and by applying Benders’ decomposition. The results will show that the use of parallelisation and Benders’ decomposition increases the scale of problems that can be solved and improves the upper and lower bounds that can be achieved.\nThe SCIP Optimization Suite 10 Presented by: Stefan Vigerske\nThis year, the SCIP Optimization Suite reaches its first double-digit major version number. Starting with an algebraic modeling language, a simplex solver, and a constraint integer programming framework, containing the world’s best non-commercial mixed-integer programming solver, it has evolved over the last 20+ years into a Swiss Army knife for anything where relaxations are subdivided, trimmed, generated dynamically, and eventually solved, be it on embedded, ordinary, or super-computers. The newest iteration brings major updates for the presolving library PaPILO, the generic decomposition solver GCG, and the branch-cut-and-price framework SCIP itself. In this talk, we will give a short overview of the current SCIP Optimization Suite ecosystem and catch a glimpse of the new features contributed by over 15 developers in the newest major release.\nPrevious Next Getting Started with GAMSPy Read the Installation and Licensing instructions in our documentation. Check out the Quickstart Guide to learn basic concepts. The GAMSPy open-source GitHub repository can be accessed here. Please use the GAMSPy section of our forum for questions and support. Leave Some Feedback and Receive a Free GAMS License We'd love to hear your thoughts about EURO 2025! 
Fill out this short form and receive an unrestricted GAMS license for two months.\nLoading… ","excerpt":"\u003cstyle\u003e\n .jumbotron-campaign {\n position: relative;\n background-image: url('Leeds_cut.jpg'); /* Exchange picture for campaign */\n background-size: cover;\n background-position: center;\n height: 400px;\n z-index: 1;\n overflow: hidden;\n }\n \n .jumbotron-campaign .overlay {\n content: \"\";\n background: rgba(0, 0, 0, 0.4); /* Dunkles Overlay */\n position: absolute;\n top: 0;\n left: 0;\n height: 100%;\n width: 100%;\n z-index: 1;\n }\n \n .jumbotron-campaign .container {\n position: relative;\n z-index: 2;\n }\n \n .carousel .card {\n min-height: 450px; /* oder 500px, je nach Inhalt */\n }\n .carousel-item {\n display: flex;\n align-items: center;\n justify-content: center;\n padding-left: 100px;\n padding-right: 100px;\n }\n\n .carousel-control-prev,\n .carousel-control-next {\n background: transparent !important;\n border: none;\n }\n\n .carousel-control-prev-icon,\n .carousel-control-next-icon {\n filter: brightness(0) saturate(100%) invert(47%) sepia(95%) saturate(480%) hue-rotate(359deg) brightness(98%) contrast(101%);\n /* Das ergibt ein GAMS-oranges Icon */\n width: 30px;\n height: 30px;\n }\n \n\u003c/style\u003e\n\n\n\n\u003c!-- Website Content and Information --\u003e\n\n\u003csection\u003e\n \u003cdiv class=\"full-width\"\u003e\n \u003cdiv class=\"jumbotron jumbotron-fluid jumbotron-campaign position-relative\"\u003e\n \u003cdiv class=\"overlay position-absolute w-100 h-100\"\u003e\u003c/div\u003e\n \n \u003cdiv class=\"container position-relative text-white\"\u003e\n \u003ch1 class=\"display-1 mt-3 text-white\"\u003eGAMS at the EURO 2025\u003c/h1\u003e\n \u003cp class=\"lead\"\u003e\n GAMS is at the 34th European Conference on Operational Research in Leeds, UK — June 22–25, 2025. \n \u003cbr\u003e\n Discover our latest innovations, join exclusive sessions, and explore how we optimize real-world decisions. \n \u003cbr\u003e \n \u003c/p\u003e","ref":"/campaign/euro_2025/","title":"EURO 2025 in Leeds"},{"body":" Find the right license setup for you Our Products GAMS A powerful modeling system for mathematical optimization.\nLearn More GAMSPy Seamlessly integrates GAMS with Python for flexible modeling.\nLearn More GAMS MIRO Turn GAMS models into interactive web applications.\nLearn More GAMS Engine Scalable solution to solve GAMS models on-premise or in the cloud.\nLearn More Licensing Options We offer tailored licensing solutions to meet your needs:\nUser-Based Licenses For individuals or small teams, local (work offline) or network (require Internet access) licenses are available.\nDeployment Solutions EngineOne Self-hosted optimization applications.\nEngine SaaS A pay-as-you-go solution hosted by GAMS on AWS that includes hardware and license fees.\n*For legacy use cases, we also offer machine-based licenses. Renew Maintenance If you already own a GAMS or GAMSPy license, please contact us at sales@gams.com to renew your maintenance.\nPlease provide the following details to help us assist you [ i ] Stay connected! Adding a private or alternate email ensures you won’t miss important information if your work address becomes unavailable. This is highly recommended! Choose your country... 
United States United Kingdom Canada Germany France India China Australia Japan South Korea Brazil Mexico Italy Spain Netherlands Switzerland Sweden Russia Turkey South Africa Saudi Arabia Singapore Indonesia Argentina Poland Belgium Other (please specify below) Products \u0026 Services you are interested in GAMS / GAMSPy GAMS Solvers GAMS MIRO GAMS Engine Consulting Services EngineOne (on-premise hosting)\nEngineSaaS (hosted by GAMS) Solvers you are interested in The following modules are available at no additional cost and are part of the GAMS/Base Module: CBC, CONVERT, DE, EXAMINER, GAMSCHK, GUSS, IPOPT, JAMS, KESTREL, LogMIP, MILES, NLPEC, and SHOT. Select Solvers \u0026times; Select Solvers Confirm Selection Number of cores on your machine 12 or fewer cores (default) More than 12 cores Enter the number of cores: Evaluation License Yes, I require an evaluation license. What else do we need to know? Submit \u0026times; Choose your Email Client Your license request email has been prepared. Please click one of the options below to open it in your preferred client:\nOpen in Gmail (Web) Open in Outlook (Web) Or use your **local** email program (e.g., Outlook app, Thunderbird):\nOpen Local Client (`mailto`) Additional Material and Links GAMS System: Flyer | Technical Documentation GAMSPy: Flyer | Technical Documentation GAMS Engine: Flyer | Technical Documentation GAMS MIRO: Flyer | Technical Documentation Consulting Services: Flyer | Further Information ","excerpt":"\u003c!-- Style Sheet --\u003e\n\u003cstyle\u003e\n\n .notes-field {\n width: 100%; /* Breite auf 100% des Containers setzen */\n height: 150px; /* Höhe anpassen */\n padding: 8px; /* Innenabstand für besseren Look */\n font-size: 16px; /* Größere Schrift */\n border: 1px solid #ccc; /* Dezentere Umrandung */\n border-radius: 5px; /* Abgerundete Ecken */\n }\n\n .licensing-options {\n text-align: left;\n padding: 20px;\n }\n \n /* Container für die Engine-Optionen */\n .engine-container {\n display: flex;\n gap: 20px;\n justify-content: left;\n flex-wrap: wrap; /* Falls der Platz zu klein wird */\n margin-top: 10px;\n }\n /* Große Box soll beide kleineren Boxen umschließen */\n .engine-box-large {\n background: #f4f4f4;\n padding: 20px;\n border-radius: 8px;\n box-shadow: 2px 2px 10px rgba(0, 0, 0, 0.1);\n text-align: left;\n width: 100%;\n }\n\n .license-container {\n display: flex;\n flex-wrap: wrap;\n justify-content: center;\n gap: 20px;\n margin-bottom: 20px;\n }\n \n /* Standard-Boxen (1/3 der Breite) */\n .license-box {\n background: #f9f9f9;\n padding: 15px;\n border-radius: 8px;\n flex: 1;\n min-width: 280px;\n max-width: 50%; /* Zwei Boxen nebeneinander */\n box-shadow: 2px 2px 10px rgba(0, 0, 0, 0.1);\n text-align: left;\n }\n \n /* Große Boxen nehmen exakt die Breite von zwei normalen Boxen ein */\n .license-box-large {\n flex: 2; /* Nimmt den Platz von 2 normalen Boxen ein */\n max-width: 100%;\n }\n \n /* Responsives Verhalten */\n @media (max-width: 768px) {\n .license-box,\n .license-box-large {\n max-width: 100%;\n flex: 1;\n }\n .engine-container {\n flex-direction: column;\n }\n }\n \n .license-box h3,\n .license-box-large h3 {\n margin-top: 0;\n }\n \n .license-box a,\n .license-box-large a {\n color: #e68a00;\n text-decoration: none;\n font-weight: bold;\n }\n \n .license-box a:hover,\n .license-box-large a:hover {\n text-decoration: underline;\n }\n \n /* Modal (Popup) Hintergrund */\n .modal {\n display: none; /* Standardmäßig versteckt */\n position: fixed;\n z-index: 1000;\n left: 0;\n top: 0;\n width: 
100%;\n height: 100%;\n background-color: rgba(0,0,0,0.5);\n display: flex;\n justify-content: center;\n align-items: center;\n }\n\n /* Modal-Design für bessere Lesbarkeit */\n .modal-content {\n background-color: white;\n padding: 20px;\n border-radius: 8px;\n width: 60%; /* Breiteres Popup für mehr Platz */\n max-width: 800px;\n text-align: left; /* Linksbündige Ausrichtung */\n }\n\n /* Checkboxen \u0026 Solver-Texte linksbündig */\n #solverCheckboxes {\n text-align: left; /* Alle Inhalte linksbündig */\n max-height: 400px; /* Falls viele Solver da sind: Scrollbar */\n overflow-y: auto;\n padding: 10px;\n }\n\n /* Labels mit Checkboxen verbessern */\n #solverCheckboxes label {\n display: flex;\n align-items: flex-start;\n gap: 10px; /* Abstand zwischen Checkbox und Text */\n margin-bottom: 8px; /* Etwas Abstand zwischen Einträgen */\n }\n\n /* Checkboxen größer und besser klickbar machen */\n #solverCheckboxes input[type=\"checkbox\"] {\n transform: scale(1.2); /* Checkbox leicht vergrößern */\n margin-top: 3px;\n }\n\n /* Breitere Solver-Textbeschreibungen */\n #solverCheckboxes label span {\n flex-grow: 1; /* Damit sich der Text anpasst */\n max-width: 90%; /* Breite des Textes steuern */\n }\n\n /* Schließen-Button */\n .close {\n float: right;\n font-size: 24px;\n cursor: pointer;\n }\n\n .close:hover {\n color: red;\n }\n\n .product-container {\n display: flex;\n flex-wrap: wrap;\n justify-content: space-around;\n gap: 20px;\n margin-top: 20px;\n }\n\n .product-box {\n width: 250px;\n padding: 20px;\n text-align: center;\n border: 1px solid #ddd;\n border-radius: 10px;\n box-shadow: 2px 2px 10px rgba(0, 0, 0, 0.1);\n transition: transform 0.2s ease-in-out;\n background-color: #f4f4f4;\n }\n\n .product-box:hover {\n transform: translateY(-5px);\n }\n\n .product-box img {\n height: 80px; /* Einheitliche Logo-Größe */\n margin-bottom: 10px;\n }\n\n .product-box h3 {\n font-size: 1.2em;\n margin-bottom: 10px;\n }\n\n .product-box a {\n display: inline-block;\n margin-top: 10px;\n text-decoration: none;\n color: #e68a00;\n font-weight: bold;\n }\n\n .product-box a:hover {\n text-decoration: underline;\n }\n\n .solver-label {\n display: flex;\n align-items: center;\n gap: 8px;\n margin-bottom: 5px;\n }\n \n .info-icon {\n cursor: pointer;\n font-size: 14px;\n color: #e68a00;\n display: inline-block;\n margin-left: 8px;\n }\n \n .tooltip {\n position: absolute;\n background-color: white;\n color: black;\n padding: 8px 12px; /* Etwas mehr Padding für bessere Lesbarkeit */\n border-radius: 6px;\n font-family: 'Montserrat', sans-serif;\n font-size: 14px; /* Erhöhe die Schriftgröße */\n max-width: 250px; /* Begrenzung der Breite */\n word-wrap: break-word; /* Zeilenumbruch bei langen Wörtern */\n white-space: normal; /* Erlaubt Umbruch zwischen Wörtern */\n z-index: 9999; /* Tooltip bleibt sichtbar */\n pointer-events: none;\n display: none;\n box-shadow: 0px 0px 5px rgba(0, 0, 0, 0.3);\n }\n\n .solver-table {\n display: flex;\n flex-direction: column;\n width: 100%;\n border-collapse: collapse;\n }\n \n .solver-header {\n display: grid;\n grid-template-columns: 5% 25% 55% 10%; /* Anordnung: Checkbox | Name | Beschreibung | ℹ️ */\n font-weight: bold;\n background-color: #f4f4f4;\n padding: 10px;\n text-align: left;\n border-bottom: 2px solid #ccc;\n }\n \n .solver-item {\n display: grid;\n grid-template-columns: 5% 25% 55% 10%; /* Gleiche Spalten wie Header */\n padding: 10px;\n align-items: center;\n border-bottom: 1px solid #ddd;\n }\n \n .solver-checkbox {\n display: flex;\n 
justify-content: center;\n }\n \n .solver-link {\n color: #007bff;\n text-decoration: none;\n }\n \n .solver-link:hover {\n text-decoration: underline;\n }\n \n .solver-description {\n color: #333;\n }\n \n .solver-info {\n text-align: center;\n }\n\n .tooltip-box {\n position: absolute;\n top: 50%;\n right: 20px; /* place inside input, on the right */\n transform: translateY(-50%);\n cursor: pointer;\n color: orange;\n font-weight: bold;\n font-size: 14px;\n }\n\n .tooltip-box .tooltip-text {\n visibility: hidden;\n width: 220px;\n background-color: orange;\n color: #fff;\n text-align: left;\n padding: 8px;\n border-radius: 5px;\n position: absolute;\n z-index: 1;\n top: -5px;\n left: 120%; /* tooltip appears to the right */\n opacity: 0;\n transition: opacity 0.3s;\n }\n\n .tooltip-box:hover .tooltip-text {\n visibility: visible;\n opacity: 1;\n }\n\u003c/style\u003e\n\n\u003c!-- Script for Online Form --\u003e\n\u003cscript\u003e\n\n // Email Build Up\n function sendEmail(event) {\n event.preventDefault(); // Prevents page reload\n \n const form = document.getElementById(\"contactForm\");\n \n // Ensure the core input field is enabled before collecting data\n const coreInputField = document.getElementById(\"coreCount\");\n const wasDisabled = coreInputField.disabled; // Track if it was disabled\n coreInputField.disabled = false; // Temporarily enable it\n \n const formData = new FormData(form); // Collect form data\n \n // Manually set CoreCount if missing\n if (!formData.has(\"CoreCount\")) {\n formData.append(\"CoreCount\", coreInputField.value);\n }\n \n coreInputField.disabled = wasDisabled; // Restore previous state\n \n // Correctly get the selected country\n const countrySelect = document.getElementById(\"country\");\n const otherCountryInput = document.getElementById(\"otherCountry\");\n let selectedCountry = countrySelect.value === \"Other\" ? otherCountryInput.value.trim() : countrySelect.value;\n\n // **Load the template from file**\n fetch(\"email_template.txt\")\n .then(response =\u003e {\n if (!response.ok) {\n throw new Error(`HTTP error! Status: ${response.status}`);\n }\n return response.text();\n })\n .then(emailTemplate =\u003e {\n let selectedProducts = [];\n\n // Properly collect all checked product checkboxes\n document.querySelectorAll('input[name=\"Products\"]:checked').forEach(cb =\u003e {\n selectedProducts.push(cb.value);\n });\n\n // If \"GAMS Engine\" is selected, attach the chosen engine option\n const engineCheckbox = document.getElementById(\"gamsEngineCheckbox\");\n if (engineCheckbox.checked) {\n const selectedEngineOption = document.querySelector('input[name=\"EngineOption\"]:checked');\n if (selectedEngineOption) {\n // Find \"GAMS Engine\" in selectedProducts and modify it\n const index = selectedProducts.indexOf(\"GAMS Engine\");\n if (index !== -1) {\n selectedProducts[index] = `GAMS Engine (${selectedEngineOption.value})`; // Attach engine option to \"GAMS Engine\"\n } else {\n selectedProducts.push(`GAMS Engine (${selectedEngineOption.value})`); // Fallback: Add separately if missing\n }\n }\n }\n\n // Convert selected products into a single string, comma-separated\n let productListString = selectedProducts.length \u003e 0 ? 
selectedProducts.join(\", \") : \"None selected.\";\n emailTemplate = emailTemplate.replace(\"{Products}\", productListString);\n \n // Fill the template with form data (excluding checkboxes)\n formData.forEach((value, key) =\u003e {\n if (form.elements[key] \u0026\u0026 form.elements[key].type !== \"checkbox\") {\n emailTemplate = emailTemplate.replace(new RegExp(`{${key}}`, \"g\"), value);\n }\n });\n \n // Ensure the selected country is inserted\n emailTemplate = emailTemplate.replace(\"{Country}\", selectedCountry || \"Not provided\"); \n\n // Ensure core count is properly included\n let coreValue = formData.get(\"CoreCount\") || \"Default\"; // Make sure name matches the form field\n emailTemplate = emailTemplate.replace(\"{CoreCount}\", coreValue);\n\n // Ensure additional notes are included\n emailTemplate = emailTemplate.replace(\"{Additional Notes}\", formData.get(\"Additional Notes\") || \"No additional comments.\");\n\n // Check if the Evaluation License checkbox is checked\n const evaluationCheckbox = document.getElementById(\"evaluationLicenseCheckbox\");\n let evaluationLicenseValue = evaluationCheckbox.checked ? \"Yes\" : \"No\";\n\n // Replace the placeholder in the email template\n emailTemplate = emailTemplate.replace(\"{EvaluationLicense}\", evaluationLicenseValue);\n\n // Ensure all chosen solvers are properly included\n emailTemplate = emailTemplate.replace(\"{Solvers}\", getSelectedSolversForEmail());\n\n // **Create the final Mailto URL**\n const subject = encodeURIComponent(\"GAMS License Inquiry - Contact Form\");\n const body = encodeURIComponent(emailTemplate);\n \n // Generate all three possible email links\n const gmailUrl = `https://mail.google.com/mail/?view=cm\u0026fs=1\u0026to=sales@gams.com\u0026su=${subject}\u0026body=${body}`;\n const outlookUrl = `https://outlook.live.com/owa/?path=/mail/action/compose\u0026to=sales@gams.com\u0026subject=${subject}\u0026body=${body}`;\n const mailtoUrl = `mailto:sales@gams.com?subject=${subject}\u0026body=${body}`;\n\n // Inject the links into the hidden modal/prompt\n document.getElementById(\"openMailtoLink\").href = mailtoUrl;\n document.getElementById(\"openGmailLink\").href = gmailUrl;\n document.getElementById(\"openOutlookLink\").href = outlookUrl;\n \n // Show the modal to the user\n document.getElementById(\"emailSelectionModal\").style.display = \"flex\";\n\n })\n .catch(error =\u003e {\n console.error(\"Error loading the email template:\", error);\n alert(\"The email template could not be loaded.\");\n });\n } \n \n // Other Country Option\n function toggleOtherField() {\n var countrySelect = document.getElementById(\"country\");\n var otherCountryDiv = document.getElementById(\"otherCountryDiv\");\n var otherCountryInput = document.getElementById(\"otherCountry\");\n \n if (countrySelect.value === \"Other\") {\n otherCountryDiv.style.display = \"block\";\n if (otherCountryInput) {\n otherCountryInput.required = true;\n }\n } else {\n otherCountryDiv.style.display = \"none\";\n if (otherCountryInput) {\n otherCountryInput.required = false;\n otherCountryInput.value = \"\"; // Zurücksetzen der Eingabe\n }\n }\n }\n\n // Core Options\n function toggleCoreInput(show) {\n var coreInputDiv = document.getElementById(\"coreInput\");\n var coreInputField = document.getElementById(\"coreCount\");\n \n if (show) {\n coreInputDiv.style.display = \"block\";\n coreInputField.disabled = false; // Enable input\n } else {\n coreInputDiv.style.display = \"none\";\n coreInputField.disabled = true; // Disable input\n 
coreInputField.value = \"\"; // Reset value to default\n }\n }\n\n // Engine Options\n function toggleEngineOptions() {\n let engineCheckbox = document.getElementById(\"gamsEngineCheckbox\");\n let engineOptions = document.getElementById(\"engineOptions\");\n \n if (engineCheckbox.checked) {\n engineOptions.style.display = \"block\";\n } else {\n engineOptions.style.display = \"none\";\n // Radio-Buttons zurücksetzen, wenn GAMS Engine abgewählt wird\n let radios = engineOptions.querySelectorAll(\"input[type='radio']\");\n radios.forEach(radio =\u003e radio.checked = false);\n }\n }\n\n // Funktion, um gewählte Solver ins E-Mail-Template einzufügen\n function getSelectedSolversForEmail() {\n return document.getElementById(\"selectedSolversText\").value || \"No solvers selected\";\n }\n\n\u003c/script\u003e \n\n\u003c!-- Script for Online Form (ONLY SOLVERS)--\u003e\n\u003cscript type=\"module\"\u003e\n import { solvers } from \"./solvers.js\"; // Importiere die Solver-Daten\n\n document.addEventListener(\"DOMContentLoaded\", () =\u003e {\n const solverModal = document.getElementById(\"solverModal\");\n const openSolverModal = document.getElementById(\"openSolverModal\");\n const closeModal = document.querySelector(\".close\");\n const solverCheckboxes = document.getElementById(\"solverCheckboxes\");\n const confirmSolvers = document.getElementById(\"confirmSolvers\");\n const selectedSolversText = document.getElementById(\"selectedSolversText\");\n\n if (!solverModal || !openSolverModal || !closeModal || !solverCheckboxes || !confirmSolvers || !selectedSolversText) {\n console.error(\"Ein oder mehrere DOM-Elemente wurden nicht gefunden!\");\n return;\n }\n \n // Tooltip-Element erstellen (global für alle Info-Icons)\n const tooltip = document.createElement(\"div\");\n tooltip.className = \"tooltip\";\n tooltip.style.display = \"none\";\n document.body.appendChild(tooltip);\n\n // Event-Listener für Tooltip-Anzeige\n solverCheckboxes.addEventListener(\"mouseover\", (event) =\u003e {\n if (event.target.classList.contains(\"info-icon\")) {\n tooltip.innerText = event.target.dataset.info;\n tooltip.style.display = \"block\";\n\n const rect = event.target.getBoundingClientRect();\n tooltip.style.left = `${rect.left + window.scrollX + 20}px`;\n tooltip.style.top = `${rect.top + window.scrollY}px`;\n\n // Tooltip verschwindet, wenn die Maus das Icon verlässt\n event.target.addEventListener(\"mouseleave\", () =\u003e {\n tooltip.style.display = \"none\";\n });\n\n }\n });\n\n // Solver nach Lizenztyp gruppieren\n const solverGroups = {\n commercial: [],\n openSource: [],\n solverLink: []\n };\n\n Object.values(solvers).forEach(solver =\u003e {\n const solverLink = solver.docu \n ? `\u003ca href=\"${solver.docu}\" target=\"_blank\" class=\"solver-link\"\u003e${solver.name}\u003c/a\u003e` \n : solver.name;\n \n // Info-Box (wenn Info vorhanden ist)\n const tooltipBox = solver.info \n ? 
`\u003cspan class=\"info-icon\" data-info=\"${solver.info}\"\u003eℹ️\u003c/span\u003e` \n : \"\";\n \n const checkboxHTML = `\n \u003cdiv class=\"solver-item\"\u003e\n \u003cdiv class=\"solver-checkbox\"\u003e\n \u003cinput type=\"checkbox\" value=\"${solver.name}\"\u003e\n \u003c/div\u003e\n \u003cdiv class=\"solver-name\"\u003e${solverLink}\u003c/div\u003e\n \u003cdiv class=\"solver-description\"\u003e${solver.description}\u003c/div\u003e\n \u003cdiv class=\"solver-info\"\u003e${tooltipBox}\u003c/div\u003e\n \u003c/div\u003e`;\n \n if (solver.license === \"commercial\") {\n solverGroups.commercial.push(checkboxHTML);\n } else if (solver.license === \"open source\") {\n solverGroups.openSource.push(checkboxHTML);\n } else if (solver.license === \"solver link\") {\n solverGroups.solverLink.push(checkboxHTML);\n }\n });\n\n // Build the HTML for the solver checkboxes\n solverCheckboxes.innerHTML = `\n \u003cstrong\u003eCommercial Solvers\u003c/strong\u003e\u003cbr\u003e${solverGroups.commercial.join('')}\n \u003cbr\u003e\u003cstrong\u003eOpen-Source Solvers\u003c/strong\u003e\u003cbr\u003e${solverGroups.openSource.join('')}\n \u003cbr\u003e\u003cstrong\u003eSolver-Links\u003c/strong\u003e\u003cbr\u003e${solverGroups.solverLink.join('')}\n `;\n\n // Tooltip logic\n solverCheckboxes.addEventListener(\"mouseover\", (event) =\u003e {\n if (event.target.classList.contains(\"info-icon\")) {\n tooltip.innerText = event.target.dataset.info;\n tooltip.style.display = \"block\";\n\n const rect = event.target.getBoundingClientRect();\n tooltip.style.left = `${rect.left + window.scrollX + 20}px`;\n tooltip.style.top = `${rect.top + window.scrollY}px`;\n }\n });\n\n solverCheckboxes.addEventListener(\"mouseleave\", (event) =\u003e {\n if (event.target.classList.contains(\"info-icon\")) {\n tooltip.style.display = \"none\";\n }\n });\n\n // Open the popup\n openSolverModal.addEventListener(\"click\", () =\u003e {\n solverModal.style.display = \"flex\";\n });\n\n // Close the popup\n closeModal.addEventListener(\"click\", () =\u003e {\n solverModal.style.display = \"none\";\n });\n\n // Confirming only stores the selection and closes the popup\n confirmSolvers.addEventListener(\"click\", (event) =\u003e {\n event.preventDefault();\n const selected = Array.from(solverCheckboxes.querySelectorAll(\"input:checked\"))\n .map(cb =\u003e cb.value);\n selectedSolversText.value = selected.length \u003e 0 ? 
selected.join(\", \") : \"No solvers selected\";\n solverModal.style.display = \"none\"; // Popup schließen\n });\n\n // Schließen, wenn man außerhalb des Popups klickt\n window.addEventListener(\"click\", (event) =\u003e {\n if (event.target === solverModal) {\n solverModal.style.display = \"none\";\n }\n });\n });\n\n\u003c/script\u003e\n\n\u003c!-- Website Content and Information --\u003e\n\u003csection\u003e\n \u003cdiv class=\"full-width\"\u003e\n \u003cdiv class=\"jumbotron jumbotron-fluid\"\u003e\n \u003cdiv class=\"container\"\u003e\n \u003ch1 class=\"display-4\"\u003eFind the right license setup for you\u003c/h1\u003e\n \u003c!-- Product Boxes --\u003e\n \u003ch2\u003eOur Products\u003c/h2\u003e\n \u003cdiv class=\"product-container\"\u003e \n \u003c!-- GAMS Box --\u003e\n \u003cdiv class=\"product-box\"\u003e\n \u003cimg src=\"GAMS.png\" alt=\"GAMS Logo\"\u003e\n \u003ch3\u003eGAMS\u003c/h3\u003e\n \u003cp\u003eA powerful modeling system for mathematical optimization.\u003c/p\u003e","ref":"/buy_gams/","title":"Get A Quote"},{"body":" Area: Energy\nProblem class: MIP\nTechnologies: SaaS, GAMS, GAMS MIRO\nAlpro: Optimizing Energy Management Introduction Alpro is one of Europe’s leaders in the production of plant-based products. Based in Belgium, France, and the UK, they market a wide range of food and beverages made of soy, almonds, hazelnuts, cashews, rice, oats, and coconut. With a big operation in place, their factories handle complex processes that demand substantial energy, and optimizing its management is an essential, yet often quite difficult task.\nIn this particular project, Alpro partnered with our consulting team at GAMS to maximize the efficiency of one of their plants’ energy systems. Our shared goal was simple: to design a tool that would help streamline their day-to-day energy operations, and facilitate a better and more data-driven decision making.\nWorking together, we developed two custom optimization models, along with a web-based graphical user interface (GUI) that drastically transformed their workflow. As a result, Alpro not only gained a competitive advantage in their energy management, but also took a significant step towards improving their operational costs and achieving their sustainability goals at the same time.\nThe Problem Managing energy systems in industrial facilities is often a very complicated task. In Alpro’s case, they needed to integrate their own electricity and steam production from generators, boilers, and solar panels, with outside energy sources; all in order to meet the fluctuating demand of their daily operation.\nThis presented a huge challenge for the team.\nConstantly manipulating their energy assets to meet demand was a laborious and complicated task on its own. 
But at the same time, the option of buying energy from the grid, which sometimes is cheaper than producing their own, or selling their generation surplus back to them for a profit, added a whole new level of complexity.\nNow factor in that grid energy has constant price variations, that their solar panels depend on weather conditions, that their facility’s energy demand changes throughout the day, and that they have to follow a maintenance schedule, and what you get is a highly intricate and tedious to manage system.\nThe Solution The path to a simpler and more efficient system was clear: to mathematically optimize the process –to take all variables and constraints as data input, and build a model that could suggest the best energy management solution at any given time.\nIn close collaboration with Alpro, our team of experts broke the problem down into two parts and developed a custom model to deal with each step. Then, we integrated both models into a single, user friendly, and intuitive GUI.\nHere’s how it works.\nFig 1. GUI data input screen. Users control multiple widgets to set the information and conditions for the optimization models. (dummy data) The first optimization model is in charge of gathering data from multiple sources, and generating conditional bids for the electricity day-ahead market –whether Alpro should buy, or sell energy, and under what conditions. Information from spot price forecasts, solar energy forecasts, factory demand projections, and maintenance schedules is automatically collected and processed; then, Alpro’s operational goals and financial considerations are factored in, and the resulting bids are submitted to the day-ahead auction.\nThis first step of the process saves Alpro valuable time, all while ensuring that their energy planning is based on the best and most up-to-date information.\nFig 2. The first model produces conditional bids for the day ahead market. In this example: for cleared spot prices below 30€/MWh offer to buy 0.8MW, for prices between 30€/MWh and 69.99€/MWh do not sell or buy, and for prices above 70€ offer to sell 0.8MW. (dummy data) When the day-ahead energy market clears, the second model kicks in.\nThis model is in charge of retrieving the accepted bids from the market, and then using that information to optimize the load of each of Alpro’s energy generating assets. This is done taking into account the factory’s energy consumption prediction, and also the expected production from their own solar plant. By finding the optimal solution, Alpro not only covers their energy needs, but also makes sure to take full advantage of their renewable generation.\nThe second model’s output is information on how to operate each asset on a 15-minute interval basis. This detailed data is automatically passed onto their control systems, which take over the job for the rest of the day, ensuring constant and optimal management with minimal work.\nFigure 3. Result dashboard on how to operate each energy generation asset over time, and various KPIs on the right. (dummy data)) Figure 4. Information on the mix of energy production used to meet the factory demand. (dummy data)) Conclusion With the two models working together, there’s a seamless integration between information, energy trading, and operational execution. More than that, while all the complicated math happens in the background, the data and results are clearly displayed in an intuitive dashboard. 
This new GUI provides Alpro with clear insights into asset operation, day-ahead trades, solar production, and multiple key performance indicators. It’s an easy, user-friendly interface that supports informed decisions and the continual improvement of every aspect of their energy usage.\n“Thanks to this new solution, our employees can control our energy assets in a quarter of the time it took before”, said Dominique Hamerlinck, Energy Manager at Alpro. And it isn’t only about speed and efficiency; Dominique emphasized that one of the main incentives to deploy GAMS models was to minimize costs and reduce their environmental footprint. “By optimizing the process from beginning to end, we can make the most out of our new solar panels, maximize the amount of energy from renewable sources used in our factory, and bring our operational costs down at the same time”.\nSince Alpro deployed their optimization models, the energy management process is 3 times faster than before, while costs have been reduced by 25%.\n","excerpt":"Alpro, a European leader in plant-based food products, faced the challenge of optimizing its factory’s energy management. With the help of our consulting team, the company implemented two custom models and a graphical user interface, streamlining its operational pipeline from data gathering to daily energy trading and consumption.","ref":"/stories/alpro/","title":"Alpro: Optimizing Energy Management"},{"body":"GAMS Technical Support In case of technical problems please contact the GAMS support at support@gams.com .\nTo speed up the process, please provide the following information:\nThe GAMS Distribution and the solver you are using. Your GAMS access code or your GAMS license-id (DC.. in the fifth line of your GAMS license file). See example below: The platform you are working on (Windows, macOS, Linux, 32 or 64 bit) A copy of the GAMS log and lst file. Any other information that may help us to reproduce the problem, for example a (compressed) GAMS file. In case there are confidential issues, you may send us a scalar version of your model, click here for further instructions . If requested, we are happy to sign a non-disclosure agreement (NDA), also known as a confidentiality agreement (CA). How to send files Via email Our e-mail system can handle attachments up to a size of 20MB and does not accept executable files (such as files ending in .exe, .bat, .ps, or .cmd), even if they are sent in a zipped (.zip, .tar, .tgz, .taz, .z, .gz) format. Any message of this type sent to us will be bounced back to you. Adding an underscore to the file attachment (e.g. file.zip_) will help. Using our upload facility If you want to send us a larger file you can use our upload facility. Make sure to send an accompanying email to support@gams.com to let us know what you sent. In general, we will get back to you within 8 business hours. If you do not receive a reply within 16 business hours, please see our contact page for alternative ways of contacting us.\n","excerpt":"\u003ch1 id=\"gams-technical-support\"\u003eGAMS Technical Support\u003c/h1\u003e\n\u003cp\u003eIn case of technical problems please contact the GAMS support at \u003ca href=\"mailto:support@gams.com\"\u003esupport@gams.com\u003c/a\u003e\n.\u003c/p\u003e\n\u003cp\u003eTo speed up the process, please provide the following information:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eThe GAMS Distribution and the solver you are using.\u003c/li\u003e\n\u003cli\u003eYour GAMS access code or your GAMS license-id (DC.. 
in the fifth line of your GAMS license file). See example below:\u003c/li\u003e\n\u003c/ul\u003e\n\u003cfigure\u003e\u003cimg src=\"/img/license_example_2025.png\" width=\"700px\"\u003e\n\u003c/figure\u003e\n\n\u003cul\u003e\n\u003cli\u003eThe platform you are working on (Windows, macOS, Linux, 32 or 64 bit)\u003c/li\u003e\n\u003cli\u003eA copy of the GAMS log and lst file.\u003c/li\u003e\n\u003cli\u003eAny other information that may help us to reproduce the problem, for example a (compressed) GAMS file.\u003c/li\u003e\n\u003cli\u003eIn case there are confidential issues, you may send us a scalar version of your model, \u003ca href=\"/latest/docs/S_CONVERT.html\" target=\"_blank\"\u003eclick here for further instructions\u003c/a\u003e\n.\u003c/li\u003e\n\u003cli\u003eIf requested, we are happy to sign a non-disclosure agreement (NDA), also known as a confidentiality agreement (CA).\u003c/li\u003e\n\u003c/ul\u003e\n\u003cdiv class=\"card my-5\"\u003e\n\u003cdiv class=\"card-header\"\u003e\u003ch2 class=\"mt-1\"\u003eHow to send files\u003c/h2\u003e\u003c/div\u003e\n\u003cdiv class=\"card-body\"\u003e\n\u003ch3\u003eVia email\u003c/h3\u003e\n\u003cp\u003e\n Our e-mail system can handle attachments up to a size of 20MB and does not accept executable files (such as files ending in .exe, .bat, .ps, or .cmd), even if they are sent in a zipped (.zip, .tar, .tgz, .taz, .z, .gz) format. Any message of this type sent to us will be bounced back to you. Adding an underscore to the file attachment (e.g. file.zip_) will help.\n\u003c/p\u003e","ref":"/support/","title":"Technical Support"},{"body":" Area: Climate Modeling, Policy\nProblem class: Large-scale Monte Carlo\nTechnologies: SaaS, GAMS, GAMS Engine\nThe Rhodium Climate Outlook Executive Summary This case study examines the successful use of GAMS Engine SaaS by Rhodium Group to develop a comprehensive climate outlook ahead of the COP28 summit in Dubai. Rhodium Group leveraged advanced modeling techniques, including the RHG-GEM model and Monte Carlo simulations, to provide robust projections of energy, emissions, and temperature through the end of the century. GAMS Engine SaaS played a crucial role in providing flexible and extensive computational resources to support these simulations.\nKey Points Climate Change Projections: Climate change remains one of the most pressing challenges of our time, requiring detailed and accurate projections to guide policymaking. Rhodium Group\u0026rsquo;s climate outlook aims to provide these projections by accounting for various uncertainties in economic and population growth, commodity prices, and technology costs.\nRHG-GEM Model: The RHG-GEM model, initially a modified version of the U.S. Energy Information Administration’s (EIA) World Energy Modeling System, and then developed into a completely new model, provides insights into global energy and climate dynamics across 16 world regions. It integrates the Electric and Emerging Clean Technology module (REEM) and its outputs are linked to the Finite Amplitude Impulse Response (FaIR) model to deliver detailed projections.\nMonte Carlo Analysis: To address uncertainties, Rhodium Group employed a Monte Carlo Analysis with Latin hypercube sampling, which involved 4726 simulations in 2023 and 4950 simulations in 2024, each requiring approximately three hours of runtime. The scale of this effort demanded a highly scalable cloud solution.\nGAMS Engine SaaS: GAMS Engine SaaS provided the necessary computational power and flexibility for the task. 
Horizontal scalability and custom backends allowed Rhodium to run 1200 simulations simultaneously and analyze their data in the cloud, significantly improving efficiency and performance.\nHigh Uptime: Despite the high computational demands, GAMS Engine SaaS maintained high reliability and uptime, demonstrating the robustness of its fundamental design and infrastructure.\nCost Efficiency: The pay-as-you-use licensing model of GAMS Engine SaaS allows users to avoid substantial investments in hardware and maintenance, providing a cost-effective solution for organizations with varying computational needs.\nConclusion The use of GAMS Engine SaaS by Rhodium Group exemplifies how advanced computational tools and scalable cloud solutions can address the complex challenges of climate modeling. This enabled Rhodium Group to produce critical climate projections, and as an added benefit, also drove significant scalability improvements in GAMS Engine SaaS for all users.\nClimate Change: A Defining Challenge for the Future Climate change stands as one of the most significant challenges of the coming decades, posing threats to ecosystems, economies, and societies all around the world. As global temperatures rise and the impact of climate change becomes more obvious, there’s an urgent need for fast and informed action from policymakers and stakeholders across multiple sectors; and high-quality, robust projections are at the heart of the issue.\nThese projections are essential to develop strategies that can mitigate risks and capitalize on opportunities as we transition into a sustainable future. However, they are also inherently accompanied by uncertainties stemming from various sources:\nEconomic and Population Growth: Future economic conditions and demographic trends significantly influence greenhouse gas emissions and energy consumption, yet they’re quite difficult to predict.\nCommodity Prices: Fluctuations in the prices of energy commodities, such as oil and gas, can impact the feasibility and adoption rate of alternative energy sources.\nClean Technology Costs: The costs of developing and deploying clean technologies are subject to rapid change, influenced by technological advancements, policy decisions, and market dynamics.\nRHG-GEM Rhodium\u0026rsquo;s main tool for delivering climate projections is the Global Energy Model (RHG-GEM), which is an advanced adaptation of the U.S. Energy Information Administration\u0026rsquo;s (EIA) World Energy Modeling System (WEPS). This model is designed to provide detailed insights into global energy and climate dynamics by dividing the world into 16 distinct regions and allowing for region-specific analysis. As an output, the RHG-GEM produces projections for energy use, emissions, and temperature changes through the end of the century.\nThese projections are essential for understanding the long-term impacts of current and future energy and climate policies. By simulating various scenarios, RHG-GEM equips stakeholders with the insights needed to anticipate trends and make informed decisions to effectively address climate change.\nA key component of RHG-GEM is the Electric and Emerging Clean Technology module (REEM). Developed using the GAMS-based TIMES model framework, REEM applies a Linear Programming (LP) methodology to perform multiple detailed analyses:\nGeneration Capacity: Estimates the energy production potential across various sources. Fuel Consumption: Evaluates the quantity and types of fuels required to satisfy energy demands. 
Emissions: Projects greenhouse gas and pollutant emissions from different energy sources. Prices: Forecasts the costs associated with energy production and consumption. Emerging Technologies: Assesses the competitiveness and potential of new technologies to replace existing ones, ensuring the model remains relevant and aligned with the latest advancements In addition to energy and emissions forecasts, RHG-GEM provides comprehensive climate policy projections, mapping the anticipated evolution of climate action in response to political and socioeconomic changes. This capability enables stakeholders to address critical questions such as “What trajectory are we on?” by offering a detailed view of the potential future impacts of current policies and actions.\nWith its sophisticated and dynamic modeling features, RHG-GEM is an essential resource for policymakers, researchers, and industry leaders seeking to understand the complexities of climate change and design effective strategies for a sustainable future.\nTackling Uncertainties - Addressing the Complexities of Climate Modeling In order to develop a reliable climate outlook, Rhodium Group had to account for numerous uncertainties intrinsic to economic model inputs, such as population developments, commodity prices, and technology costs. To manage these uncertainties effectively, they employed Monte Carlo Analysis, a statistical technique that involves running a large number of simulations to capture a range of possible outcomes; all while ensuring thorough and evenly distributed input sampling of the input parameter space using Latin hypercube sampling\nThe scope of the Rhodium Group\u0026rsquo;s efforts is evident in the volume of simulations conducted. A total of 4,725 simulations were performed, each requiring approximately three hours to complete. Running this many simulations on local hardware is impossible for most organizations due to the substantial computational demands. A scalable cloud solution was essential.\nIn late 2022, Rhodium Group partnered with GAMS to address their significant computational needs. Anticipating the growing demand for large-scale computing, GAMS had developed the Engine SaaS to simplify and enhance cloud-based modeling. This tool proved to be the ideal solution.\nGAMS Engine SaaS uses Kubernetes on AWS infrastructure, delivering a robust and scalable system for handling heavy computational workloads, which enabled Rhodium to execute their intricate model code—comprising Python glue code, compiled Fortran binaries, and GAMS code—seamlessly, with up to 1,200 simulations running simultaneously. This high-throughput capacity was critical in meeting the stringent requirements of their climate modeling efforts, providing timely insights ahead of the COP28 summit in Dubai.\nThe simulations present a stark outlook for the planet\u0026rsquo;s climate: they project that the global temperature rise is likely to exceed the crucial 2℃ threshold, emphasizing the urgent need for intensified climate action. For further details and analysis, Rhodium offers comprehensive articles and reports through the following links:\nThe Rhodium Climate Outlook Technical Appendix Resulting Improvements to Engine SaaS Working with a customer like Rhodium, that required exceptionally high compute demands, drove significant enhancements to Engine SaaS. Prior to this collaboration, we were well-prepared in terms of information security and vertical scalability. 
However, Rhodium\u0026rsquo;s extensive parallel computational requirements for Monte Carlo analysis highlighted challenges in horizontal scaling. For example, we repeatedly saturated all z1d.2xlarge instances available in the AWS US-East-1a availability zone.\nTo address this issue, we integrated AWS resources across multiple availability zones into our compute cluster, successfully overcoming the resource constraints. This advancement significantly improved our system’s ability to handle large-scale simulations, benefiting all our customers.\nAnother forthcoming improvement was the introduction of custom data backends. With GAMS Engine, job results are typically transferred via the REST API, but in Rhodium’s case, the high volume of data created a bottleneck. Our development team introduced custom data backends, enabling Rhodium to avoid transferring large datasets to and from the cloud by utilizing S3 buckets. This enhancement facilitates post-processing directly within AWS, which reduces data transfer times and increases efficiency.\nGAMS Engine SaaS demonstrated remarkable reliability during the heightened demand in 2023, achieving an impressive uptime of 99.961%, as verified by an external monitoring tool. This performance underscores the strength of our infrastructure and our dedication to delivering dependable services.\nAdvantages of Cloud Based Optimization with Engine SaaS GAMS Engine SaaS provides a straightforward licensing model based on a pay-as-you-go approach, ensuring customers are charged solely for the hardware resources consumed by each job.\nJob-Based Pricing: Costs are determined by the specific hardware used for each job. Cost Efficiency: Optimize expenses by running smaller models on smaller instances, while accessing high-performance systems for more demanding tasks when necessary. A significant advantage of GAMS Engine SaaS is the elimination of large upfront investments in infrastructure and ongoing maintenance. This is particularly advantageous for organizations with variable computational needs. For example, the Rhodium project would have required over $1 million in investments to run 1200 parallel simulations on pre-configured AWS Outpost racks—an impractical solution for most users.\nWith GAMS Engine SaaS, users can scale computational resources on demand, paying only for what they use while avoiding the complexities and costs of maintaining hardware infrastructure. This approach offers flexibility and cost-effectiveness, making high-performance computing accessible to a broader range of users and applications.\nAdditionally, the flexible REST API enables users to run any GAMS job in the cloud with minimal programming effort.\n","excerpt":"Rhodium Group is a renowned research organization specializing in comprehensive analysis of global trends. With a strong focus on energy, climate, and economic issues, they provide critical insights that inform decision-making at the highest levels. Tasked with delivering a detailed climate outlook ahead of the COP28 summit in Dubai, and then again ahead of COP29 in Azerbaijan, Rhodium Group aimed to account for the numerous uncertainties that could influence future climate trajectories. Their approach involved leveraging advanced modeling techniques and robust computational resources to generate reliable projections.","ref":"/stories/rhodium/","title":"The Rhodium Climate Outlook"},{"body":" Brief introduction to mathematical optimization Mathematical optimization is at the heart of modern decision making. 
By representing real-world challenges as mathematical models, it offers a systematic and data-driven approach to solving complex problems. It provides a structured way of balancing competing objectives and constraints, which enables us to find the most efficient or cost-effective solutions in any scenario.\nThe beauty of mathematical optimization also lies in its versatility. It’s used in areas as diverse as scheduling airline crews, improving production lines, managing financial risk, handling energy systems, and even designing smart cities. When we optimize resource allocation, reduce costs, and enhance performance, we often uncover solutions that might not be intuitive, which has revolutionized multiple industries.\nWhat is GAMS and what are the advantages of Set Based Declarative Algebraic Modeling The General Algebraic Modeling System (GAMS) is a high-level programming environment specifically designed for creating optimization models. It’s a language that resembles classic mathematical notation, and it allows us to formulate problems in a simple and intuitive way.\nGAMS’s distinguishing feature is its set-based declarative algebraic modeling framework, which emphasizes defining what the problem is rather than prescribing how to compute it. This particular approach isn’t just a technicality, it has several advantages:\nSymbolic and Data-Independent Models: In GAMS, you build pure mathematical models. You construct the algebraic relationship between your variables, parameters, and functions that represent your real-world problem –independently of any one particular dataset. This means that your resulting model is completely symbolic. Then, it’s just a matter of choosing what data to feed your model with, and it will rapidly accommodate without modifications.\nSimple and Compact Syntax: Because of its declarative style, GAMS models are particularly clean and concise. You can encapsulate decision variables, constraints and objectives in a minimalistic and simple code. This makes your models easy to write, edit, and understand –which is essential when it comes to large-scale projects.\nSeamless Solver Integration: One of the biggest advantages of GAMS is that you aren’t tied to one solver. Rather, you have the possibility to choose from a wide range of state-of-the-art options, such as Baron, Xpress, Gurobi, and many others. Switching between solvers is as easy as changing a single line of code. And because GAMS is a reseller of commercial solvers with very competitive prices, you have all this access from a single environment. This way, you’re fully equipped to solve very different types of problems, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP).\nGAMSPy and Optimizing in Python We all know Python: it’s one of the most popular programming languages out there –it’s clean, easy to learn, has tons of great libraries and features, and has been widely embraced by both academia and industry. A tool that lets you develop your models in Python with a set-based syntax was an obvious and natural step, which happened last year with the introduction of GAMSPy. GAMSPy allows you to write and solve optimization problems directly in Python. It’s a tool that combines the powerful GAMS execution system, with the versatile Python language.\nNow, GAMSPy isn’t the first optimization tool to embrace Python. 
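Before getting to those comparisons, it helps to see what the set-based, declarative style described above looks like in practice. The following is a minimal, hypothetical sketch; the symbol names and data are purely illustrative, and it assumes the gamspy package's Container, Set, Parameter, Variable, Equation, and Model classes (check the GAMSPy documentation for the exact API):

```python
# Minimal, illustrative GAMSPy sketch: a tiny production-planning LP.
# All names and numbers here are made up for demonstration purposes.
from gamspy import Container, Set, Parameter, Variable, Equation, Model, Sum, Sense

m = Container()
i = Set(m, "i", records=["plant1", "plant2"])                      # index set
cost = Parameter(m, "cost", domain=i, records=[("plant1", 4.0), ("plant2", 7.0)])
cap = Parameter(m, "cap", domain=i, records=[("plant1", 100), ("plant2", 80)])
x = Variable(m, "x", domain=i, type="Positive")                    # production level

limit = Equation(m, "limit", domain=i)
limit[i] = x[i] <= cap[i]                                          # respect each plant's capacity

meet = Equation(m, "meet")
meet[...] = Sum(i, x[i]) >= 150                                    # cover total demand

plan = Model(m, "plan", equations=m.getEquations(),
             problem="LP", sense=Sense.MIN,
             objective=Sum(i, cost[i] * x[i]))
plan.solve()        # switching solvers should just be an argument, e.g. plan.solve(solver="highs")
print(x.records)    # results come back as a pandas DataFrame
```

The constraints are stated once, symbolically; pointing the symbols at a different set of records, or passing a different solver name, leaves the formulation itself untouched.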
Being able to streamline your whole optimization pipeline in a single environment is useful and convenient, and others like Pyomo or GurobiPy, just to name a few, are powerful tools that have been around for some time.\nSo how does optimizing in Python work? Some solutions have a simple approach: they generate a mathematical model in Python, and then export it as a file that a solver can read. Others build and solve the model directly in Python as an in-memory object. These approaches work, and you can usually find an intuitive solution to get your model running. But, more often than not, they can also lead to significant performance bottlenecks.\nNaturally, if you’re willing to put the time and effort to really get into the code, you can develop good solutions. For example, it’s common to develop strategies such as power calls to a C++ backend to create sums or lookups in sparse data sets. These speedups can deliver great performance. But a simple fact remains: the model is still built constraint-by-constraint in Python, which is an inherently slow language.\nGAMSPy takes a radically different approach.\nInstead of generating the model instances in Python, GAMSPy uses Python mainly as a translator. The model is written in Python, yes, but the actual work is completely offloaded to GAMS. This means that you can write your model in a comfortable, familiar Pythonic way, but leave the job of creating and solving the model instances to GAMS’s powerful and highly optimized backend.\nThink of it as building the blueprint in Python but letting GAMS handle the construction.\nThis difference in approach is critical. By simply generating a .gms file and using GAMS’ optimized machinery to handle the execution, GAMSPy ensures that even the most complex models are generated quickly and efficiently. In simple terms, it’s a method that combines Python with the advantages of set based declarative algebraic modeling and the raw computational power of GAMS.\nBut why is this difference in approach relevant?\nGAMSPy performance There are many things to take into account when deciding which optimization tool to choose. And performance is definitely high on that list.\nWith decades of optimization, GAMS’ performance when it comes to generating mathematical models is among the best out there. So how does GAMSPy, our newest release, perform?\nWell, first of all it’s important to understand something. While GAMSPy has the advantage of leveraging GAMS\u0026rsquo; backend, there’s still a small tradeoff: a communication overhead. This overhead is a natural result of the way GAMSPy and GAMS interact with each other. Every time you execute a command in GAMSPy, Python must communicate this instruction to GAMS, wait for GAMS to process it, and retrieve the result back into Python.\nThis overhead can be most noticeable in small models, simply because there is a fixed cost to it. However, as model size grows and complexity increases, it becomes less significant. Basically, the heavy lifting done by GAMS overshadows the delays caused by the data transfer.\nWhen we compare GAMSPy performance against GAMS on our full model library (including solving time), we get an average overhead of 27% in our Linux system, and 8% on our Windows system –where GAMS itself takes over most of the total time.\nTo get a different perspective, we can also compare the time it takes both GAMSPy and GAMS to generate our IJKLM test model (without solving time):\nFigure 1. 
Model generation time comparison between GAMSPy and GAMS\nAs the graphic shows, the overhead here is between 30% and 50% for large models.\nNow, this might sound significant, but actually it’s not. For starters, we are only measuring model generation time here. This means we are not taking solving time into account –which usually takes up most of the total time when it comes to larger models. And second, even if we take this overhead into consideration, how does GAMSPy measure against other optimization tools?\nWell, making comparisons with other frameworks and solutions is never straightforward. Defining a set that truly does justice to the test, for example, is a challenge in itself. However, including GAMSPy in our IJKLM test , quickly shows its impressive performance.\nFigure 2. Model generation time comparison between GAMSPy, GAMS, and high-performing implementations of JuMP, Pyomo, and GurobiPy\nAs the graphic shows, when paired against other tools –even against our optimized versions of them– GAMSPy delivers a performance that puts it at the top of the table. It’s almost as fast as GAMS.\nWhat’s more, it’s important to point out that while other Python optimization tools can rely heavily on improved coding to enhance performance, GAMSPy’s strategy is fundamentally different. Shifting the workload to GAMS not only boosts the speed of model generation, it also keeps your code simple and clear –something that can be just as valuable in large and complex models.\nAnd that’s not all. If you do like getting into the bolts and nuts of the code, Python offers endless possibilities to enhance GAMSPy’s (as other frameworks’) performance. Being able to use your favorite libraries for data pre and post-processing can speed up your working time significantly. That’s the beauty of Python and having your whole pipeline in a single environment, and we’ll cover more about this in another blog post.\nConclusion and further information Mathematical optimization plays an important role in solving today’s real-world problems. And tools like GAMS, with its set declarative algebraic framework, simplify the formulation of these problems while ensuring compatibility with multiple solvers. GAMSPy simply takes this approach into the Python ecosystem.\nGAMSPy’s unique method to optimization modeling is a big step forward in the world of Python-based tools. By offloading the main computational tasks to GAMS –which is integrated into GAMSPy and doesn’t require a separate installation–, it can achieve a level of performance that is rarely seen in other frameworks. With GAMSPy, you get to use Python to write simple, minimalistic, and easy to follow models, all while ensuring top-notch performance. Just take a look at our GAMSPy model library –you can explore dozens of examples ready to run.\nFor those in academia, GAMS also offers a unique opportunity to use these tools through its Academic Program . If you’re a student, professor, or researcher, you have access to our technology for free or at highly discounted prices. Our goal is to make advanced modeling and solving techniques accessible to anyone in an educational or research context.\nIn summary, whether you’re a beginner exploring optimization or an expert seeking efficient solutions, GAMSPy provides the simplicity and raw power you need to write and solve your models. So go ahead and give it a try, all it takes to get started is a simple line:\npip install gamspy ","excerpt":"With GAMSPy, you can build your Python optimization models in record time. 
Learn how offloading computational tasks to GAMS, and leveraging set-based declarative algebraic modeling leads to efficient and high-performance optimization.","ref":"/blog/2024/12/gamspy-high-performance-optimization-in-python/","title":"GAMSPy: High-Performance Optimization in Python"},{"body":" GAMS Secures BMBF Funding for \u0026ldquo;QuSol\u0026rdquo; Project together with partners at KIT, FUB, RUB and Infineon We are excited to announce that GAMS, in collaboration with Karlsruhe Institute of Technology - KASTEL (KIT-KASTEL), Infineon Technologies AG, Ruhr-Universität Bochum (RUB), and the Freie Universität Berlin (FUB), has been awarded funding from the German Federal Ministry of Education and Research (BMBF) under the \u0026ldquo;Anwendungsorientierte Quanteninformatik\u0026rdquo; initiative. The project, known as \u0026ldquo;QuSol\u0026rdquo; (Quantum Optimization Solver Kit), focuses on exploring how quantum computing could enhance optimization problems that are central to modern industries, such as production planning and logistics. While the field of quantum computing is still in its early stages, the goal is to advance both theoretical and practical understanding of how quantum algorithms can be applied to solve complex optimization challenges.\nAcademic Partners\nProf. Dr. Ina Schaefer, Karlsruhe Institute of Technology (KIT-KASTEL) Prof. Dr. Stefan Nickel , Karlsruhe Institute of Technology (KIT-IOR) Prof. Dr. Michael Walter , Ruhr-Universität Bochum Prof. Dr. Jens Eisert , Freie Universität Berlin Industry Partners\nInfineon Technologies AG GAMS Software GmbH Associate Partner\nProf. Dr. Tobias Osborne , Leibniz Universität Hannover Addressing Classical Limitations with Quantum Solutions Quantum computers promise to solve problems that classical computers cannot handle efficiently. This potential has spurred significant interest in physics and quantum informatics, with recent advancements enabling the development of mid-scale “Noisy Intermediate-Scale Quantum” (NISQ) computers. The critical question now is identifying key applications where quantum computing can deliver significant advantages—particularly in economically relevant fields like optimization.\nOptimization is at the forefront of potential quantum computing applications, as nearly every problem in modern supply chain planning can be framed as an optimization challenge. However, despite the theoretical promise, few concrete findings currently substantiate the expectation that quantum computers will outperform classical methods in practical settings. This gap applies to both variational quantum algorithms, which can already be implemented on near-term NISQ (Noisy Intermediate-Scale Quantum) hardware, and scalable quantum algorithms, designed for future error-tolerant quantum computers.\nThe QuSol Project: Pioneering Quantum Optimization The QuSol project brings together leading experts to achieve a disruptive breakthrough in what quantum computers can accomplish in the field of optimization. By focusing on concrete, economically critical use cases in production planning, the project seeks to substantially expand the algorithmic toolbox for optimization using quantum computers, ultimately making quantum computing a practical tool for solving real-world problems.\nQuSol’s ambition goes beyond addressing isolated problems. It aims to develop generic hybrid solution methods that can tackle complex optimization problems with uncertainties. 
By incorporating quantum algorithms, these methods will provide more efficient solutions than classical approaches alone. The project will create a reusable, adaptable open-source software package that allows the broader community to apply and further develop these quantum-powered optimization tools across various disciplines.\nImpact on Optimization and Beyond The project will build on previous initiatives but with a broader and more ambitious scope. Prior projects have focused on specific, limited use cases, while QuSol will offer a comprehensive, quantum-enhanced optimization toolkit applicable to a wide array of complex problems. This toolkit will accelerate optimization processes not only in production planning but across industries facing uncertainty and high complexity in their operations.\nThe QuSol project is driven by highly relevant real-world applications in modern production planning, where the current global situation and associated uncertainties make optimization more challenging and essential than ever. Well-defined, representative use cases from experts in classical and quantum optimization will be analyzed and broken down into subproblems that can be solved more efficiently with variational and/or scalable quantum algorithms. The project will not only adapt and extend existing quantum algorithms but also develop new ones, assessing their applicability and advantages for different problem instances.\n©Infineon Technologies AG, 2024\nFig 1: Relevant example for a complex optimization problem in production planning. In the capacity-demand match (also known as master planning), a demand predicted by the demand planning side is set against constraints (called bottlenecks) set by the capacity planning side. The aim is to determine production targets that satisfy the capacity constraints and can therefore be achieved later in a detailed production plan. These targets should be matched as closely as possible to demand and are used in the form of available-to-promise (ATP) quantities to confirm customer orders at a later date.\nBuilding the Future of Quantum-Enhanced Optimization QuSol’s results will be integrated into a state-of-the-art, open-source software package, enabling a wide user community to take advantage of the advancements. This software will serve as a key building block in the evolving quantum ecosystem, allowing for further innovation and practical application of quantum optimization across disciplines. The project’s outcomes will bring significant practical benefits by addressing a broad spectrum of optimization challenges.\nWe are thrilled to contribute to this pioneering initiative, which promises to reshape the future of optimization technology and bring quantum computing closer to real-world applications.\n","excerpt":"GAMS, alongside KIT-KASTEL, Infineon, RUB, and FUB, has received BMBF funding for the QuSol project to explore quantum computing for solving industrial optimization challenges. 
We are thrilled to contribute to this pioneering initiative, which promises to reshape the future of optimization technology and bring quantum computing closer to real-world applications.","ref":"/blog/2024/11/qusol-project-funded/","title":"QuSol Project Funded"},{"body":"GAMS at INFORMS Annual Meeting 2024 in Seattle: A Recap GAMS was happy to be part of this year\u0026rsquo;s INFORMS Annual meeting, sharing insights and connecting with the optimization community.\nWe had a fantastic time meeting so many enthusiastic professionals and students, exchanging ideas on how GAMS technology can simplify complex modeling tasks and enhance project workflows. A big shoutout to everyone who stopped by our booth, took a shot at the basketball hoop, and joined us for inspiring conversations!\nThank you for making this event memorable, and we look forward to many more exciting collaborations!\nThis year, GAMS was excited to participate with a technical showcase, an engaging workshop, and an interactive booth experience that provided plenty of opportunities for knowledge sharing, networking, and a bit of friendly competition!\nOur time at INFORMS 2024 was invaluable, allowing us to interact with users, introduce the latest in GAMS technology, and engage with the vibrant optimization community. We’re happy to have shared our advancements in GAMSPy and data integration and look forward to seeing how attendees apply these capabilities to their projects.\nIf you missed us in Seattle, stay tuned for more updates, where we’ll continue to dive deeper into how GAMS can support your optimization journey!\nSign up for our general information newsletter to stay up-to-date! Our Abstracts Our GAMS Exhibitor Workshop:\nGAMSPy and Data APIs for streamlining optimization by Atharv Bhosekar and Adam Christensen\nGAMS (General Algebraic Modeling System) is an algebraic modeling language that provides users a way to write optimization models using intuitive algebraic syntax. However, as optimization becomes an integrated step within larger decision-making pipelines, modelers face two significant challenges: (1) the inconvenience of switching out of a preferred programming language (such as Python) solely for optimization purposes, and (2) the difficulty of efficiently transferring data between GAMS and other tools and platforms within a diverse software ecosystem.\nIn this presentation, we will tackle these challenges using our latest solutions. First, we will present GAMSPy, our new product that brings algebraic modeling capabilities to Python. GAMSPy allows users to enjoy an intuitive algebraic syntax without compromising on performance. We will also highlight our suite of data APIs to streamline data exchange with GAMS. In particular, we will focus on GAMS Transfer, a data API that enables users of R, MATLAB, and Python to efficiently read, modify, analyze, and write GAMS data. These tools significantly enhance the interoperability of GAMS within multi-platform decision pipelines, facilitating smoother and more efficient optimization workflows.\nOur GAMS Technology Showcase:\nOptimization pipeline design: from data curation to algebraic modeling with GAMSPy by Atharv Bhosekar and Adam Christensen\nAlgebraic modeling languages (AMLs) have been a cornerstone in the fields of optimization and economics. 
These tools are popular because they are able to effortlessly link the worlds of algebra and computer science \u0026ndash; that is, the syntax of the AML closely approximates that of handwritten algebra but its execution is automated and scalable. AMLs, by design, are not general purpose programming languages; as a result, it can be difficult to gather, clean and prepare data for a modeling environment. Recent years have seen sophisticated data science tools enter the mainstream. Languages such as Python and R can leverage Numpy/Pandas and Shiny/Tidyverse/Dplyr to efficiently work with large data in deployable environments. Modern infrastructure tools such as Docker and Kubernetes make it possible to isolate workflows and scale compute resources via cloud platforms. All of these compute resources mean that data assets are arriving at optimization model instances from an ever diversifying number of start points. In this workshop we present a Python package called GAMSPy that leverages modern data science tools with the flexible nature of Python to construct a powerful Python-AML. This presentation will cover a number of real-world inspired examples that illustrate how to bring data into an environment and effectively clean it for use in an optimization model.\nCheck our presentation slides for more information:\nName: Size / byte: Exhibitor Workshop_AM.pptx 11039958 Technology Showcase_AM.pptx 11691753 ","excerpt":"GAMS was happy to be part of this years INFORMS Annual meeting, sharing insights and connecting with the optimization community. We had a fantastic time meeting so many enthusiastic professionals and students, exchanging ideas on how GAMS technology can simplify complex modeling tasks and enhance project workflows.","ref":"/blog/2024/10/informs-annual-meeting-in-seattle/","title":"INFORMS Annual Meeting in Seattle"},{"body":"General Licensing Information GAMS and GAMSPy require a valid license to operate. Without a valid license, the system produces the following error message:\n*** No license specified and no gamslice.txt found in standard locations. For more technical details on how to install a license, refer to the GAMS Technical Documentation or the GAMSPy Installation Guide .\nFree Licenses Demo License A free demo license, valid for 5 months, is included with every GAMS and GAMSPy distribution. It is primarily intended for the initial exploration of GAMS or GAMSPy and allows solving small models with up to 2,000 variables and 2,000 constraints for linear (LP, RMIP, and MIP) models, and 1,000 variables and 1,000 constraints for all other model types. Demo licenses are not intended for commercial or production use. They are time-limited and require a GAMS or GAMSPy version not older than 18 months. Upon expiration, users must either update their GAMS version or purchase a professional license.\nUsage beyond the demo license limits results in the following error:\n*** Status: Terminated due to a licensing error. Community License The community license allows users to generate and solve linear models (LP, MIP, RMIP) with up to 5,000 variables and constraints. For other model types, the limits are 2,500 variables and constraints. 
Community licenses are restricted to non-commercial, non-production use by academic users, and expire after 12 months.\nYou can register and generate a community license in our academic portal .\nGAMSPy Academic License GAMSPy academic licenses include access to full versions of a range of commercial solvers, including CPLEX, Conopt, Xpress/Global, Mosek, COPT, and PATH, intended for academic purposes only. Also included are many free solvers such as HiGHS, SCIP, SHOT, and IPOPT.\nYou can register and generate your free GAMSPy license in our academic portal .\nProfessional and Evaluation Licenses Evaluation License Non-academic users can request a time-limited evaluation license to test GAMS or GAMSPy without any functional restrictions for up to 30 days. Evaluation licenses are meant for testing under real-world conditions and are not permitted for commercial or production work. Contact sales@gams.com to request an evaluation license.\nProfessional License Professional licenses enable usage beyond the demo and community licenses, including unrestricted access to GAMS solvers without model size limitations. Licenses can be purchased with a one-time payment plus an annual maintenance \u0026amp; support fee, or as a subscription model with ongoing yearly payments. Professional licenses for academic users are available at discounted rates for research and teaching purposes.\nContact sales@gams.com to request a free quotation and detailed purchasing information for a professional license.\nFree licenses for certain solvers are also available under our academic program (see GAMS Academic Program ).\nBenchmarking Restrictions Academic users with free commercial solver licenses are not allowed to publish or distribute benchmarking results of any commercial solvers included in their GAMS license without direct written consent from GAMS. This restriction is in compliance with requirements from commercial solver vendors.\n","excerpt":"\u003ch2 id=\"general-licensing-information\"\u003eGeneral Licensing Information\u003c/h2\u003e\n\u003cp\u003eGAMS and GAMSPy require a valid license to operate. Without a valid license, the system produces the following error message:\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" style=\"background-color:#f0f3f3;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"\u003e\u003ccode class=\"language-plaintext\" data-lang=\"plaintext\"\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e*** No license specified and no gamslice.txt found in standard locations.\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cp\u003eFor more technical details on how to install a license, refer to the \u003ca href=\"/latest/docs/UG_License.html\" target=\"_blank\"\u003eGAMS Technical Documentation\u003c/a\u003e\n or the \u003ca href=\"https://gamspy.readthedocs.io/en/latest/user/installation.html\" target=\"_blank\"\u003eGAMSPy Installation Guide\u003c/a\u003e\n.\u003c/p\u003e","ref":"/sales/licensing/","title":"Licensing"},{"body":"How to buy GAMS ","excerpt":"\u003ch1 id=\"how-to-buy-gams\"\u003eHow to buy GAMS\u003c/h1\u003e","ref":"/sales/","title":"Sales"},{"body":"A Recap of GOR Annual Meeting 2024 in Munich The annual conference of the Society for Operations Research (GOR e.V.) was held in Munich from September 3-6, 2024, and hosted by the Technical University of Munich. 
This year’s theme, \u0026ldquo;Data, Learning, and Optimization,\u0026rdquo; brought together experts from all around the world to explore the latest advancements in operations research.\nGAMS sent a large team this year, eager to connect with colleagues, exchange new ideas, and dive into the conference’s wide range of topics. On the technical side, our team gave three presentations and got to engage in many thought-provoking discussions with colleagues, field experts, and GAMS users. Outside the sessions, the conference social program offered a great opportunity to network in a more informal setting. Highlights included the Bavarian reception at the Augustiner Bräustuben and the conference dinner.\nTeam Presentations We are excited to share the abstracts and presentations from our team’s talks, each offering unique perspectives on GAMSPy, artificial intelligence, and decision support as well as analytics.\nPresentation 1 - GAMSPy - Where Convenience of Python Meets GAMS’ Performance by Muhammet Abdullah Soyturk Presentation 2 - The Lifecycle of OR Solutions: From Rapid Prototypes to Market Deployment by Justine Broihan Presentation 3 - Integrating Machine Learning with GAMSPy by Hamdi Burak Usul Each of our presentations reflected our team’s commitment to addressing the most pressing challenges and new solutions in the field. We were proud to contribute to the conversation and to share our work with a forward-thinking audience.\nA Big Thank You! We would like to extend our heartfelt thanks to the organizers, speakers, and participants who made this year’s GOR meeting a memorable experience. It’s always a pleasure to be part of such a well-organized event that promotes collaboration, learning, and innovation.\nThe GOR conference continues to be a significant event in the operations research community, and we are already looking forward to next year’s gathering. Until then, let\u0026rsquo;s keep the spirit of collaboration alive and continue working towards impactful solutions. See you next year!\nThe abstracts:\nGAMSPy - Where Convenience of Python Meets GAMS’ Performance by Muhammet Abdullah Soyturk\nOptimization pipelines contain many tasks such as mathematical modeling, data processing, and developing algorithms. Python and its vast array of packages provide a convenient way of gathering data, pre/post-processing the data, visualizing the data, and developing necessary algorithms by utilizing existing ones. On the other hand, GAMS has been providing tools with great performance for the mathematical modeling part for decades. In this talk, we will introduce GAMSPy, a new tool that aims to combine the best of both worlds.\nThe Lifecycle of OR Solutions: From Rapid Prototypes to Market Deployment by Justine Broihan\nThis presentation delves into the transformative process of turning rapid prototypes into market-ready operations research (OR) applications, drawing from a variety of real-world projects. We focus on the methodical transition of prototypes to fully developed solutions, addressing recurring challenges and strategic solutions along the way. We will explore two main areas: implementing effective OR solutions that meet dynamic market needs, and extracting insights crucial for our product development. This approach shapes a toolkit that is robust and adaptable to evolving technologies. A significant part of our discussion will highlight the importance of rapid prototyping. 
This agile phase fosters informed discussions with clients, the end-users, guiding the refinement of prototypes to better meet their needs and expectations. Furthermore, transitioning from a prototype to a mature OR application is a comprehensive development process. It includes enhancing user interfaces, optimizing deployment strategies (like GUI and cloud computing), and ensuring superior computational performance. By sharing our experiences and best practices, this talk aims to provide participants with strategies to overcome common obstacles in OR project development. Attendees will gain a deeper understanding of how to effectively move from conceptual prototypes to advanced, market-ready applications, aligning with user needs and achieving operational efficiencies.\nIntegrating Machine Learning with GAMSPy by Hamdi Burak Usul\nGAMSPy seamlessly combines Python\u0026rsquo;s flexibility with the modeling prowess of GAMS. This combination offers promising avenues particularly in merging the realms of machine learning (ML) and mathematical modeling. While GAMS is proficient in indexed algebra, ML predominantly relies on matrix operations. To facilitate ML applications, our research focuses on incorporating commonly used ML operations into GAMSPy. In our presentation, we illustrate the practical implications by demonstrating the generation of adversarial images for an optical character recognition network using GAMSPy. We demonstrate the adaptability of GAMSPy and its potential utility in ML research and development endeavors. Furthermore, we explore future directions, including planned OMLT integration, and highlight distinctions between GAMSPy\u0026rsquo;s approach and existing alternatives.\nCheck our presentation slides for more information:\nName: Size / byte:\nGAMSPy.pdf 269205\nThe Lifecycle of OR Solutions_ From Rapid Prototypes to Market Deployment.pdf 6150261\nml-in-optimization.pdf 1450514 ","excerpt":"This year\u0026rsquo;s annual conference of the Operations Research Society of Germany (GOR e.V.) was held in Munich from September 3 to September 6. It focused on \u0026ldquo;Data, Learning, and Optimization\u0026rdquo; and featured presentations, discussions, and networking opportunities, including social events like a Bavarian reception and conference dinner.","ref":"/blog/2024/09/gams-at-the-or2024-in-munich/","title":"GAMS at the OR2024 in Munich"},{"body":" GAMSPy combines the high-performance GAMS execution system with the flexible Python language, creating a powerful mathematical optimization package. It acts as a bridge between the expressive Python language and the robust GAMS system, allowing you to create complex mathematical models effortlessly. With GAMSPy we introduce a new way to streamline the complete optimization pipeline, starting with data input and preprocessing, followed by the implementation of the mathematical model, and data postprocessing and visualization, in a single, intuitive Python environment. GAMSPy allows you to leverage your favorite Python libraries (e.g. Numpy, Pandas, Networkx) to comfortably manipulate and visualize data. And it allows you to import and export data and optimization results to many data formats. On top, GAMSPy seamlessly works with GAMS MIRO and GAMS Engine, which allows you to run your GAMSPy optimization either on your local machine or on your own server hardware (GAMS Engine One) as well as on GAMS Engine SaaS, where you don’t even need to run a server. We make sure you have access to the right resources, any time.
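As a flavor of what this pipeline looks like in practice, here is a minimal, illustrative GAMSPy sketch of the classic transport problem; the data values below are made up for the example and are not taken from this page:

import pandas as pd
from gamspy import Container, Set, Parameter, Variable, Equation, Model, Sum, Sense

# illustrative input data prepared with pandas
capacities = pd.DataFrame([["seattle", 350], ["san-diego", 600]])
demands = pd.DataFrame([["new-york", 325], ["chicago", 300], ["topeka", 275]])
distances = pd.DataFrame(
    [["seattle", "new-york", 2.5], ["seattle", "chicago", 1.7], ["seattle", "topeka", 1.8],
     ["san-diego", "new-york", 2.5], ["san-diego", "chicago", 1.8], ["san-diego", "topeka", 1.4]]
)

m = Container()
i = Set(m, "i", records=["seattle", "san-diego"], description="plants")
j = Set(m, "j", records=["new-york", "chicago", "topeka"], description="markets")
a = Parameter(m, "a", domain=i, records=capacities, description="capacity at plant i")
b = Parameter(m, "b", domain=j, records=demands, description="demand at market j")
d = Parameter(m, "d", domain=[i, j], records=distances, description="distance in thousand miles")
c = Parameter(m, "c", domain=[i, j], description="freight cost per case")
c[i, j] = 90 * d[i, j] / 1000

x = Variable(m, "x", domain=[i, j], type="Positive", description="shipment quantities")

supply = Equation(m, "supply", domain=i)
demand = Equation(m, "demand", domain=j)
supply[i] = Sum(j, x[i, j]) <= a[i]   # ship no more than each plant can supply
demand[j] = Sum(i, x[i, j]) >= b[j]   # satisfy demand at each market

transport = Model(m, "transport", equations=m.getEquations(),
                  problem="LP", sense=Sense.MIN,
                  objective=Sum((i, j), c[i, j] * x[i, j]))
transport.solve()
print(x.records)   # results come back as a pandas DataFrame, ready for postprocessing

Data preparation, model formulation, solving, and inspection of the results all stay in one Python session; the Transport, Nurses, and Pickstock examples referenced below show complete versions of this workflow.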
Advantages of using GAMSPy GAMSPy opens up entirely new opportunities to streamline optimization and data pipelines: Mathematical Modeling in Python Write complex mathematical models directly in Python. Create robust, readable, and maintainable mathematical models, preserving the essence of algebraic notation. Leverage the power of separating instance data and model notation. Uncompromised Performance Solver Independence: Choose and integrate different solvers based on your specific requirement. No loss in performance: GAMSPy offloads the heavy lifting, to the efficient GAMS backend. Sparsity Handling: Let GAMSPy take care of handling sparse data cubes and focus on the formulation of the model. Seamless Pipeline Management No switching of environments: Manage data preprocessing and optimization tasks within a single, intuitive environment. Leverage Python libraries to comfortably manipulate and visualize data. Import and export data and optimization results to many data formats. Leverage the seamless integration of GAMS MIRO, and GAMS Engine. Getting Started Read the GAMSPy documentation with instructions on how to get started. Use the GAMSPy section of our forum for questions and support. Transport Example This classic scenario involves managing supplies from various plants to meet demands at multiple markets for a single commodity. Nurses example The NURSES problem involves managing the assignment of nurses to shifts in a hospital. Nurses must be assigned to hospital shifts in accordance with various staffing constraints. Pickstock Example The goal is to pick a small subset of stocks together with some weights, such that this portfolio has a similar behavior to our overall Dow Jones index. Click below and check out the brand new GAMSPy course by our partner Bluebird Optimization: ","excerpt":"\u003csection\u003e\n \u003cdiv class=\"full-width\"\u003e\n \u003cdiv class=\"jumbotron jumbotron-fluid\"\u003e\n \u003cdiv class=\"container\"\u003e\n \u003ch1 class=\"display-4\"\u003e\n \u003cimg\n src=\"/img/gamspy_logo_transp.png\"\n alt=\"GAMSPy\"\n height=\"80em\"\n /\u003e\n \u003c/h1\u003e\n\n \u003cp class=\"lead\"\u003e\n GAMSPy combines the high-performance GAMS execution system\n with the flexible Python language, creating a powerful\n mathematical optimization package. It acts as a bridge\n between the expressive Python language and the robust GAMS\n system, allowing you to create complex mathematical models\n effortlessly.\n \u003c/p\u003e\n\n \u003chr /\u003e\n\n \u003cp class=\"lead\"\u003e\n With GAMSPy we introduce a new way to streamline the\n \u003cstrong\u003ecomplete optimization pipeline\u003c/strong\u003e starting\n with data input and preprocessing followed by the\n implementation of the mathematical model and data\n postprocessing and visualization, in a single, intuitive\n Python environment. GAMSPy allows you to leverage your\n favorite Python libraries (e.g. Numpy, Pandas, Networkx) to\n comfortably manipulate and visualize data. And it allows to\n import and export data and optimization results to many data\n formats.\n \u003c/p\u003e","ref":"/sales/gamspy_facts/","title":"GAMSPy Facts"},{"body":"We’re excited to share some highlights from the EURO Conference in Copenhagen! This year\u0026rsquo;s event was fantastic, offering great opportunities to connect with the academic elite and industry leaders, share insights, and showcase our latest innovations.\nOur booth was buzzing with activity throughout the conference. 
It was wonderful to meet so many enthusiastic attendees, engage in meaningful conversations, and demonstrate our latest products and services. The positive feedback was truly inspiring, reaffirming our commitment to innovation.\nOur team delivered three successful talks, each drawing an interested audience and sparking lively discussions. We covered topics like the Integration of Python in GAMS, Machine Learning and GAMSPy, and Engine SaaS. The response was very positive, with attendees appreciating the depth of knowledge and practical applications discussed. We’re grateful for the opportunity to contribute to our industry\u0026rsquo;s collective learning and advancement.\nReflecting on our time in Copenhagen, we\u0026rsquo;re excited for the future. The insights and connections made at the EURO Conference will undoubtedly propel us forward as we continue to innovate and lead in our field.\nThank you to everyone who visited our booth, attended our talks, and engaged with us throughout the conference. We look forward to seeing you at future events and continuing the conversation!\nStay tuned for more updates and innovations from our team. Until next time!\nSign up for our general information newsletter to stay up-to-date! Our Abstracts GAMSPy: The Best of Both Worlds - Integrating Python and GAMS By Justine Broihan\nOptimization applications combine technology and expertise from many different areas, including model-building, algorithms, and data-handling. Often, the gathering, pre/post-processing, and visualization of the data is done by a diverse organization-spanning group that shares a common bond: their skill in and appreciation for Python and the vast array of available packages it provides. For this reason, GAMS offers a new comfortable way to integrate with Python on the data-handling and modeling side. In this talk, we will explore the benefits of our Python library GAMSPy.\nIntegrating Machine Learning with GAMSPy By Hamdi Burak Usul\nGAMSPy is a powerful mathematical optimization package which integrates Python\u0026rsquo;s flexibility with GAMS\u0026rsquo;s modeling performance. This combination opens doors to previously challenging applications, notably in bridging the worlds of machine learning (ML) and mathematical modeling. While GAMS excels in indexed algebra, ML predominantly relies on matrix operations. To enable applications in ML, our work introduces essential ML operations such as matrix multiplications, transpositions, and norms into GAMSPy. In this talk, we showcase the use of these additions by generating adversarial images for an optical character recognition network using GAMSPy. We highlight GAMSPy\u0026rsquo;s versatility and its potential to be used in ML research and development. We delve into future prospects, show how GAMSPy\u0026rsquo;s approach differs from existing alternatives and discuss innovative methods where mathematical modeling intersects with machine learning.\nGAMS Engine SaaS: A Cloud-Based Solution for Large-Scale Optimization Problems By Frederik Proske\nGAMS Engine SaaS is a cloud-based service that allows users to run GAMS jobs on a scalable and flexible infrastructure, currently provided by Amazon Web Services (AWS). It was launched in early 2022 and has since attracted a variety of customers who benefit from its features, such as horizontal auto-scaling, instance sizing, zero maintenance, and simplified license handling. 
GAMS Engine SaaS is especially suitable for workloads that require large amounts of compute power and can be adapted to many different scenarios. In this presentation, we show a case study of a large international consultant agency that uses GAMS Engine SaaS to run Monte-Carlo simulations of a large energy system model in response to varying climate change scenarios. We describe how they leverage the GAMS Engine API to submit and monitor their jobs, how they select the appropriate instance type for each job, and how they can use custom non-GAMS code on Engine SaaS. We also discuss the challenges and benefits of using GAMS Engine SaaS for this type of application, and provide some insights into the future development of the service.\n","excerpt":"This June GAMS, we were thrilled to attend the EURO 2024 Conference in Copenhagen and meet with colleagues, discuss new mathematical solutions to business problems, and share our latest advancements in the integration of GAMS with Python.","ref":"/blog/2024/07/gams-at-the-euro-2024-in-copenhagen/","title":"GAMS at the EURO 2024 in Copenhagen"},{"body":" This workshop for energy suppliers and project planners is the first in our series of workshops on strategic and tactical planning in the energy industry. Via quantitative decision models, answer questions like\nWhat are the expected optimal costs/revenues – broken down by procurement, production and consumption? What is the annual amount of energy produced by each plant? When exactly does a plant operate and what is the number of annual starts and operating hours? What CO2 emissions are produced by the optimal solution? Which capacity expansion has the greatest impact on the cost/revenue structure? How high is the proportion of green energy? What is the economically and technically optimal size of an energy storage system? What are the accumulated values of investments at the end of the planning horizon? How does the investment in renewable energy contribute to compliance with emissions targets and what impact does this have on the cost/revenue structure? What is the impact of a plant failure? How does it affect overall supply? How would unexpected price peaks in the coming year affect the supply situation and overall costs? How can a minimum share of green energy of x % be ensured? Please note: as of yet, the underlying software is available in German only. Hence, the workshop will be given in German. Please let us know whether you are interested in an English version here: team@enosys.ltd If you would like to be put on the waiting list for our next event, please let us know at team@enosys.ltd Training workshop outline and registration Thank you for your interest!\nKind regards,\nThomas Maindl and the team of ENOSYS Ltd.\nwww.enosys.ltd ","excerpt":"This workshop for energy suppliers and project planners is the first in our series of workshops on strategic and tactical planning in the energy industry.","ref":"/courses/2024_06_enosys/","title":"Energy planning 1: Optimize energy supply and demand networks and evaluate investment decisions with the EIP Energy Investment Planner© and GAMS Engine SaaS"},{"body":"","excerpt":"","ref":"/authors/ameeraus/","title":"Alexander Meeraus"},{"body":"","excerpt":"","ref":"/authors/hlofgren/","title":"Hans Lofgren"},{"body":"It was with great sadness that we and many other colleagues, students, and friends learned about the passing of David Kendrick, Emeritus Professor of Economics at the University of Texas at Austin. 
He died on April 7, at an age of 86.\nWe had the privilege of interacting with him as a colleague, professor and friend, Alex most intensely during the early days of the development of GAMS at the World Bank and Hans as a graduate student at the University of Texas. David joined its Department of Economics in 1970, after having earned his Ph.D. from MIT in 1966 and taught as Assistant Professor at Harvard’s Economics Department 1966-1970. David was invariably exemplary, both as a distinguished economist, dedicated teacher, and caring mentor, opening his home to visitors and students alike. Throughout his career, his contributions to the field were of high quality, not driven by professional fads but by a desire to enhance and communicate research and tools in areas that he found promising. We will here take the opportunity to describe the parts of his legacy that are most directly related to GAMS community, in the process evoking some of the early intellectual history of GAMS and the people involved.1\nAlready as a graduate student, David entered the field of computational economics. His Ph.D. dissertation at MIT, supervised by Richard S. Eckaus, developed linear and mixed-integer programming (LP and MIP) models for investment planning in process industries with geographically distributed and interdependent production units with economies of scale (Kendrick 1967). At the time, this was a research area that engaged several of the brightest minds of the field; among others, David’s dissertation draws on Alan Manne (Markowitz and Manne 1957; Manne and Markowitz, eds 1963). David’s application of his model to the steel industry in Brazil pointed to the payoffs from further advances in technology, both hardware and software – he reports that, when applying a mixed-integer model that by today’s standards was small (23 integer variables, 433 other variables, and 122 constraints), he was forced to end computations due to limitations on computer time after having found a solution that he considered “good” without necessarily being optimal. Beyond computer capacity, the steps needed to translate his algebraic model and its database into a computer-readable format and receive the results were cumbersome and error-prone.\nDuring his years at Harvard, David was part of the Project for Quantitative Research in Economic Development led by Hollis Chenery, at the time Professor of Economics at Harvard. Among other things, the project produced a volume on development planning (Chenery 1971) that includes a chapter based on a non-linear economywide planning model that David coauthored with Lance Taylor, at the time one of his students, one of the CGE pioneers and later a leading heterodox macroeconomist (Kendrick and Taylor 1971); as indicated in the reference list, that model and several others referred to in this blog are represented in the GAMS library. Other traces of David’s years at Harvard include the volume “Notes and Problems in Microeconomic Theory,” the second edition of which is coauthored with Samuel Bowles and Peter Dixon, one of the students in the course that the first edition was based on (Dixon et al. 1980). As many in the GAMS community know, Peter has moved on to become a luminary in the field of CGE modeling, albeit as part of the GEMPACK community.\nIn 1972, Hollis Chenery became the World Bank’s Chief Economist, a position he held until 1982. 
He brought with him personal connections and a commitment to draw on quantitative structuralist economics in this institution’s research and operations. At about the same time, David started a 16-year spell of part-time consulting with the World Bank while Alex landed a staff position a few years after having left his native Austria. Alex had also been working on LP models, most recently for General Electric. The task of turning LP and other mathematical programming models into tools that could be built and applied with high efficiency – at a low cost in time and with built-in error checks to maintain high quality – led to GAMS, including the crucial early step of developing the first GAMS compiler, which Alex recalls was thanks to David’s networking with computer scientists. In coming years, their joint work on GAMS frequently brought Alex to Austin, on one occasion appearing in a graduate economics class that Hans attended in the mid-1980s.\nAmong the first public signs of GAMS’ emergence is a technical note with familiar-looking GAMS code that Alex coauthored with Johannes Bisschop in 1979, and a 1980 World Bank monograph by Choksi, Meeraus, and Stoutjesdijk on the fertilizer industry in Egypt (Bisschop and Meeraus 1979; and Choksi et al. 1980). The model by Choksi et al. applies the methodology documented in Kendrick and Stoutjesdijk (1978), which is close to what David developed for his dissertation: linear and mixed-integer programming models with linearization of non-linearities to permit economies of scale. During the next few years, this led to a series of sectoral studies, including oil refining, steel, and fertilizers in a multi-country setting (Kendrick et al. 1981 and 1984; Mennes and Stoutjesdijk 1985). The different pieces reflect David’s concern with style in modeling: models should be presented so that the reader with minimum effort can understand their structure and data, among other things putting brief explanations in words under each part of complex equations, something that he labeled “Manne notation”; not surprisingly good style leads to model presentations that are similar to a well-structured GAMS statement (Kendrick 1984). This paper also includes what may be the first application of a CGE model in GAMS – a miniature ORANI model, linearized in the Johansen tradition and shared by Peter Dixon (1979). A few years later, Condon et al. (1987) were the first to apply GAMS to a non-linear CGE model, drawing on the seminal work by Dervis et al. (1982) and benefitting from improved non-linear solvers. In parallel with the industrial sector work, GAMS was applied to agriculture, including spatial agricultural sector models that simulate equilibria in multiple markets, drawing on a formulation due to Paul Samuelson (1952) that Hans, for his dissertation under David’s supervision, adapted for a regional Egyptian context (Lofgren 1993). Kendrick (1996) surveys the broader area of sectoral economics, referring to much of the research described above, including several other dissertations that, under his supervision, applied GAMS (Adib 1985, Letson 1992, and Linden 1992), benefitting from the fact that, starting from the mid-1980s, he made GAMS available to department students on a mainframe computer and on floppy disks for use on PCs.2\nDavid’s most active period of work on GAMS concluded with the publication of the first edition of the GAMS User’s Guide (Brooke et al. 1988). 
Up to his retirement in 2014, his courses in computational economics covered the sectoral and economywide model types referred to above, in later years using as their main text the volume “Computational Economics” which he coauthored with Ruben Mercado (a student of his) and Hans M. Amman, for many years his main collaborator (Kendrick et al. 2006).\nDavid is remembered with fondness and appreciation by the two of us and many others who were fortunate to cross his path. He advised his students to stand on the shoulders of the previous generations and their models. Today’s computational economists are well advised to draw on and be inspired by his work in their future endeavors.\nReferences Adib, P. Manouchehri. 1985. An investment planning model of the world petrochemical industry. PhD dissertation, University of Texas, Austin, TX.\nBisschop, Johannes, and Alexander Meeraus. 1979. Selected Aspects of a General Algebraic Modeling Language. Technical Note. Development Research Center, World Bank.\nBrooke, Anthony, Alexander Meeraus, and David Kendrick. 1988. GAMS: A User’s Guide.\nChenery, Hollis B. ed. 1971. Studies in Development Planning. Harvard University Press.\nChoksi, Armeane M., Alexander Meeraus, and Ardy J. Stoutjesdijk. 1980. The Planning of Investment Programs in the Fertilizer Industry. The Johns Hopkins University Press.\nCondon, Timothy, Henrik Dahl, and Shantayanan Devarajan. 1987. Implementing a Computable General Equilibrium Model on GAMS: The Cameroon Model. Development Research Department Discussion Paper DRD 290, World Bank. [GAMS library: cammcp.gms, camcns.gms, camcge.cms, cammge.gms]\nDervis, Kemal, Jaime de Melo, and Sherman Robinson. 1982. General Equilibrium Models for Development Policy. Cambridge University Press.\nDixon, Peter B. 1979. A skeletal version of Orani 78: Theory, data, computations, and results. Preliminary Working Paper No. OP-24 , Impact Research Center, Industrial Assistance Commission, 608 St. Kilda Road, Melbourne, Victoria, 3004, Australia.\nDixon, Peter B., Samuel Bowles, and David Kendrick. 1980. Notes and Problems in Microeconomic Theory. North-Holland.\nKendrick, David A. 1967. Programming Investment in the Process Industries, The MIT Press.\nKendrick, David A. 1984. Style In Multisectoral Modelling. Chapter 15 in Andrew J. Hughes Hallett, ed. Applied Decision Analysis and Economic Behaviour. Martinus Nijhoff Publishers.\nKendrick, David A. 1996. Sectoral Economics, in eds. Hans M. Amman, Hans M., David A Kendrick, and John Rust. Handbook of Computational Economics, Volume 1. Elsevier Science.\nKendrick, David A., Alexander Meeraus, and Jaime Alatorre. 1984. The planning of investment programs in the Steel Industry. The Johns Hopkins University Press. [GAMS library: mexss.gms].\nKendrick, David A. Alexander Meeraus, and Jung Sun Suh. 1981. Oil refinery modeling with the GAMS language. Research Report No. 14. Center for Energy Studies. University of Texas at Austin. [GAMS library: macro.gms].\nKendrick, David A., P. Ruben Mercado, and Hans M. Amman. 2006. Computational Economics, Princeton University Press, Princeton, NJ.\nKendrick, David A., and Ardy J. Stoutjesdijk. 1978. The Planning of Industrial Investment Programs A Methodology. The Johns Hopkins University Press.\nKendrick, David A., and Lance Taylor, ”Numerical methods and Nonlinear Optimizing models for Economic Planning,” in Chenery, ed. 1971. [GAMS library: chakra.gms]\nLetson, David. 1992. 
“Investment decisions and transferable discharge permits: An empirical study of water quality management under policy uncertainty”, Environmental and Resource Economics, Vol. 2, pp. 441-458.\nLinden, Gary. 1992. An integrated approach to energy investment: A project level model for Colombia\u0026rsquo;, PhD dissertation, University of Texas, Austin, TX.\nLofgren, Hans. 1993. “\u0026lsquo;Liberalizing Egypt\u0026rsquo;s agriculture: A quadratic programming analysis”, Journal of African Economies, Vol. 2, No. 2, pp. 238-261.\nManne, Alan S., and Harry M. Markowitz. eds. 1963. Studies in Process Analysis. Cowles Foundation Monograph No. 18, John Wiley \u0026amp; Sons, Inc.\nMarkowitz, Harry M., and Alan S. Manne. 1957. “On the Solution of Discrete Programming Problems,” Econometrica, Vol. 25, No. 1.\nMennes, Loet B. M., and Ardy J. Stoutjesdijk, 1985. Multicountry Investment Analysis. The Johns Hopkins University Press.\nSamuelson, Paul. A. 1952. \u0026ldquo;Spatial Price Equilibrium and Linear Programming.\u0026rdquo; American Economic Review, vol. 42, pp. 283-303.\nThompson, Gerald L., and Sten Thore. 1992. Computational Economics: Economic Modeling with Optimization Software. San Francisco, CA: The Scientific Press.\nThore, Sten. 1991. Economic Logistics: The Optimization of Spatial and Sectoral Resource, Production, and Distribution Systems. Praeger.\nThis blog complements the in memoriam published by the Society for Computational Economics. (https://comp-econ.com/in-memoriam-david-a-kendrick-1937-2024/) .\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nIn addition to David’s courses, students in the department were well served by economics-oriented operations-research courses taught by Sten Thore, an early promoter of GAMS; see Thore (1991), and Thompson and Thore (1992)\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","excerpt":"It was with great sadness that we and many other colleagues, students, and friends learned about the passing of David Kendrick, Emeritus Professor of Economics at the University of Texas at Austin. He died on April 7, at an age of 86.","ref":"/blog/2024/06/remembering-david-kendrick-1937-2024/","title":"Remembering David Kendrick (1937-2024)"},{"body":"","excerpt":"","ref":"/authors/abhosekar/","title":"Atharv Bhosekar"},{"body":"GAMS Transfer R was first released with the GAMS major release in August 2022 and has been included in all subsequent GAMS releases. Recently, we made GAMS Transfer R open-source and available on CRAN . In this blog post, I will provide a brief overview of GAMS Transfer R, what it is, who it is aimed to help, and how to use it.\nWhat is gamstransfer and why do I need it? While GAMS syntax is powerful, it is not a general programming language. A user might prefer relying on their language of preference to perform tasks that don\u0026rsquo;t necessarily require GAMS, such as data processing and I/O from various data sources. For users working with R as their preferred language, gamstransfer is a package that enables seamless data exchange with GAMS. It offers object-oriented and intuitive syntax for reading and writing GDX files, understanding, analyzing, and modifying GAMS data in R. With internal C++ power calls, gamstransfer is highly performant and enables the transfer of bulk data to GAMS rather than handling records for individual symbols.\nHow to install gamstransfer? 
gamstransfer is available on CRAN and can be installed with a single command from your R console:\ninstall.packages(\u0026#34;gamstransfer\u0026#34;) Design philosophy gamstransfer aligns with the philosophy of other products in the transfer family, such as transfer Python and transfer Matlab . The core idea is to use a Container that encapsulates all data. A Container is state-aware, maintains links between symbols (e.g., domain links) and enables analyses and operations across multiple symbols. Read and write operations take place through Container methods read and write.\nQuick start example Reading a GDX file gams_data.gdx is a matter of just one power call. Here is an example of reading the data for the transport model from the GAMS model library.\nlibrary(gamstransfer) m = Container$new(\u0026#34;trnsport.gdx\u0026#34;) To access the parameter containing distances from this data, one can do m[\u0026ldquo;d\u0026rdquo;]. To access the records, m[\u0026ldquo;d\u0026rdquo;]$records can be used. Currently, symbol records are stored in R data.frame format.\nSuppose the data is in R (from any source such as Excel, SQL and so on..), writing it to a GDX file is easy as illustrated with the following example. Here, we use the data for the transport model again. The steps for doing this are as follows:\nCreate a Container Add symbols to the Container Use the $write power call. library(gamstransfer) m = Container$new() # create the sets i, j i = Set$new(m, \u0026#34;i\u0026#34;, records = c(\u0026#34;seattle\u0026#34;, \u0026#34;san-diego\u0026#34;), description = \u0026#34;supply\u0026#34;) j = Set$new(m, \u0026#34;j\u0026#34;, records = c(\u0026#34;new-york\u0026#34;, \u0026#34;chicago\u0026#34;, \u0026#34;topeka\u0026#34;), description = \u0026#34;markets\u0026#34;) # add \u0026#34;d\u0026#34; parameter -- domain linked to set objects i and j d = Parameter$new(m, \u0026#34;d\u0026#34;, c(i, j), description = \u0026#34;distance in thousands of miles\u0026#34;) # create some data as a generic data frame dist = data.frame( from = c(\u0026#34;seattle\u0026#34;, \u0026#34;seattle\u0026#34;, \u0026#34;seattle\u0026#34;, \u0026#34;san-diego\u0026#34;, \u0026#34;san-diego\u0026#34;, \u0026#34;san-diego\u0026#34;), to = c(\u0026#34;new-york\u0026#34;, \u0026#34;chicago\u0026#34;, \u0026#34;topeka\u0026#34;, \u0026#34;new-york\u0026#34;, \u0026#34;chicago\u0026#34;, \u0026#34;topeka\u0026#34;), thousand_miles = c(2.5, 1.7, 1.8, 2.5, 1.8, 1.4) ) # setRecords will automatically convert the dist data frame into # a standard data frame format d$setRecords(dist) Note how for the sets, the records are passed as a vector and that for the parameter d, it is passed as a data.frame. Once the data is loaded into a Container, writing it to a GDX file is easy.\nm$write(\u0026#34;trnsport.gdx\u0026#34;) Under the hood gamstransfer leverages the object-oriented programming capabilities provided by the R6 package. All symbols and containers are R6 objects, enabling gamstransfer to pass data by reference and maintain reliable links between symbols. Additionally, gamstransfer utilizes the new and open-source C++-based GDX API and the Rcpp package in R, ensuring high performance for read and write operations. We routinely test gamstransfer on datasets that have hundreds of millions of records.\nTransition from GDXRRW So far, R users have relied on the GDXRRW tool. 
With the advent of gamstransfer, GDXRRW is now deprecated and will no longer be shipped with GAMS.\nProviding Feedback and Reporting Issues For feedback, feature requests, or bug reports, please contact support@gams.com or create an issue in the gamstransfer GitHub repository .\n","excerpt":"GAMS Transfer R was first released with in August 2022 and has been included in all subsequent GAMS releases. Recently, we made GAMS Transfer R open-source and available on CRAN. In this blog post, we will provide a brief overview of GAMS Transfer R, what it is, who it is aimed to help, and how to use it.","ref":"/blog/2024/06/gams-transfer-r/","title":"GAMS Transfer R"},{"body":"","excerpt":"","ref":"/categories/gams-transfer-r/","title":"GAMS Transfer R"},{"body":"","excerpt":"","ref":"/authors/jbroihan/","title":"Justine Broihan"},{"body":" Introduction In the dynamic landscape of modern commerce, where competition is fierce and customer expectations are ever-evolving, the need for businesses to continuously optimize their operations has become paramount. Whether you\u0026rsquo;re a budding startup or an established enterprise, embracing the philosophy of optimization can be the difference between stagnation and sustainable growth.\nIn this blog post, we delve into the fundamental reasons why optimizing your business processes and resources is not just a luxury but a necessity for long-term success. From enhancing efficiency and maximizing productivity to minimizing cost and efficiently allocating resources, the benefits of optimization are multifaceted and far-reaching.\nTo get started, let\u0026rsquo;s delve into the example of painting cars in car manufacturing. This scenario serves as a practical illustration of the complexities involved in optimizing decisions, where efficiency and accuracy are paramount.\nThe Binary Paintshop Problem In the complex world of car manufacturing, each step in the production process plays a crucial role in ensuring efficiency and quality. One such critical step is painting, where cars receive their final aesthetic touch before hitting the roads. Imagine a scenario where cars of different types (A to F) arrive on a conveyor belt at a paintshop in a specific sequence, as depicted in the diagram below:\nEach car needs to be painted with a base coat that can either be white or black.\nTo simplify this scenario, let\u0026rsquo;s consider a minimal working example:\nEach vehicle type (A to F) arrives exactly twice in the sequence. One car of each type must be painted white, while the other must be painted black. The order of the arriving cars cannot be adjusted. Changing the color consumes time and resources. The objective here is to minimize the number of color changes while ensuring that each vehicle type receives both white and black paint.\nThis scenario encapsulates what is known as the Binary Paintshop Problem, a classic optimization problem that illustrates the challenges businesses face when trying to minimize resource usage while meeting specific constraints.\nYour Turn Now, imagine yourself in the driver\u0026rsquo;s seat, tasked with deciding which car receives a coat of white and which gets painted black. Grab a pair of pens with contrasting colors, and let\u0026rsquo;s dive into the challenge. 
Take the sequence of letters representing each car type and strategize how to color each letter with the fewest possible color changes, ensuring that every car type receives both white and black paint.\nMost individuals I challenge with this task typically arrive at the same conclusion: four color changes are required. Here\u0026rsquo;s how they typically paint the cars:\nShow solution Now, you are tasked with explaining the solution process to a coworker to guide them through the challenge. Consider the steps they need to take to efficiently tackle the problem.\nHere\u0026rsquo;s a common approach that most people I challenge with this task tend to formulate:\nShow solution Begin by coloring the first arriving vehicle type with white.\nPersist with white paint as long as possible, until the first vehicle type arrives for the second time.\nTransition to black paint and utilize it as long as feasible.\nRepeat this alternating pattern until every car is painted.\nAlgorithms and Heuristics This approach is what we call a Greedy Algorithm or Greedy Heuristic. A greedy algorithm is a simple and intuitive approach to solving optimization problems. It makes a series of locally optimal choices at each step with the hope of finding a global optimum. In other words, at each step, it selects the best available option without considering the future consequences. Greedy algorithms are often fast and easy to implement, but they may not always produce the best solution for complex problems. Many companies already make use of similar approaches, especially when structuring tasks in Excel Workbooks or VBA macros, aiming for quick and practical solutions.\nWe can harness the power of the described greedy algorithm by implementing it, for instance, in Python. This allows us to efficiently apply the defined rules on how to color the arriving car types, even for larger sequences. By translating the problem-solving strategy into code, we can automate the process and expedite the optimization of resource usage.\nchanges = 0\ncolors = {\u0026#34;white\u0026#34;: set(), \u0026#34;black\u0026#34;: set()}\nresult = []\ndef paint_car(colors_dict, result_list, color, car_type):\n    colors_dict[color].add(car_type)\n    result_list.append(color)\ncurrent_color = \u0026#34;white\u0026#34;\nfor car in sequence:\n    if car not in colors[current_color]:\n        paint_car(colors, result, current_color, car)\n    else:\n        # change color\n        changes += 1\n        if current_color == \u0026#34;white\u0026#34;:\n            current_color = \u0026#34;black\u0026#34;\n        else:\n            current_color = \u0026#34;white\u0026#34;\n        paint_car(colors, result, current_color, car)\nApplying the code above to our ADEBAFCBCDEF sequence, we obtain the expected four color changes.\n\u0026gt;\u0026gt;\u0026gt; print(result)\n[\u0026#39;white\u0026#39;, \u0026#39;white\u0026#39;, \u0026#39;white\u0026#39;, \u0026#39;white\u0026#39;, \u0026#39;black\u0026#39;, \u0026#39;black\u0026#39;, \u0026#39;black\u0026#39;, \u0026#39;black\u0026#39;, \u0026#39;white\u0026#39;, \u0026#39;black\u0026#39;, \u0026#39;black\u0026#39;, \u0026#39;white\u0026#39;]\n\u0026gt;\u0026gt;\u0026gt; print(\u0026#34;Number of changes:\u0026#34;, changes)\nNumber of changes: 4\nMathematical Optimization An alternative to heuristic solution approaches, such as the presented greedy algorithm, is mathematical optimization. With mathematical optimization we shift the perspective from prescribing rules to yield a solution towards precisely defining and describing the problem we try to solve.
By formulating the problem mathematically, we can articulate the objective, constraints, and decision variables in a rigorous manner. This approach enables us to explore the problem space more systematically, leveraging mathematical techniques to identify optimal solutions efficiently.\nBelow, you will find the mathematical representation of the Binary Paintshop Problem, for which we\u0026rsquo;ve devised the greedy algorithm:\n$$\min F = \sum_{i \in \mathcal{I} \hspace{0.75mm} | \hspace{0.75mm} i \u0026lt; |\mathcal{I}|} (X_i - X_{i+1})^2 \tag{1} $$\n$$ \sum_{i: (i,j) \in \mathcal{IJ}} X_i = 1 \hspace{1cm} \forall \ j \in \mathcal{J} \tag{2} $$\n$$ X_i \in \lbrace 0, 1\rbrace \hspace{1cm} \forall \ (i,j) \in \mathcal{IJ} \tag{3} $$\nTo understand the importance of mathematical optimization, especially compared to using heuristic approaches, you don’t need to read or even understand the details of the mathematical model and you can directly skip to the results. However, if you are interested in the model formulation, here is a brief explanation.\nAs a preliminary step, we introduce two distinct sets: $\mathcal{I}$ and $\mathcal{J}$. Set $\mathcal{I}$ contains all positions within our sequence, numbered sequentially, while set $\mathcal{J}$ comprises each unique vehicle type, represented by letters A to F.\nTo define the order of vehicle type arrivals at the paintshop, we introduce the set $(i,j) \in \mathcal{IJ}$. Each element in this set corresponds to a specific position $i$ in the sequence, indicating which vehicle type $j \in \mathcal{J}$ arrives at that point. For our ADEBAFCBCDEF sequence, $(1,A) \in \mathcal{IJ}$ represents car type $j=A$ arriving at position $i=1$, and $(2,D) \in \mathcal{IJ}$ specifies car type $j=D$ arriving at position $i=2$, and so forth.\nWithin our mathematical notation, $X_i$ embodies the core decision-making process. It represents the choice regarding the color used at position $i \in \mathcal{I}$ within our sequence. For instance, $X_3 = 1$ signifies painting the vehicle at position 3 black, while $X_3 = 0$ indicates using white.\nThe fundamental rule governing our paintshop problem dictates that each vehicle type $j \in \mathcal{J}$ must be painted black exactly once. Thus, for every vehicle type $j \in \mathcal{J}$, we sum over all $X_i$ across $(i,j) \in \mathcal{IJ}$ and enforce this sum to be exactly one, ensuring adherence to the rule, as represented by Equation (2). For example, for the sequence ADEBAFCBCDEF, the equations derived from this rule would appear as follows:\n$$ \text{Equation (2)}_A: X_1 + X_5 = 1 $$ $$ \text{Equation (2)}_B: X_4 + X_8 = 1 $$ $$ \text{Equation (2)}_C: X_7 + X_9 = 1 $$ $$ \text{Equation (2)}_D: X_2 + X_{10} = 1 $$ $$ \text{Equation (2)}_E: X_3 + X_{11} = 1 $$ $$ \text{Equation (2)}_F: X_6 + X_{12} = 1 $$\nTo establish our objective, we aim to minimize the number of color changes within our sequence. This objective is achieved by examining each position $i \in \mathcal{I}$ in our sequence and assessing whether its successor $i+1$ refers to a different color, as outlined in Equation (1). When the difference $X_i−X_{i+1}$ equals $1$ or $-1$, it signifies a change in color, incrementing our objective by one; a difference of zero indicates no color change.
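To make the formulation concrete, here is one way equations (1)-(3) could be written in GAMSPy. This sketch is purely illustrative and is not the notebook linked at the end of the post; the symbol names (arrives, succ, one_black, color_changes) are my own, and the squared difference is rewritten as $X_i + X_{i+1} - 2 X_i X_{i+1}$, which is algebraically equivalent for binary variables:

import pandas as pd
from gamspy import Alias, Container, Equation, Model, Parameter, Set, Sum, Sense, Variable

sequence = "ADEBAFCBCDEF"
pos = [str(p + 1) for p in range(len(sequence))]

m = Container()
i = Set(m, "i", records=pos, description="positions in the sequence")
i2 = Alias(m, "i2", alias_with=i)
j = Set(m, "j", records=sorted(set(sequence)), description="vehicle types")

# arrives(i,j) = 1 if vehicle type j shows up at position i
arrives = Parameter(m, "arrives", domain=[i, j],
                    records=pd.DataFrame([(str(p + 1), t, 1) for p, t in enumerate(sequence)]))
# succ(i,i2) = 1 if position i2 immediately follows position i
succ = Parameter(m, "succ", domain=[i, i2],
                 records=pd.DataFrame([(pos[p], pos[p + 1], 1) for p in range(len(pos) - 1)]))

x = Variable(m, "x", domain=i, type="binary")  # 1 = paint position i black, 0 = white

one_black = Equation(m, "one_black", domain=j)  # Equation (2): each type painted black once
one_black[j] = Sum(i, arrives[i, j] * x[i]) == 1

# Equation (1): for binary x, (x_i - x_i2)^2 equals x_i + x_i2 - 2*x_i*x_i2
color_changes = Sum((i, i2), succ[i, i2] * (x[i] + x[i2] - 2 * x[i] * x[i2]))

paintshop = Model(m, "paintshop", equations=[one_black],
                  problem="MIQCP", sense=Sense.MIN, objective=color_changes)
paintshop.solve()
print(paintshop.objective_value)  # two color changes for the ADEBAFCBCDEF example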
Our overarching aim is to minimize the total number of color changes, thus necessitating the definition of our objective as a minimization function.\nWith a precise mathematical representation of the problem, we can utilize off-the-shelf optimization solvers to solve our paintshop problem with mathematically proven optimality, which ensures that no superior solution exists within the constraints of our problem.\nOptimal solution Upon solving the optimization problem, we obtain the optimal number of color changes, resulting in a value of two in our example. Additionally, we derive optimal values for our decision variables $X_i$, leading to the following colored sequence:\nTransitioning to Real-world Challenges: The Multi-Vehicle Paintshop Problem In my experience presenting the Binary Paintshop Problem to a broader audience, there\u0026rsquo;s often someone astute enough to start coloring the given sequence from the back, ultimately arriving at the optimal solution of two color changes. This observation offers valuable insight: when tackling problems heuristically, where the approach to reaching a solution is specified, there can be multiple viable pathways to a solution. Moreover, these solutions can vary significantly from one another. However, the crucial question remains:\nHow do we discern the most effective approach to reaching a solution?\nIt\u0026rsquo;s important to note that starting from the back only proves efficacious for this particular sequence.\nAlso, it is crucial to remember that the example discussed here serves as a simplification of a real-world problem. In reality, we deal with a larger sequence of arriving car types, where each type may arrive any number of times. Additionally, we must accommodate specific demands regarding the number of cars of each type that need to be painted both white and black.\nWe can easily adapt our Greedy algorithm to account for specific color demands:\nchanges = 0\ncolors = {\u0026#34;white\u0026#34;: dict(demand_white), \u0026#34;black\u0026#34;: dict(demand_black)}\nresult = []\ndef paint_car(colors_dict, result_list, color, car_type):\n    colors_dict[color][car_type] -= 1\n    result_list.append(color)\ncurrent_color = \u0026#34;white\u0026#34;\nfor car in sequence:\n    if colors[current_color][car] \u0026gt; 0:\n        paint_car(colors, result, current_color, car)\n    else:\n        # change color\n        changes += 1\n        if current_color == \u0026#34;white\u0026#34;:\n            current_color = \u0026#34;black\u0026#34;\n        else:\n            current_color = \u0026#34;white\u0026#34;\n        paint_car(colors, result, current_color, car)\nRunning the above code for a random sequence of arriving car types of length 128 results in a coloring with 31 color changes.\n\u0026gt;\u0026gt;\u0026gt; print(\u0026#34;Number of changes:\u0026#34;, changes)\nNumber of changes: 31\nNow, let\u0026rsquo;s proceed to adapt our mathematical model to accommodate these real-world complexities.\n$$\min F = \sum_{i \in \mathcal{I} \hspace{0.75mm} | \hspace{0.75mm} i \u0026lt; |\mathcal{I}|} (X_i - X_{i+1})^2 \tag{4} $$\n$$ \sum_{i: (i,j) \in \mathcal{IJ}} X_i = d^{black}_j \hspace{1cm} \forall \ j \in \mathcal{J} \tag{5} $$\n$$ \sum_{i: (i,j) \in \mathcal{IJ}} (1 - X_i) = d^{white}_j \hspace{1cm} \forall \ j \in \mathcal{J} \tag{6} $$\n$$ X_i \in \lbrace 0, 1\rbrace \hspace{1cm} \forall \ (i,j) \in \mathcal{IJ} \tag{7} $$\nIt\u0026rsquo;s worth noting that we retain the same objective function (Equation (1) is the same as Equation (4)) in our adapted mathematical
model. However, to accommodate the color demands, we introduce two new constraints. Firstly, by replacing the equality constraint in Equation (2) with the demand for black-colored car types in Equation (5), we ensure that exactly $d^{black}_j$ vehicles are painted black. Secondly, to address the demand for white cars, we employ a similar equation as for black cars (Equation (6)), but sum over $1−X_i$.\nUpon solving our adapted model we arrive at a solution requiring only 11 color changes.\n\u0026gt;\u0026gt;\u0026gt; print(\u0026#34;Number of changes:\u0026#34;, changes) Number of changes: 11 Through the utilization of mathematical optimization in our example, we\u0026rsquo;ve observed significant improvements in decision-making regarding color changes. This enhancement translates to reduced time required for color changes, improved resource utilization, and potentially lowered costs or increased productivity within a given time frame.\nTransfer to Other Problem Types Expanding beyond the realm of the Binary Paintshop Problem, mathematical optimization demonstrates its efficacy across various problem types. While the problem we\u0026rsquo;ve discussed is relatively straightforward, mathematical optimization empowers us to tackle large and intricate real-world problems effectively. Its applications span across diverse fields, including agriculture, logistics, scheduling, energy systems, and many more.\nDo you encounter such complex problems in your domain? Feel free to reach out to consulting@gams.com for assistance in finding optimized solutions tailored to your specific needs. A Jupyter Notebook with all relevant code snippets can be accessed here .\n","excerpt":"This blog post discusses the necessity of business optimization through the Binary Paintshop Problem, illustrating how businesses can minimize resource usage and improve efficiency by adopting optimization strategies. Using a simple example, the post showcases effective methods to tackle operational challenges, encouraging businesses to enhance processes for better productivity and cost management.","ref":"/blog/2024/05/why-your-business-needs-optimization/","title":"Why Your Business Needs Optimization"},{"body":" GAMS Academic Program At GAMS we are proud of our long and fruitful collaboration with the academic community. Over the years we have worked closely with universities and research institutions worldwide, fostering innovation and contributing to groundbreaking research. Our Academic Portal offers FREE + full featured GAMSPy licenses, as well as GAMS community licenses.\nGet your free license now! Our Product Line for Academics The Future of Algebraic Modeling in Python Combining the high-performance GAMS system with the flexibility of Python, GAMSPy provides an intuitive and efficient way to develop complex mathematical models. With GAMSPy, you can easily create abstract, data-independent models while handling sparse data structures –all while managing the entire optimization pipeline from Python. Controlling the full process within a single, versatile environment makes GAMSPy an ideal resource for teaching, learning, and conducting research. GAMSPy also performs a lot better with larger models than many alternatives, which means it’s a great tool for those working on complex projects. Plus, with a free and full-featured GAMSPy License, we include commercial solvers with no limitations. More info on licensing is available below. Open Source Solvers: Full access to all available open-source solvers. 
Commercial Solvers: Integrated licenses for select commercial solvers, including CPLEX, XPRESS/Global, COPT, and MOSEK. Additionally, link-licenses are available for GUROBI. Transport Example This classic scenario involves managing supplies from various plants to meet demands at multiple markets for a single commodity. Nurses example The NURSES problem involves managing the assignment of nurses to shifts in a hospital. Nurses must be assigned to hospital shifts in accordance with various staffing constraints. Pickstock Example The goal is to pick a small subset of stocks together with some weights, such that this portfolio has a similar behavior to our overall Dow Jones index. GAMSPy is open source. Find out how to get started in the GAMSPy documentation.\nThe modeling system used by domain experts GAMS is a powerful modeling language designed for building and solving large-scale, complex mathematical models with ease. Renowned for its clarity and efficiency, GAMS allows users to develop highly structured, data-independent models that can be easily scaled and adapted.\nGAMS is trusted by researchers and professionals worldwide for its robustness and flexibility.\nDownloads are available for Windows, Mac and Linux.\nThe Open Source Application Generator for GAMS and GAMSPy Models GAMS MIRO (Model Interface with Rapid Orchestration) is a solution that makes it easy to turn your GAMS and GAMSPy models into interactive end user applications that you can distribute to your colleagues and even host on a webserver.\nThe user friendly interface allows you to interact with the underlying GAMS model, quickly create different scenarios, compare results and much more. MIRO's extensive data visualization capabilities provide you with the ability to create powerful charts, time series, maps, widgets, etc. with ease.\nGAMS MIRO is open source and free for all academic users!\nFinding the right license for you Your GAMS journey begins by choosing the best licensing option, and whether you’re a student, a teacher, or a researcher, we got you covered. Teachers Students Researchers Free \u0026 Full-featured! Academic License No size restrictions, solvers included! Free! GAMS Community License The full GAMS distribution with size restrictions. Pro! Academic GAMS License The full GAMS distribution with 80 % academic discount. Our licenses come in different technical flavours. Choose the one that fits your use case. Local License For GAMSPy or GAMS community edition. For individual users. Activate on up to 2 fixed nodes (e.g. a laptop and a PC). No internet connection required during use. Get your free local license! Network License For GAMSPy or GAMS community edition. For cloud based environments. Activate on up to 2 concurrent, changeable nodes. Works on any internet connected device. Get your free network license! Compute Cluster License For GAMSPy only For university system administrators Works for all users of the cluster Requires running a license server on the cluster Contact academic@gams.com Don’t have an account yet? Sign up now to access our academic user portal!\nProGAMS Academic Licenses Unlock the full power of GAMS with our Pro GAMS Academic License. Enjoy unrestricted access to all solvers, enabling you to work with large, well-established models such as ETSAP-TIMES , MAgPIE, and more.\nEach license includes professional, PhD-level support, ensuring you have the expertise needed to succeed. 
The license is perpetual for GAMS , and we're offering a substantial discount off the regular pricing.\nAdditionally, every professional license comes with a time-limited version of GAMSPy that includes the solvers you’ve purchased.\nFor more information, please contact us at sales@gams.com.\nThe GAMS Forum Do you have questions, issues, or feedback? Don’t forget to check out our community forum and get all the help you need from fellow GAMS and GAMSPy users.\nGet help from other experts Enjoy a friendly atmosphere Access our comprehensive FAQs Explore categories for all our products Stay updated with the latest announcements Leave feedback for our developers and report bugs ","excerpt":"\u003csection\u003e\n \u003cdiv class=\"full-width\"\u003e\n \u003cdiv class=\"jumbotron jumbotron-fluid academic-hero\"\u003e\n \u003cdiv class=\"container\"\u003e\n \u003ch1 class=\"display-1 mt-3\"\u003e\n GAMS Academic Program\n \u003c/h1\u003e\n \u003cp class=\"lead\"\u003e\n At GAMS we are proud of our long and fruitful collaboration with the academic community. Over the years we have worked closely with universities and research institutions worldwide, fostering innovation and contributing to groundbreaking research. \u003c/p\u003e\n\n \u003cp class=\"lead\"\u003eOur Academic Portal offers \u003cstrong\u003eFREE + full featured GAMSPy licenses\u003c/strong\u003e, as well as \u003cstrong\u003eGAMS community licenses\u003c/strong\u003e.\u003c/p\u003e","ref":"/academics/","title":"GAMS Academic Program"},{"body":"Sun, Fun, and Data: GAMS Takes Orlando by Storm! This April, the 2024 INFORMS Analytics Conference brought together over 700 analytics professionals in the sunny, palm-lined streets of Orlando, Florida. At GAMS, we were thrilled to connect with colleagues, discuss new mathematical solutions to business problems, and share our latest advancements in the integration of GAMS with Python.\nOur participation in the conference was marked by two major events: the Exhibitor Workshop and the Technology Showcase. These sessions provided us with an excellent platform to demonstrate our innovative tools and discuss the practical applications of GAMS in the real world.\nOur GAMS Exhibitor Workshop: GAMS for Python Users Hosted by Atharv Bhosekar and Adam Christensen, our workshop showcased the robustness of GAMS Python APIs. We explored real-world use cases where GAMS optimizes decision support systems alongside Python’s data-handling capabilities. Our new offering, GAMSPy, was highlighted as a groundbreaking tool that merges the high-performance GAMS execution system with the flexibility of Python, all within a single environment.\nOur GAMS Technology Showcase: Introducing GAMSPy Our showcase, also led by Atharv and Adam, centered on GAMSPy. This new tool is designed to simplify mathematical optimization by acting as a bridge between Python and GAMS, allowing for the effortless creation of complex mathematical models. Attendees got an in-depth look at GAMSPy\u0026rsquo;s unique features and benefits, enhancing their optimization solutions.\nThe conference was not only about presentations and technical discussions. 
It was an opportunity to enjoy Orlando’s vibrant atmosphere and engage in meaningful networking that helps advance careers and recognizes outstanding contributions to the field.\nWe\u0026rsquo;re already looking forward to the upcoming conferences this year, but in the meantime, we invite you to sign up for our newsletter to stay updated with all things GAMS!\nSign up for our general information newsletter to stay up-to-date! Our Abstracts Our Technical Workshop GAMS for Python users By Atharv Bhosekar and Adam Christensen\nOptimization applications (decision support systems) combine technology and expertise from many different disciplines, including numerical modeling and data science. Python is ubiquitous in data pipelines that instantiate optimization models. GAMS offers several different Python APIs that enable the efficient integration of GAMS and Python – merging the power of a specialized algebraic modeling language with a general programming language. These tools enable application builders to leverage the GAMS language where needed while being flexible enough to bend to many different data pipeline architectures.\nThis session will highlight the entire stack of GAMS/Python APIs and tools with two primary use-cases in mind:\nwhere GAMS is deployed for optimization alongside Python for data-handling and where a single environment is advantageous, such that the algebraic model is developed and solved within a Python environment. Through several real-world examples, we will explore the benefits of GAMS Transfer Python (a data API to exchange data between GAMS and Python) and our new offering GAMSPy. GAMSPy lets you leverage the high performance GAMS execution system all from a single Python environment. Our Technology Showcase\nGAMSPy: Algebraic modeling in Python By Atharv Bhosekar and Adam Christensen\nGAMSPy simplifies mathematical optimization by combining the high-performance GAMS execution system with the flexible Python language. Acting as a bridge between Python and GAMS, GAMSPy enables effortless creation of complex mathematical models. This showcase presents the unique features and benefits of GAMSPy, offering enhanced optimization solutions.\nCheck our presentation slides for more information:\nName: Size / byte: 2024_INFORMS_Analytics_Conference_Workshop.pdf 305100 2024_INFORMS_GAMS_Technology_Showcase.pdf 2414731 ","excerpt":"This April, the 2024 INFORMS Analytics Conference brought together over 700 analytics professionals in the sunny, palm-lined streets of Orlando, Florida. At GAMS, we were thrilled to connect with colleagues, discuss new mathematical solutions to business problems, and share our latest advancements in the integration of GAMS with Python.","ref":"/blog/2024/04/gams-at-the-2024-informs-analytics-in-orlando/","title":"GAMS at the 2024 INFORMS Analytics in Orlando"},{"body":" Area: Energy\nProblem class: LP / MIP\nTechnologies: SaaS, GAMS, GAMS Engine\nThe EIP Energy Investment Planner© optimizes energy supply and demand networks and helps in evaluating investment decisions with an interactive web-based user interface Background Power providers are facing multiple challenges when matching supply and demand in an ever-changing environment. Beyond using existing demand forecasts to optimally operate existing plants and trade energy to ensure delivering the right amount of electricity and heat, additional challenges exist. 
These include political and ecological developments that demand a shift from fossil primary energy towards renewable, green energy sources such as wind, solar, hydrogen, etc. Also, energy storage plays an increasingly important role in leveling out the given supply variations of several types of green energy.\nFor planning the optimal operating mode of the existing supply network, as well as evaluating scenarios involving new facilities, it is of paramount importance to consider all relevant constraints and parameters. Generally, those are of technical, engineering, financial, and political nature. Such planning scenarios require careful modeling of all relevant factors, ideally in a comprehensive mathematical optimization model.\nThe Solution Based on the long-lasting experience of its founders, proven in several successful projects, ENOSYS built the EIP Energy Investment Planner which helps energy providers such as municipal power providers in optimizing their processes and resources and consequently maximizing their profits while meeting the constraints. This is achieved by applying state-of-the art mathematical modeling and optimization, leading to informed decisions pertaining to tactical planning and strategic investments.\nThe EIP Energy Investment Planner is based on a flexible mathematical optimization model, formulated and implemented in GAMS. Depending on the input data, a linear (linear program, LP) or mixed-integer linear (mixed-integer linear program, MILP) mathematical optimization model is generated and solved with the aim of maximizing total profit over the entire planning horizon within the given revenue and cost structure. While existing constraints such as CO2 emission limits, number of yearly starts or operating hours of facilities, minimum heat supply, primary and secondary energy prices, investment costs and subsidies for new facilities, etc. are considered, the generated GAMS model dynamically adapts to the provided data, resulting in just the right model complexity.\nFig. 1 The EIP Energy Investment Planner© graphical user interface User Interface In the graphical, interactive, web-based user interface shown in Figure 1, users define the topology of their respective energy networks by intuitively placing and connecting components on the canvas, followed by filling in forms with the required parameters, without needing to delve into the details of the mathematical formulation. The optimization results are presented in graphs and tables and can be conveniently exported for individual further processing and reporting (Figures 2 and 3). Additionally, in the solution view, various scenarios can be compared right in the interactive GUI for easily conducting What-If analyses (Figure 4). Note that – while more language versions are in development – the user interface is presently available in German only.\nFig 2. Yearly CHP (combined heat and power) production and usage (example) Fig 3. Electricity production and usage report (example) Fig 4. Comparing costs and revenues of three scenarios (example) Optimization Model Complexity Being a tool for a variety of planning applications in the energy industry, the EIP Energy Investment Planner needs to be ready and fit for a wide range of model sizes and complexities. This ranges from planning a comparatively simple pumped-storage hydropower station with just four processes operating on two resources, to municipal electricity and heat networks with multiple power generation alternatives, emission control, etc. 
The latter can easily go well over a dozen processes and 5 resources or more, which can further increase when modeling more complex constraints such as green energy-quotas, H2-Electrolysis, etc.\nCertain technical or business constraints will directly impact model complexity: imposing a maximum number of starts for instance, will require solving a mixed-integer linear program (MILP), which is significantly more time consuming than solving a purely continuous linear program (LP). In the testing phase, these “integrality properties” of the created model can be relaxed for quicker model development and prototyping.\nFinally, the planning horizon and time granularity strongly influence model sizes and hence the required time and computing power to solve a model. Depending on the application scenario, time horizons vary from a year to several decades, while typical individual time intervals will range from 15 minutes up to a day. Typically, this results in tens of thousands of time periods.\nArchitecture The EIP Energy Investment Planner is a Software-as-a-Service product. While the GUI and the projects’ database are hosted and are operating on servers in the European Union (Germany), the optimization processes are typically executed via the GAMS Engine SaaS, which provides practically limitless horizontal scaling because it can start as many parallel jobs as required. To cater to specific customer needs, the EIP can optionally use a dedicated optimization server hosted in the European Union.\nFig 5. EIP Energy Investment Planner© architecture Being a Software-as-a-Service product, the EIP Energy Investment Planner is licensed on a yearly basis. Licenses typically include a computation time-quota and come with everything you need to start optimizing. There is no need to separately license GAMS or an optimization engine. Realizing the wide range of planning tasks the EIP is suitable for, licenses can be upgraded at any time. Please get in touch at team@enosys.ltd for discussing your specific needs.\nAbout ENOSYS ENOSYS Energy Optimization Systems Ltd strives for creating value for companies, their clients, and our environment by making innovative mathematical models accessible to energy producers and project developers on a licensing basis without the customer needing to master the details of the mathematical formulation.\nOur Software-as-a-service product, the EIP Energy Investment Planner, enables better decision-making for key aspects of the energy transition, including strategic investment planning and tactical planning. It reduces costs and time for modeling and/or decision-making, enhances returns, and secures company viability.\nhttps://www.enosys.ltd (German)\nWhitepaper on LinkedIn (German):\n","excerpt":"The EIP Energy Investment Planner© optimizes energy supply and demand networks and helps in evaluating investment decisions with an interactive web-based user interface","ref":"/stories/enosys/","title":"Cloud-based optimization of energy supply-demand networks and investment decisions"},{"body":"Introduction In early 2024, the last steps in the integration of the McCarl GAMS User Guide with the GAMS documentation will be completed. The McCarl Guide will no longer be pointed to or available from the GAMS docs, although it is available on Bruce McCarl’s web site or the GAMSWORLD forum (see links below). While these last steps may go unnoticed by many GAMS users, the integration process has resulted in a much better documentation that all have benefited from. 
Reaching this milestone provides us a good opportunity to look back at the history of the McCarl Guide. In this article, we consider the original purpose of the Guide, the positive impact it has had during the many years of its use, and the reasons for its retirement.\nOrigins of the McCarl Guide The GAMS User’s Guide was originally published in book form in 1988. As such, it was organized linearly (chapters, sections, etc.), had a fairly consistent style (a User’s Guide is neither a reference manual nor a tutorial, but perhaps a compromise between the two), and was formatted to fit on a printed page. The primary search mechanism was the index, which was based on the page numbers used in the book. It even came with a copy of GAMS on a 5.25 inch floppy disk! An updated version of the User’s Guide was published in 1992 in order to keep up with additions, extensions, and changes to GAMS, but updating a published User’s Guide to keep pace with a growing and evolving software product was a losing battle. GAMS eventually shifted to PDF documentation that could be continuously updated, printed and shipped to users, and made available in electronic form via GAMS release and the web.\nDuring this time, Bruce had been developing teaching material for use in his GAMS courses, so he was well-positioned to notice that the GAMS User’s Guide was not keeping pace with all the new features in GAMS and that important new capabilities were not included. Additionally, the PDF format it used didn’t make use of active links and otherwise take advantage of publication via the Web, and Bruce had a more tutorial style in mind. So with encouragement and support from GAMS, Bruce developed the McCarl GAMS User Guide, aka the McCarl Guide: a more complete and up-to-date user’s guide, more tutorial in flavor, that was linked, cross-referenced, searchable, and indexed. Originally developed in Word and made available in 2002, it was redone in the more convenient and powerful CHM format and released in 2006 as the Expanded GAMS Guide (McCarl). The guide was expanded and revised as new GAMS versions were released. With its improved content, organization, and navigability, Bruce\u0026rsquo;s new guide was a boon to all GAMS users, not just those taking his courses. We owe a debt of gratitude to Bruce for this and many other contributions.\nFig 1. Covers of GAMS User\u0026rsquo;s Guide and Solver Manual\nIntegration / Unification While the User’s Guide to the GAMS language was and is the flagship of the line, many other documents existed or were introduced (e.g. solver or tool manuals, API docs). GAMS continued to update and produce these docs and also the original User’s Guide and make them (along with the McCarl Guide) available on the Web. The existence of two Guides was a source of confusion to some users and imposed the burden of keeping not one but two Guides up to date. The difficulties only increased as time went on and differences, inconsistencies and contradictions between the two became more frequent. Finally, GAMS made the decision to move all its documentation to a doxygen-based process that has a unified look and feel, makes extensive use of links, is searchable, is based on text files (and hence git-friendly and uniformly easy to edit), and is targeted solely for Web deployment. As part of this process, the two User’s Guides were merged into one that (we hope) maintains the best qualities of each. 
Much of the content of the current guide was drawn from the McCarl guide, along with some of the tutorial flavor, although the combined guide is generally more concise and less tutorial than the McCarl guide.\nFinal Thanks The PDF for the McCarl Guide is no longer on the GAMS web site, but the content is not really gone. All of us who use the current User’s Guide benefit from Bruce’s work and the McCarl Guide he produced, and those who knew it well will see evidence of it in the content, organization, and style of the current documentation.\nThank you, Bruce!\nLinks:\nA history of the documentation effort Bruce McCarl’s Documentation Collection and Tools (including the McCarl Guide in PDF, CHM, and source form, along with other resources) Bruce’s web site GAMS content on Bruce’s web site ","excerpt":"\u003ch2 id=\"introduction\"\u003eIntroduction\u003c/h2\u003e\n\u003cp\u003eIn early 2024, the last steps in the integration of the McCarl GAMS User Guide with the GAMS documentation will be completed. The McCarl Guide will no longer be pointed to or available from the GAMS docs, although it is available on Bruce McCarl’s web site or the GAMSWORLD forum (see links below). While these last steps may go unnoticed by many GAMS users, the integration process has resulted in a much better documentation that all have benefited from. Reaching this milestone provides us a good opportunity to look back at the history of the McCarl Guide. In this article, we consider the original purpose of the Guide, the positive impact it has had during the many years of its use, and the reasons for its retirement.\u003c/p\u003e","ref":"/blog/2024/01/the-mccarl-gams-user-guide-history-and-legacy/","title":"The McCarl GAMS User Guide: History and Legacy"},{"body":"","excerpt":"","ref":"/categories/gams-engine/","title":"GAMS Engine"},{"body":"","excerpt":"","ref":"/categories/sso/","title":"SSO"},{"body":"If you use multiple web services (and who doesn\u0026rsquo;t?), you probably know how difficult it can be to manage multiple accounts and passwords, especially in corporate environments: You may have to remember or store different credentials for each service, and, in some companies, update them frequently to comply with security policies. You also run the risk of losing access to your accounts if you forget your passwords. And if someone leaves the team, IT must ensure that access to the service is revoked in a timely manner.\nIn short, password management is a hassle, for both users and IT. That\u0026rsquo;s why many business customers prefer single sign-on solutions. Single sign-on (SSO) is a technology that allows users to log in to multiple web services with a single set of credentials. SSO makes it easier for individual users, because they don\u0026rsquo;t have to remember or store additional credentials. SSO also makes it easy to revoke access when someone leaves the team, by simply removing their account from the SSO provider. To make SSO work, you need an identity provider. Identity providers are services that authenticate users and provide them with access tokens to other web services. Some well-known identity providers are Microsoft with its Active Directory, or Google and Okta with their OpenID Connect implementations.\nOver the past year, several users have approached us asking for support for SSO. 
So we recently added the ability to integrate GAMS Engine with any identity provider that uses OpenID Connect, OAuth2, or LDAP, all of which enable secure authorization for web services without sharing passwords. This allows users to log in with their existing accounts from their corporate identity provider.\nHow to add SSO in GAMS Engine Here is a very brief description of how SSO can be set up in GAMS Engine:\nThe GAMS Engine admin adds a new identity provider in GAMS Engine with the correct configuration. This is the hardest part, because the process is slightly different for each provider (Google, Microsoft, Okta, etc.). Inviters in GAMS Engine, i.e. those users who can add additional users to Engine, are granted permission by the Engine admin to use the newly created identity provider. Since Engine is \u0026ldquo;invitation only\u0026rdquo;, the process of adding users always has to go via someone with an inviter role. The inviters can now generate and share invitation tokens for new users using the new identity provider. This can be done in two different ways: Without name restriction, i.e. ANYONE using that identity provider can use the invitation token and get access to Engine With name restriction, i.e. access can be limited to a particular user id, e.g. paul@example.com The new user will then not need a separate password to log in to Engine, as long as they are logged in to their Azure, Google, or Okta account. Figure 1 shows how the communication between the participating parties works for OpenID Connect. The process is similar for OAuth or LDAP; details can be found in the GAMS Engine documentation (https://gams.com/engine/administration.html#oidc-identity-providers) .\nFig 1. Authentication process using OpenID Connect. (1) a user requests an ID token from their company\u0026rsquo;s OIDC provider. If the provider knows the user, it will send a valid ID token to the user (2). This token is then sent to GAMS Engine to request an access token (3). If the user account exists in GAMS Engine AND the ID token is valid (4), Engine will send an access token back to the user (5). The user can now use this access token when communicating with GAMS Engine.\nMore convenience and better security As you can see, connecting Engine SaaS to an ID provider makes logging in more convenient for users. Experience has shown that despite the existence of password management tools, people are not very good at managing passwords, especially when the number of different accounts they use for their work increases. Therefore, using single sign-on solutions reduces the number of passwords that have to be managed, and increases IT security for most people.\nOn top of that, if you use a third-party identity provider, any multi-factor authentication capabilities offered by that provider automatically apply to GAMS Engine. For example, if your identity provider requires you to enter a code from your phone or a biometric scan to log in, you will also need to do that to access GAMS Engine. This feature should always be enabled, especially considering the growing number of cases of identity theft. In a worst-case scenario, an attacker who gains access to your OpenID Connect or OAuth2 profile could also log in to all the connected services without any further checks. Therefore, turn on multi-factor authentication, not just for GAMS Engine!\nFinal remarks You should not ignore the risks when choosing a provider for your single-sign-on needs.
That provider will sit at a very central position in your IT infrastructure and presents a potential single point of failure. Make sure that you trust the provider and select one with a good reputation for not losing their customers\u0026rsquo; data, and with good uptime. Please contact us at support@gams.com if you need any help with this.\n","excerpt":"If you use multiple web services, you probably know how difficult it can be to manage multiple accounts and passwords, especially in corporate environments. GAMS Engine allows you to use LDAP or OAuth2 for single sign on and make user management for organizations much easier.","ref":"/blog/2024/01/using-single-sign-on-with-gams-engine/","title":"Using Single-Sign-On with GAMS Engine"},{"body":"","excerpt":"","ref":"/categories/api/","title":"API"},{"body":"","excerpt":"","ref":"/categories/gdx/","title":"GDX"},{"body":"On August 7th, 2000, the first version of GDX was distributed as part of GAMS release 19.4. Two decades later, we are happy to finally publish the source code of the expert-level API for the GAMS Data eXchange (GDX) to GitHub on November 14th in 2023 . As this code publication is accompanied by an MIT-like license , this effectively makes the GDX API open-source software and implicitly documents the internal layout of the GDX file format. To celebrate setting an important core piece of GAMS technology free, this blog post sheds some more light on what GDX actually is (both the file format and the API), why it was created, how it is used in the GAMS ecosystem today, and the effort that went into getting it ready for the open-source release. If you just want to take a look at the GitHub repository, you should follow this link .\nThe GDX file format and API The GAMS modeling language is very well suited for formulating mathematical optimization models in a way that is very close to the algebraic mathematical notation used by modelers. For small model instances, the language provides suitable data definition directives, like the table definition . In practice, instances often consist of large amounts of data. To load data easily and efficiently into a GAMS model, the GDX format and API were developed, with key contributions coming from Paul van der Eijk. There are multiple benefits to storing model data in a GDX file instead of using textual representations like CSV or data definition syntax in the GAMS language:\nSpace efficiency: GDX stores symbol records with the smallest datatypes possible and allows optional compression. Erwin Kalvelagen documented the significant size advantage of uncompressed and compressed GDX files in comparison to CSV or SQLite in multiple posts on his blog \u0026ldquo;Yet Another Math Programming Consultant \u0026rdquo; (see 1 , 2 , 3 ). Increased performance: Parsing textual data encodings is slower than reading a dense binary format with a well-defined structure. Persistency: GDX is a persistent staging database and represents a frozen snapshot of the model data. Other data sources like (relational) database systems can change frequently. Therefore GDX can be very helpful for reproducing a particular state during debugging. Platform independence: Files are portable and can be passed between Windows, Linux, and macOS machines with arbitrary endianness. Ease of loading and saving: The sets and parameters of a model can be populated from the data inside a GDX, often with just 1-2 lines of code. 
Unix philosophy : Instead of GAMS directly reading and writing various formats, GDX as a staging database allows multiple specialized and highly parameterized tools to deal with the diverse zoo of formats. Versioning: Each GDX file stores version information, which allows future GAMS versions to still read/understand GDX files written years ago. GDX as central component of the GAMS ecosystem Due to the previously listed advantages, GAMS makes intensive use of GDX and provides useful tooling for dealing with GDX files. Reading and writing GDX files from inside a GAMS model is very easy, as the GAMS language offers multiple commands for these tasks. Hence, GDX is well suited to store data for a model instance, or the results of an optimization run. GAMS also supports writing just a subset of specific symbols to a GDX file or only selectively reading from it.\nIt is easy to inspect a GDX file from the command line with gdxdump (see here ) and with a graphical user interface called \u0026ldquo;GDX Viewer\u0026rdquo; inside of GAMS Studio . Furthermore, the GAMS distribution comes with multiple tools to convert data from various formats into GDX and vice versa. GAMS Connect offers a very generic way of building data processing pipelines that convert to and from GDX. Many workflows in the GAMS ecosystem employ GDX files at one point or another. For example, GDX files are a good way to submit large chunks of data for GAMS Engine jobs.\nMaking GDX source code available for everyone GAMS has traditionally built many software components in Pascal (and its object-oriented extension Delphi) instead of the now more prevalent C (and C++). While Pascal and its derivatives arguably offer better readability due to a less terse syntax and increased safety from strong typing, it is undeniable that the C language and its offspring are the dominant languages for programming performance-critical applications. Hence, the first step towards open-sourcing GDX involved translating the Delphi source code into C++17. 
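Independent of the implementation language of the library itself, most users touch GDX only through a few lines of high-level code, for example via the GAMS Transfer Python API mentioned above. The following is a rough, hedged sketch rather than an excerpt from the GDX repository; the file and symbol names are made up, and constructor details may differ between GAMS versions:
import gams.transfer as gt
# Load an existing GDX file into an in-memory container
# ("results.gdx" and the symbol name "x" are hypothetical examples)
m = gt.Container(load_from="results.gdx")
# List the symbols stored in the file
print(m.listSymbols())
# Symbol records are exposed as pandas DataFrames
print(m.data["x"].records.head())
# Write the (possibly modified) data back out to a new GDX file
m.write("results_copy.gdx")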
This implicitly documents the layout of the GDX file format and gives everyone the ability to build and maintain their own version of the GDX library.","ref":"/blog/2023/12/gdx-source-code-published-on-github/","title":"GDX source code published on GitHub"},{"body":"","excerpt":"","ref":"/categories/github/","title":"GitHub"},{"body":" We were thrilled with the success of our Christmas party this year, a celebration that has become a cherished GAMS tradition. As is customary, we kicked off the festivities in November at HENKs Kuechen.bar in Braunschweig. We cooked together, engaged in enriching conversations, and enjoyed a wonderful evening at the bar with delightful drinks. We were very happy that almost all of our colleagues were able to take part, even though many of them had to travel a long way. Even Stefan brought his little dog Kaya.\nIn the spirit of this year, we\u0026rsquo;ve just wrapped up an incredible post-Christmas business event, featuring a series of fantastic presentations from our colleagues. The atmosphere was filled with enthusiasm as we gathered to share insights and celebrate our achievements. To top it off, we enjoyed delicious pizza, adding an extra layer of camaraderie to the occasion.\nCheers to the team and the memories we create together!\nMerry Christmas from the GAMS teams!\n\u0026times; Previous Next Close ","excerpt":"This year\u0026rsquo;s GAMS Christmas party was a fantastic success, with the entire team joyfully participating, including colleagues from the US and other countries. A cherished tradition, the celebration unfolded at Henks Küchen.bar in Braunschweig, where cooking together added to the festive spirit.","ref":"/blog/2023/11/the-2023-gams-christmas-celebration/","title":"The 2023 GAMS Christmas Celebration"},{"body":"We are still excited about the official release of GAMSPy, our interpretation of a convenient and performant modeling framework in Python. This blog post is dedicated to providing in-depth insights and answers to the most common questions you may have about GAMSPy.\nWhat is GAMSPy? GAMSPy represents the fusion of the high-performance GAMS compiler and execution system with the ease of use of a versatile programming language like Python. GAMSPy streamlines all the necessary components for a smooth and efficient optimization experience by allowing mathematical models to be written directly in Python with the convenience of our Pythonic interpretation of the proven GAMS syntax. For a more in-depth understanding, please refer to our detailed blog post .\nWhat prompted the release of GAMSPy and who is the intended audience? Throughout our years of experience, we have worked with many professionals who have expertise in optimization. We have found that these individuals can be broadly categorized into two distinct groups: coding enthusiasts and coding skeptics. The coding skeptics tend to focus on the mathematical model, often preferring to do minimal programming. Tasks such as data manipulation and visualization are often done in alternative software such as Excel. Coding enthusiasts, on the other hand, typically have a background in computer science and prefer to take full control of the programming, wanting to oversee every stage of their optimization pipeline. 
They tend to work with versatile programming languages and use tools like Numpy and Pandas for pre- and post-processing of data.\nWhile the GAMS modeling language with its domain specific nature perfectly satisfies the coding skeptics by freeing them from extensive programming tasks and enabling the composition of models using a syntax closely resembling algebraic notation, the coding enthusiasts express a yearning for greater adaptability. They voice concern about the inconvenience of switching between different environments for tasks involving data manipulation and optimization.\nWith GAMSPy we now embrace the community of coding enthusiasts. No need to switch between environments anymore. GAMSPy allows you to streamline the entire optimization pipeline in Python: from initial data input, through data cleansing and preprocessing, to model, symbol, and constraint declaration, model solving, and post-processing.\nHow is GAMSPy different from GAMS? GAMSPy uses the GAMS execution system to do the heavy lifting. Hence, they share the same idea of how to use sets and declare indexed constraints. They both allow data independent modeling and include a variety of different solvers. The main difference boils down to personal preference. Coding enthusiasts will fall in love with GAMSPy, while coding skeptics rather stick with the standalone and plain GAMS modeling language.\nWill GAMSPy replace GAMS? It\u0026rsquo;s important to clarify that GAMSPy is not intended to replace or supersede the standalone GAMS modeling language; rather, it serves as a complementary tool within the optimization tool stack. Both GAMS modeling language and GAMSPy use the robust GAMS compilation and execution system and offer similar functionality. The choice between them comes down to individual preference. This ensures that users can choose the tool that best suits their coding style and preferences, fostering a harmonious integration of both within the optimization landscape.\nHow is GAMSPy different from other Python based modeling frameworks? The main difference between GAMSPy and other modeling frameworks based on the Python language is that GAMSPy offloads the heavy lifting to the efficient and robust GAMS compilation and execution system. While Pyomo and similar modeling frameworks depend on Python\u0026rsquo;s relatively slower execution of for loops and list comprehensions to represent indexed constraints, GAMSPy leverages the well established idea of representing model objects in a declarative and pythonic way. This strategic choice results in superior out-of-the-box performance.\nWhile conventional modeling frameworks based on the Python language generate model instances where the mathematical formulation is already resolved into its individual components and populated with instance data, these instances often grow big and challenging to manage. In adherence to the GAMS philosophy, GAMSPy takes a different approach, creating indexed constraints and data-independent models. This design choice not only aligns with GAMS principles but also facilitates easier maintenance and handling of models, offering a notable advantage over alternative frameworks.\nIn which scenario do I benefit from using GAMSPy instead of GAMS? 
If you happen to be a coding enthusiast who has been using a glue-code methodology — for example, creating a Python script for data pre-processing and post-processing, supplemented by system calls for GAMS model solving — you will definitely benefit from switching to GAMSPy.\nHowever, GAMSPy seamlessly meets a much broader spectrum of needs. It is a perfect fit if you want to run arbitrary models, if you are looking for fast prototyping, or if you want to combine optimization and heuristics without switching environments.\nBut I thought flexibility is a double-edged sword when it comes to performance? As we discussed in a previous blog post , performance is a critical factor when dealing with data and optimization tasks. GAMSPy ensures you don\u0026rsquo;t have to compromise on performance. The heavy lifting – model compilation and execution – is seamlessly managed by the powerful GAMS system, ensuring minimal overhead compared to using the standalone GAMS modeling language. Stay tuned as we will provide some performance insights in a future blog post.\nWhat do I need to do to try GAMSPy myself? GAMSPy can be easily installed via pip from the command line. Just follow the instructions from our Social Media Post and/or our documentation . It is shipped with a mini GAMS installation and a demo license. Thus, no separate GAMS installation is required and you are only a single command away from diving into the GAMSPy experience.\nWhat can I expect from GAMSPy in the future? While GAMSPy is released as a beta, you can expect regular updates and bug fixes, even though we already have high test coverage. All the basic features are already included in this release, but we are planning to provide refined GAMS MIRO and GAMS Engine integrations. Apart from that, we would love to hear from you. Feel free to contact gamspy@gams.com for questions, feedback, and feature requests.\n","excerpt":"Gain insights and answers to the most common questions related to the release of GAMSPy.","ref":"/blog/2023/11/gamspy-insights/","title":"GAMSPy Insights"},{"body":"GAMSPy combines the high-performance GAMS execution system with the flexible Python language, creating a powerful mathematical optimization package. It acts as a bridge between the expressive Python language and the robust GAMS system, allowing you to create complex mathematical models effortlessly.\nIn this section, we offer an overview of GAMSPy\u0026rsquo;s distinctive features and benefits to assist you in finding the ideal modeling language and environment for your needs.\nModel Instances vs. Mathematical Models Creating robust, readable, and maintainable models is an art rooted in algebraic formulation. The ability to express mathematical models in a language that retains the essence of algebraic notation and is machine-processable is paramount.\nWith this goal in mind, GAMSPy has been developed to generate mathematical models instead of model instances. Think of a mathematical model as a pure representation of mathematical symbols, devoid of specific data. In contrast, a model instance is the unrolled and constant-folded representation of a model with its actual data. 
In a model instance sum expressions are resolved into their individual components and equation domains are resolved to individual scalar equations.\nMathematical Model $ \\sum_{i \\in \\mathcal{I}} \\frac{p_{i,j} - q_i}{a_j} \\cdot x_{i,j} \\le \\sum_{k \\in \\mathcal{K}} d_{k,j} \\hspace{1cm} \\forall j \\in \\mathcal{J} $\nModel Instance $ 5 \\cdot x_{i1,j1} + 3 \\cdot x_{i2,j1} + 2 \\cdot x_{i3,j1} \\le 7 $\n$ 2 \\cdot x_{i1,j2} + 6 \\cdot x_{i2,j2} + 4 \\cdot x_{i3,j2} \\le 10 $\nEspecially for complex models with many variables and equations, a model instance can become hard to manage. Therefore, GAMSPy leverages the idea of a standalone, data independent, and indexed representation of a mathematical model. This approach preserves the essence of the original algebraic formulation while ensuring manageability, even in the face of intricate complexities, e.g., complex mappings of indices to subsets.\nSparsity When delving into the intricacies of modeling languages, one key aspect of any modeling language is how it handles sparse multidimensional data structures. Many optimization problems are subject to a particular structure in which the data cube has a lot of zeros and only a few non-zeros, a characteristic referred to as sparsity. In optimization problems, it is often necessary to account for complex mappings of indices to subsets.\nWhile you might be used to taking on the full responsibility to make sure only the relevant combinations of indices go into your variable definition in the Python modeling world, we especially focused on transferring the convenience and mindset of GAMS into Python when designing GAMSPy. Thus, GAMSPy automatically takes care of generating variables only for the relevant combinations of indices based on the algebraic formulation. This feature is particularly useful when working with a large multidimensional index space, where generating all possible combinations of indices would be computationally expensive and unnecessary. GAMSPy quietly handles this task in the background, allowing you to focus on the formulation of the model.\nPerformance GAMSPy leverages the GAMS backend to execute assignment operations, generate and solve models. Since GAMS has been optimized over decades for exactly these tasks and comes with a broad set of state-of-the-art optimization solvers, it provides outstanding performance for model generation and solving models. This is the main source of the speed of GAMSPy.\nSee also: Performance in Optimization Models: A Comparative Analysis of GAMS, Pyomo, GurobiPy, and JuMP Optimization Pipeline Management Working on an optimization problem does not solely include the mathematical model but also includes tasks regarding data pre- and postprocessing as well as visualization. At GAMS, we prioritize making these tasks as comfortable and efficient as possible. With GAMSPy we provide a unique way to streamline the complete optimization pipeline starting with data input and preprocessing followed by the implementation of the mathematical model and data postprocessing and visualization, in a single, intuitive Python environment. GAMSPy allows you to leverage your favorite Python libraries (e.g. Numpy, Pandas, Networkx) to comfortably manipulate and visualize data. 
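To give a rough feel for what such a pipeline looks like in code, here is a minimal, hedged GAMSPy sketch (not an excerpt from the GAMSPy documentation; the data and names are made up, and it assumes a working pip-installed gamspy environment) that feeds a few Pandas records into an indexed model and solves it:
import pandas as pd
from gamspy import Container, Set, Parameter, Variable, Equation, Model, Sum, Sense

m = Container()
# Sets and data (illustrative values only)
i = Set(m, name="i", records=["plant1", "plant2"])
j = Set(m, name="j", records=["city1", "city2"])
cap = Parameter(m, name="cap", domain=i,
                records=pd.DataFrame([("plant1", 350), ("plant2", 600)]))
dem = Parameter(m, name="dem", domain=j,
                records=pd.DataFrame([("city1", 325), ("city2", 300)]))
cost = Parameter(m, name="cost", domain=[i, j],
                 records=pd.DataFrame([("plant1", "city1", 2.5), ("plant1", "city2", 1.7),
                                       ("plant2", "city1", 1.8), ("plant2", "city2", 1.4)]))
x = Variable(m, name="x", domain=[i, j], type="Positive")
# Indexed constraints, written once over the full domain
supply = Equation(m, name="supply", domain=i)
demand = Equation(m, name="demand", domain=j)
supply[i] = Sum(j, x[i, j]) <= cap[i]
demand[j] = Sum(i, x[i, j]) >= dem[j]
transport = Model(m, name="transport", equations=m.getEquations(),
                  problem="LP", sense=Sense.MIN,
                  objective=Sum((i, j), cost[i, j] * x[i, j]))
transport.solve()
# Results come back as pandas DataFrames for further post-processing
print(x.records)
Because the model stays indexed and data-independent, the same few declarations work unchanged when the records come from a much larger data source.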
GAMSPy also lets you import and export data and optimization results in many data formats.\nOn top of that, GAMSPy works seamlessly with GAMS MIRO and GAMS Engine, which allows you to run your GAMSPy optimization on your local machine, on your own server hardware (GAMS Engine One), or on GAMS Engine SaaS, hosted on the AWS cloud infrastructure. We make sure you have access to the right resources, at any time.\nHow is GAMSPy different from GAMS? GAMS is a domain-specific declarative language that incorporates procedural elements from a general-purpose programming language, such as loops and conditional statements. In contrast, Python is a general-purpose programming language where these elements are already inherent. With the integration of the GAMSPy library, features like indexed assignment statements or the concise equation definitions of the domain-specific GAMS language are now made available in Python. This facilitates a seamless connection between the specialized modeling capabilities of GAMS and the flexibility and versatility of Python.\nSummary: The Benefits of GAMSPy Abstract algebraic data-independent modelling Convenient handling of sparse data structures Control the whole optimisation pipeline from Python Much better performance with big models than alternative approaches Get a Glimpse of GAMSPy ","excerpt":"Learn how GAMSPy revolutionizes mathematical optimization by seamlessly integrating the power of Python\u0026rsquo;s expressive language with the efficiency of GAMS, offering a streamlined and high-performance solution for complex modeling challenges.","ref":"/blog/2023/11/introducing-gamspy/","title":"Introducing GAMSPy"},{"body":"The INFORMS Annual Meeting 2023 was held in Phoenix, and we had an eventful weekend at our booth. Our workshop, presented by Atharv Bhosekar and Steven Dirkse, proved to be a success. We were delighted to see a diverse audience, ranging from newcomers to the GAMS framework to industry veterans. Their presence and engagement led to a productive session filled with intriguing questions. We appreciated the audience\u0026rsquo;s curiosity and professionalism.\nBesides our workshop, we brought another exciting presentation, this time focusing on how Python integrates with GAMS in an application pipeline. Atharv kicked off the presentation, delving into GAMS Transfer and embedded code, followed by a discussion of the Control API and GAMSPy, held by Steven. The audience displayed strong interest, particularly in the GAMSPy content.\nNew for us was our booth\u0026rsquo;s screen, which has been a great asset, allowing us to showcase products like Studio and MIRO and revisit key parts of our presentations, including GAMSPy. Additionally, our Pop-a-Shot competition was a hit, drawing people to our booth and fostering engaging conversations with potential users. Overall, this conference has been a tremendous success for us, and we\u0026rsquo;re eagerly looking forward to the next opportunity to get together with colleagues from this exciting field.\nOur two presentations this time:\nThe Best of Both Worlds - Integrating Python and GAMS The Best of Both Worlds - Integrating Python and GAMS Presented by: Dr. Atharv Bhosekar \u0026amp; Dr. Steven Dirkse Optimization applications combine technology and expertise from many different areas, including model-building, algorithms, and data-handling. 
Often, the gathering, pre/post-processing, and visualization of the data is done by a diverse organization-spanning group that shares a common bond: their skill in and appreciation for Python and the vast array of available packages it provides. For this reason, GAMS offers multiple ways to integrate with Python on the data-handling side, as well as offering some packages of our own (e.g. GAMS Transfer, GAMS Connect). In this talk, we will explore the benefits of this integration and demonstrate them using a real-world example complete with results on performance.\nApplication-building with GAMS: Model Development, Data Transfer, and Deployment Application-building with GAMS: Model Development, Data Transfer, and Deployment Presented by: Dr. Atharv Bhosekar \u0026amp; Dr. Steven Dirkse The General Algebraic Modeling System (GAMS) allows modelers to create optimization-based decision support applications. In this workshop, our first focus will be on model development with GAMS. We will explore what a model entails, how to solve different problem types (linear, mixed-integer, non-linear) using GAMS, as well as how to switch solvers and separate the model code from input data using GDX. Additionally, we will demonstrate how a GAMS model can be integrated and transformed into an effective application. An essential step in this process is ensuring efficient data transfer. To achieve this, we will showcase the use of the embedded code facility, GAMS Transfer API, and tools like GAMS Connect. Lastly, we will introduce the GAMS Engine, a powerful tool for solving GAMS models either on-premises or in the cloud.\n","excerpt":"The INFORMS Annual Meeting 2023 was held Phoenix and we\u0026rsquo;ve had an eventful weekend at our booth. Have a look to our report and presentations held in Phoenix.","ref":"/blog/2023/10/gams-at-the-informs-2023-in-phoenix/","title":"GAMS at the INFORMS 2023 in Phoenix"},{"body":"\u003c!DOCTYPE html\u003e \u003c!DOCTYPE html\u003e GAMS for Academic Teachers and Students Facilitating Mathematical Optimization for Students, Faculty, and Researchers with GAMS: Whether applied in the classroom or harnessed for research, academic users can utilize GAMS at no cost and access all of GAMS's extensive features and powerful performance. For Students \u0026 Academics: Request a Community license Community License For students, academic and non-profit users Community licenses are for non-commercial, non-production use, and are a popular option for students at degree granting institutions, or for non-profit organizations. Generate and solve linear models (LP, MIP, and RMIP) that do not exceed 5000 variables and 5000 constraints For all other model types the model cannot be larger than 2500 variables and 2500 constraints Additional limits enforced by some solvers Limited to 12 months Cannot be combined with a professional license Visit our academic portal and signup with your institutional email address to request your license. For Instructors and Teachers: Request a Course license for your Students Course License For teachers and course lecturer to share with your students Academic Course licenses are designed for teachers, students and members at degree granting institutions. These licenses are for non-commercial, non-production use and only for academic and teaching purposes. To obtain your free course license, please contact sales@gams.com. 
Make sure to provide the following information in your email:\nName of the university Name of the department Name of the course instructor Name of the course Duration of the course Current GAMS license ID of the instructor (if applicable) Additional comments ","excerpt":"\u003c!DOCTYPE html\u003e\n\u003chtml lang=\"en\"\u003e\n\u003c!-- Girish Request Form Header --\u003e\n\u003c!DOCTYPE html\u003e\n\u003chtml lang=\"en\"\u003e\n \u003chead\u003e\n \u003cmeta charset=\"UTF-8\"\u003e\n \u003cmeta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\"\u003e\n \u003c!-- \u003ctitle\u003eForm Submission\u003c/title\u003e --\u003e\n \u003clink rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/bootstrap@5.0/dist/css/bootstrap.min.css\"\u003e\n \u003cscript src=\"https://cdnjs.cloudflare.com/ajax/libs/jquery/3.7.1/jquery.min.js\" integrity=\"sha512-v2CJ7UaYy4JwqLDIrZUI/4hqeoQieOmAZNXBeQyjo21dadnwR+8ZaIJVT8EE2iyI61OV8e6M8PP2/4hpQINQ/g==\" crossorigin=\"anonymous\" referrerpolicy=\"no-referrer\"\u003e\u003c/script\u003e\n \u003c!--\u003cstyle\u003e\n form { border: 1px solid #ccc; border-radius: 10px; padding: 10px; }\n \u003c/style\u003e --\u003e\n \u003c/head\u003e\n\u003c!-- Girish Request Form Header END--\u003e\n\n\u003cbody\u003e\n\u003csection\u003e\n \u003cdiv class=\"full-width \"\u003e\n \u003cdiv class=\"jumbotron jumbotron-fluid\"\u003e\n \u003cdiv class=\"container text-left\" \u003e \u003c!-- Container für Alles --\u003e\n \u003c!-- Headline Section --\u003e\n \u003csection\u003e\n \u003ch1 class=\"display-4\"\u003e\n GAMS for Academic Teachers and Students\n \u003c/h1\u003e\n \u003cp class=\"lead\"\u003e\n Facilitating Mathematical Optimization for Students, Faculty,\n and Researchers with GAMS:\n \u003cbr\u003e\n Whether applied in the classroom or harnessed for research, academic users can utilize\n GAMS at no cost and access all of GAMS's extensive features and powerful performance.\n \u003c/p\u003e","ref":"/trygams/academic_trial/","title":"Academic Trial License"},{"body":" Try before you buy Evaluate our software before making a purchase. We offer the following options: Time Limited Evaluation License For commercial users Our non-academic customers can request time limited evaluation licenses for the purpose of testing GAMS under real world conditions. No functional restrictions Use GAMS for up to 30 days free of charge Try different solvers before you purchase No production use Contact our sales team at sales@gams.com to request an evaluation license. ","excerpt":"\u003csection\u003e\n \n \n \u003cdiv class=\"full-width \"\u003e\n \n \u003cdiv class=\"jumbotron jumbotron-fluid\"\u003e\n \n \u003cdiv class=\"container text-left\"\u003e\n \n \u003ch1 class=\"display-4\"\u003eTry before you buy\u003c/h1\u003e\n \u003cp class=\"lead\"\u003e\n Evaluate our software before making a purchase. \n We offer the following options:\n \u003c/p\u003e\n \n\n\n \u003cdiv class=\"card shadow mt-5\"\u003e\n \u003cdiv class=\"card-body\"\u003e\n \u003ch3 class=\"card-title section-subtitle\"\u003eTime Limited Evaluation License\u003c/h3\u003e\n \u003cdiv class=\"text-muted mb-3\"\u003e\n For commercial users\n \u003c/div\u003e\n \u003cp class=\"card-text\"\u003e\n Our \u003cstrong\u003enon-academic customers\u003c/strong\u003e can request time limited evaluation licenses for the purpose of testing GAMS under real world conditions. 
\n \u003cul\u003e\n \u003cli\u003eNo functional restrictions\u003c/li\u003e\n \u003cli\u003eUse GAMS for up to 30 days free of charge\u003c/li\u003e\n \u003cli\u003eTry different solvers before you purchase\u003c/li\u003e\n \u003cli\u003eNo production use\u003c/li\u003e\n \u003c/ul\u003e\n \u003cstrong\u003eContact our sales team at \u003ca href=\"mailto:sales@gams.com\"\u003esales@gams.com\u003c/a\u003e to request an evaluation license.\u003c/strong\u003e\n \u003c/p\u003e","ref":"/trygams/commercial_trial/","title":"Commercial Trial License"},{"body":"","excerpt":"","ref":"/trygams/","title":"Trygams"},{"body":"This year\u0026rsquo;s annual conference of the Society for Operations Research (GOR e.V.) took place in Hamburg from August 29 to September 1. The contributions revolved around the topics of Decision Support and Choice-Based Analytics for a disruptive World.\nThis year we sent a slightly larger team to the conference to meet colleagues, learn new things and gather experiences. Our team was scheduled to give several presentations. Therefore, we have uploaded the abstracts of each presentation again for your viewing.\nMany thanks to all who were involved in organizing this great conference.\nWe are looking forward to seeing everyone again next year in Munich!\n\u0026times; Previous Next Close Our four presentations this time:\nYou can find the presentation slides further below.\nScalable Optimization in the Cloud with GAMS and GAMS Engine Scalable Optimization in the Cloud with GAMS and GAMS Engine by Stefan Mann, Frederik Proske GAMS Engine SaaS is a cloud-based service that allows users to run GAMS jobs on a scalable and flexible infrastructure, currently provided by Amazon Web Services (AWS). It was launched in early 2022 and has since attracted a variety of customers who benefit from its features, such as horizontal auto-scaling, instance sizing, zero maintenance, and simplified license handling. GAMS Engine SaaS is especially suitable for workloads that require large amounts of compute power and can be adapted to many different scenarios. In this presentation, we show a case study of a large international consultant agency that uses GAMS Engine SaaS to run Monte-Carlo simulations of a large energy system model in response to varying climate change scenarios. We describe how they leverage the GAMS Engine API to submit and monitor their jobs, how they select the appropriate instance type for each job, and how they can use custom non-GAMS code on Engine SaaS. We also discuss the challenges and benefits of using GAMS Engine SaaS for this type of application, and provide some insights into the future development of the service.\n\u0026laquo; Due to confidentiality issues, we can not publish the slides! \u0026raquo;\nA tour of the GAMS ecosystem in 2023 A tour of the GAMS ecosystem in 2023 Andre Schnabel The GAMS ecosystem surrounding its modeling language has significantly evolved in recent years. There have been major additions like GAMS MIRO, GAMS Transfer, GAMS Engine, and a vastly more modern integrated development environment with GAMS Studio. MIRO allows users to rapidly obtain an interactive web application frontend for a GAMS model with extensive visualization options for the end-user. Transfer makes working with data seamless and more natural in languages such as Python and Matlab. Engine facilitates running GAMS jobs in a cloud environment and thus making even hard problems tractable. Studio is a powerful IDE for GAMS with tight integrations to MIRO and Engine. 
This talk will give a tour through these new tools and show small examples of their application.\nFurthermore, the talk will present the Connect framework. Connect is inspired by the ETL (extract, transform, load) paradigm for integrating data from various sources via agents. With Connect, the user can read data in many different representations into the Connect database. The data is then potentially transformed before being exported into a file format of choice. This approach is general and will simplify migrating data between different formats with optional processing as part of an optimization pipeline built around a GAMS model. We take a short look at how the modern GAMS ecosystem can be used to create a small web application to determine the shortest hiking tour that collects all hiking awards in the Harz mountains in central Germany.\nGetting the Best of Both Worlds - GAMS and Python Getting the Best of Both Worlds - Ways to Combine Python’s Flexibility with a Domain Specific Modeling Language in Applied Operations Research by Justine Broihan Applied mathematical optimization involves solving complex real-world problems, and the model itself is not always the only thing that has to be taken care of. Applied Operations Research also requires careful handling and visualization of data in both the development and the production stage. However, data manipulation, preprocessing, and postprocessing can be time-consuming and tedious, especially when working with a domain-specific modeling language such as GAMS. In this talk, we aim to explore how the flexibility of Python and its vast array of available packages can be combined with the efficiency of a domain-specific programming language like GAMS. We will demonstrate this integration using a real-world example and present results on performance.\nOptimizing Agriculture in the Cloud Optimizing Agriculture in the Cloud: A Real-Life Example of Model Deployment with GAMS MIRO by Robin Schuchmann With the evolution of software and hardware, the way optimization software is used has changed significantly. Today, users prefer logging into online services to perform their optimization on centralized compute resources. These trends do not stop at GAMS, and in recent years, various new developments have been initiated to meet the changing requirements. In this talk, a real-life example in cooperation with the Leibniz Centre for Agricultural Landscape Research (ZALF) is used to show what a modern software solution with GAMS looks like. We will explore the deployment of a GAMS model using GAMS MIRO, a powerful tool for creating a graphical user interface that can be run in cloud environments. The model (Multi Objective Decision support tool for Agro ecosystem Management - MODAM) is a bioeconomic whole-farm model that supports agricultural land users in decision making on optimal resource allocation. It generates optimal production patterns and a number of economic and ecological indicators. From modeling to visualization to integration into existing IT infrastructure - real-world applications like these have many aspects. 
We will discuss how to efficiently organize projects and work around obstacles during the continuous development process by adapting to changing requirements until the application achieves its intended goals.\nCheck our presentation slides for more information:\nName: Size / byte: Getting the Best of Both Worlds-Broihan.pdf 2335079 Optimizing Agriculture in the Cloud_Schuchmann.pdf 3724042 TourDeGAMS-Schnabel.pdf 16035573 ","excerpt":"\u003cp\u003eThis year\u0026rsquo;s annual conference of the Society for Operations Research (GOR e.V.) took place in Hamburg from August 29 to September 1.\nThe contributions revolved around the topics of Decision Support and Choice-Based Analytics for a disruptive World.\u003c/p\u003e\n\u003cp\u003eThis year we sent a slightly larger team to the conference to meet colleagues, learn new things and gather experiences.\nOur team was scheduled to give several presentations. Therefore, we have uploaded the abstracts of each presentation again for your viewing.\u003c/p\u003e","ref":"/blog/2023/09/gams-at-the-or2023-in-hamburg/","title":"GAMS at the OR2023 in Hamburg"},{"body":"","excerpt":"","ref":"/authors/cgouel/","title":"Christophe Gouel"},{"body":" About the author\nChristophe Gouel is an academic economist and senior research fellow at INRAE in the Paris-Saclay Applied Economics research unit.\nEmail: christophe.gouel@inrae.fr Are you ready to transform your GAMS modeling class with modern tools and streamline grading? In this blog post, I\u0026rsquo;ll show you how I have recently adjusted my own workflow.\nAs a teacher of a master-level course on General Equilibrium Modeling, I know the importance of practice for students to master the subject. I give many exercises to students, some of which they do in class with my help, and others they do on their own and submit for grading. However, receiving over 20 programs to check for each session was daunting. In this blog post, I will show you how I turned this dreaded work into a streamlined, efficient process.\nTo achieve this, I use GitHub Classroom , which is a free service provided by GitHub for teachers to manage assignments using GitHub repositories. I also use a continuous integration workflow in GitHub to automate the execution of GAMS programs, and cloud-based development tools to provide feedback.\nTo make this concrete, I have created a public repository with an introductory exercise that I use in my class as an example. You can find it here: https://github.com/economic-modeling-master/partial-eq-1-sector .\nGitHub Classroom Using GitHub Classroom, I can manage all assignments and have access to a dashboard that presents each class, assignment, and student repository. When I send out an assignment, students are given a link that creates a repository copied from a target repo for them to submit their solutions. GitHub Classroom makes grading assignments more efficient, as it allows teachers to see all students\u0026rsquo; repositories for a particular assignment and check whether they have submitted their work before the deadline. Assignments can be done individually or in groups, and in the latter case one repo is created per group.\nFrom the students\u0026rsquo; perspective, using GitHub Classroom just requires a GitHub account, but no installation or knowledge of Git. Without using Git, they can submit their assignments simply by uploading files manually as on any other website, in which case a commit is automatically created. 
So it is not limited to computer science classes and students skilled enough to learn GAMS can use it without trouble.\nGitHub Classroom includes features to automatize grading. For example, by running unit tests on students code, but this does not seem adapted to GAMS programs. However, I build on similar tools to automatize the run of students\u0026rsquo; solutions.\nAutomatic run of students solutions Once all students have committed their solutions to their respective repositories, I use a continuous integration workflow in GitHub to run all their GAMS programs without having to do it manually. After each student commit, a virtual machine is launched with a fresh GAMS install to run all .gms files present in the repository. The output files (gdx, log, and lst) are saved in a zip file for me to check as shown in this gif:\nIf their gms file does not compile, instead of a green checkmark (✓) indicating compilation, there is a red cross mark (❌). In this case, I know that I have to check their code to find the mistake, which I can also do in the cloud using cloud-based development tools.\nThe automatic execution of GAMS is triggered by having a YAML file with the correct instructions in each repository. You can find an example of such a YAML file in workflow.yml , which makes use of the official GAMS docker image , which has been made available with GAMS 44.\nClick for details of workflow.yml name: Test model solution with GAMS on: [push] jobs: build: runs-on: ubuntu-latest container: gams/gams:latest steps: - name: Checkout uses: actions/checkout@v3 - name: Run GAMS run: | cd $GITHUB_WORKSPACE for gmsfile in *.gms do gams \u0026#34;${gmsfile}\u0026#34; lo=4 gdx=\u0026#34;${gmsfile/.gms/}\u0026#34; cat \u0026#34;${gmsfile/gms/lst}\u0026#34; done shell: bash - name: Archive results uses: actions/upload-artifact@v3 with: name: gams-results-files path: | ./*.lst ./*.log ./*.gdx Fixing students errors and providing feedbacks In case of errors in students\u0026rsquo; code, I use GitHub\u0026rsquo;s Pull Request interface to propose solutions and provide feedbacks. The Pull Request interface allows me to comment on code line by line, which is perfect for fixing minor errors.\nFor more complex errors, it might be necessary to change the code and launch GAMS to check the new solution. I could download the code to modify it on my computer before uploading back the corrected version, but this would add a lot of frictions. Instead, I am relying on Codespaces which allows me to start a virtual machine in the cloud. The difference with the previous virtual machine that automatically launched GAMS is that Codespaces provides a persistent machine with an editor (Visual Studio Code for the Web), a terminal to launch GAMS, and a link to the original repo to push back modifications (contrary to what the gif below may suggest setting up the Codespaces takes about 2 minutes, during which I jump to another project to grade).\nHow to deal with licensing? GAMS requires a license to run. But, with GAMS version 44.0 , a demo license is included back in the GAMS distribution. It is valid for approximately 5 months, which is enough given the frequent GAMS releases, as long as one uses the latest GAMS distribution. For the purpose of this class, where the students have to solve small models, relying on the demo license with its limits is normally sufficient, but this may not be the case for everyone. 
In those cases, there is also the option to get a community license (details on this page), which allows solving of larger problems and is available for free for students.\nIf you need to use a license file, I can see at least two approaches. The first would be to store a license file in each repository of assignments and to move it automatically to where GAMS is installed on the virtual machine. Since, for my class, the repositories for exercises are all private repositories, the license would not be shared outside the class and each year, GAMS provides me with a temporary teaching license for my students. For public projects, storing in this way the license file is not an option. In this case, the license can be stored in a GitHub secret and copied to GAMS folder after the installation.\nAdditional benefits Even if this setup does not require the students to learn how to use Git, it has the benefit of familiarizing them with modern development tools: GitHub, Markdown, Continuous Integration, and even development in the cloud; all skills that can be useful for modelers in and out of academia.\nAnother benefit of this approach is that it can be scaled up. Running the class with a hundred students would not be more difficult. It is possible to share the teacher access to GitHub Classroom with teaching assistants who would take care of part of the load.\nIn conclusion, by using modern tools, you can transform the way you teach GAMS modeling and make grading more efficient and partly automated.\n","excerpt":"Are you ready to transform your GAMS modeling class with modern tools and streamline grading? In this blog post, our guest author Christophe Gouel from INRAE explains how he adjusted his workflow.","ref":"/blog/2023/08/modern-gams-teaching-gams-with-github-classroom/","title":"Modern GAMS teaching - GAMS with GitHub Classroom"},{"body":"Introduction In today\u0026rsquo;s fast-paced business world, decision-makers face increasingly complex challenges that require sophisticated optimization techniques. With so many modeling languages and frameworks available, it can be difficult to decide which one is the best fit for your specific needs.\nWe believe that GAMS stands out as a powerful tool for modeling and solving optimization problems. However, we also understand that our clients have many questions about why they should choose GAMS over other popular modeling frameworks such as Pyomo, GurobiPy, or JuMP.\nWhat this post is about In this blog post, we\u0026rsquo;ll provide a comparison of GAMS with these other modeling frameworks, highlighting some of the unique benefits and features that set GAMS apart from the competition. Whether you\u0026rsquo;re a business leader or a researcher, we\u0026rsquo;re confident that this post will help you understand why GAMS is a smart choice for modeling and solving complex optimization problems.\nOne key aspect of any modeling language is how it handles sparse multidimensional data structures. In this blog post, we\u0026rsquo;ll explore how each of the modeling frameworks - Pyomo, GurobiPy, JuMP, and GAMS - approach this issue and how each approach performs.\nIt\u0026rsquo;s worth noting that each of these modeling languages is a capable and successful tool for modeling and solving optimization problems. 
However, by looking closely at their individual strengths and weaknesses, we hope to provide our users with a better understanding of how GAMS stands out as a powerful and versatile modeling language.\nProblem Characteristics we face in Operations Research Many optimization problems are subject to a particular structure in which the data cube has a lot of zeros and only a few non-zeros, a characteristic referred to as sparse. In optimization problems, it is often necessary to account for complex mappings of indices to subsets. For example, in the complex world of supply chain management, the sets of products, production units, production plants, distribution centers, and customers are the building blocks of a model. The uniqueness of each product often dictates that only a subset of production units possess the capability to manufacture it. Moreover, at any given production plant, only a subset of all available production units may be accessible. Adding to the complexity, the location of a production plant further narrows down the options for distribution centers, as only a specific subset would be considered suitable for storing products. And finally, only certain distribution centers are considered to deliver products to the customers.\nSuch a typical structure is best represented by maps. We continue to work with an abstraction (sets $\\mathcal{I}$, $\\mathcal{J}$, $\\mathcal{K}$, $\\mathcal{L}$, and $\\mathcal{M}$) rather than a real world example to focus on the core of the sparsity issue. For example, a map $\\mathcal{IJK}$ is a subset of the Cartesian product $\\mathcal{I} \\times \\mathcal{J} \\times \\mathcal{K}$. Sparsity of $\\mathcal{IJK}$ means that $|\\mathcal{IJK}|\\ll|\\mathcal{I} \\times \\mathcal{J} \\times \\mathcal{K}|$. Unlike dictionaries where key and value are clearly specified, maps can be used with different key (combinations). So in one situation, the map provides the set elements $j$ and $k$ for a given $i$, in other situations the same map provides set elements $i$ for a given $j$ and $k$. We\u0026rsquo;ll explore how each of the modeling languages we\u0026rsquo;ve mentioned handles this sparse structure and complex mappings of indices, and how they perform with growing problem size with the following abstract and partial but typically structured model algebra:\n$$\\min F = 1 $$\n$$\\sum_{(j,k):(i,j,k) \\in \\mathcal{IJK}} \\ \\sum_{l:(j,k,l) \\in \\mathcal{JKL}} \\ \\sum_{m:(k,l,m) \\in \\mathcal{KLM}} x_{i,j,k,l,m} \\ge 0 \\hspace{1cm} \\forall \\ i \\in \\mathcal{I}$$\n$$x_{i,j,k,l,m} \\ge 0 \\hspace{1cm} \\forall \\ (i,j,k) \\in \\mathcal{IJK}, l:(j,k,l) \\in \\mathcal{JKL}, m:(k,l,m) \\in \\mathcal{KLM} $$\nGeneral-Purpose Programming Language vs Domain Specific Modeling Languages One of the key differences between GAMS and the other modeling frameworks we\u0026rsquo;ve mentioned is that GAMS is a domain-specific language, whereas the rest are based on general-purpose programming languages. This means that GAMS is specifically designed to represent algebraic models in a way that computers can execute the model and users are still able to easily read the model. In contrast, modeling frameworks like Pyomo, JuMP, and GurobiPy rely on general programming conventions to represent algebraic models as effectively as possible. 
Thanks to its domain-specific language, GAMS allows us to write the model using only a few lines of code that are very similar to the algebraic formulation.\nIf we use Pyomo to represent the same model, we end up with considerably more lines of code and nested operations. This difference can have a significant impact on the readability of the model and thus on the time and effort required to develop, test, and maintain complex optimization models. An excellent illustration of the simplicity and user-friendliness of GAMS as a modeling language can be found by comparing the variable declaration ($x_{i,j,k,l,m}$) syntax in GAMS with Pyomo\u0026rsquo;s syntax.\nmodel.x = pyo.Var( [ (i, j, k, l, m) for (i, j, k) in model.IJK for (jj, kk, l) in model.JKL if (jj == j) and (kk == k) for (kkk, ll, m) in model.KLM if (kkk == k) and (ll == l) ], domain=pyo.NonNegativeReals, ) vs\nPositive variable x(i,j,k,l,m); The reason for this significant difference in variable declaration between Pyomo and GAMS is that GAMS automatically takes care of generating variables only for the relevant combinations of indices based on the algebraic formulation, while in Pyomo, we need to carefully define the relevant combinations of variable indices ourselves. This feature is particularly useful when working with a large multidimensional index space, where generating all possible combinations of indices would be computationally expensive and unnecessary. GAMS quietly handles this task in the background, allowing us to focus on the formulation of the model.\nIsn\u0026rsquo;t more flexibility always better? Figure 1 presents the time required to generate a model instance for the exemplary mathematical model with Pyomo, as a function of problem size, specifically the growing size of set $\\mathcal{I}$. The figure demonstrates the model generation time for an implementation of the model where all variables are defined as the Cartesian product of all indices without carefully defining only the relevant variables. In contrast, Figure 1 also presents the model generation time for an implementation where only relevant variables are carefully defined. We report the minimum model generation time across multiple runs for every problem size data point. Comparing the model generation time for the two implementations emphasizes the importance of precise variable definition in mathematical optimization. The model generation time for the Cartesian product implementation increases drastically with problem size, while the carefully defined variable implementation maintains a moderate model generation time.\nFigure 1. Model generation time for Cartesian and carefully defined variable definitions.\nBut General-Purpose Programming Languages, like Python, are so much more flexible than Domain Specific Languages like GAMS While it\u0026rsquo;s true that general-purpose programming languages offer more flexibility and control, it\u0026rsquo;s important to consider the trade-offs. With general-purpose languages like Python and Julia, a straightforward implementation closely aligned with the mathematical formulation is often self-evident and easier to implement, read, and maintain, but suffers from inadequate performance. In attempting to balancing the simplicity of code representing a mathematical model and its performance, trade-offs become necessary.\nAs we\u0026rsquo;ve seen in the example of GAMS versus Pyomo, having more control sometimes means taking on more responsibility, such as manually generating only relevant variables. 
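To make this concrete, here is a small illustrative Python sketch (it is not taken from the benchmark code linked at the end of this post) that contrasts the two ways of building the index set of $x_{i,j,k,l,m}$. The set names follow the abstraction used above, and the map contents are placeholders:

# illustrative sketch only: dense vs. carefully defined index sets for x(i,j,k,l,m)
from itertools import product

I = [f"i{n}" for n in range(20)]
J = [f"j{n}" for n in range(20)]
K = [f"k{n}" for n in range(20)]
L = [f"l{n}" for n in range(20)]
M = [f"m{n}" for n in range(20)]

# placeholder maps; in practice these come from the model data
IJK = {("i1", "j3", "k7"), ("i2", "j5", "k1")}
JKL = {("j3", "k7", "l2")}
KLM = {("k7", "l2", "m9")}

# Cartesian approach: every combination becomes a variable index
dense = list(product(I, J, K, L, M))  # 20**5 = 3,200,000 tuples

# careful approach: keep only combinations supported by the maps
sparse = [
    (i, j, k, l, m)
    for (i, j, k) in IJK
    for (jj, kk, l) in JKL if (jj, kk) == (j, k)
    for (kkk, ll, m) in KLM if (kkk, ll) == (k, l)
]

print(len(dense), len(sparse))  # 3200000 vs. 1 for these placeholder maps

With these placeholder maps, only one out of 3.2 million index combinations actually carries a variable, which is the kind of gap that separates the two curves in Figure 1.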
Flexibility is also a double-edged sword. While it offers many different ways to accomplish a task, there is also the risk of implementing a solution that is not efficient. And determining the optimal approach is a challenging task in itself. All of the discussed modeling frameworks allow a more or less intuitive implementation of our example\u0026rsquo;s model, depending on personal taste. However, intuitive solutions do not always turn out to be efficient. With additional research and effort, it is possible to find alternative implementations that outperform the intuitive approach, as Figure 2 presents for JuMP.\nFigure 2. Compared model generation times for a low- and a high-performing implementation in JuMP.\nThe same principle that applies to JuMP also holds for modeling frameworks like GurobiPy and Pyomo with the underlying language Python. Figure 3 outlines the difference in intuitive versus optimized data structures for model representation with Pyomo and GurobiPy.\nFigure 3. Compared model generation times for a low- and a high-performing implementation in GurobiPy and Pyomo. As we\u0026rsquo;ve seen in Figures 2 and 3 with our Pyomo, JuMP, and GurobiPy examples, relying solely on intuition and a straightforward implementation may not always lead to the most efficient solution. It takes a significant amount of research and effort to find ways to optimize the model representation and improve its performance with respect to model generation time. While model and parameter tuning is always part of optimization model development, it should not obscure the mathematical foundation and should not stand in the way of maintenance and further development. Especially during the early development phase, it is crucial to have the ability to easily evaluate changes to the model and engage in rapid prototyping without the cumbersome burden of tuning with each modification. This is where domain-specific languages like GAMS shine. With its specialized syntax, GAMS provides the basis for built-in code optimization that, in modeling frameworks built on top of a general-purpose programming language, needs to be performed manually by the user.\nWhat about performance? Now that we have discussed how those languages are used, let\u0026rsquo;s compare their performance. In order to evaluate the performance of GAMS, Pyomo, GurobiPy, and JuMP, we conduct a study using a generated dataset for sets $\\mathcal{I}$, $\\mathcal{J}$, $\\mathcal{K}$, $\\mathcal{L}$, and $\\mathcal{M}$ with a cardinality of $N$ for set $\\mathcal{I}$ and a cardinality of $O$ for sets $\\mathcal{J}$, $\\mathcal{K}$, $\\mathcal{L}$, and $\\mathcal{M}$. Random multidimensional sets $\\mathcal{IJK}$, $\\mathcal{JKL}$, and $\\mathcal{KLM}$ are generated with a 5% chance of $(i,j,k)$, $(j,k,l)$, and $(k,l,m)$ being in $\\mathcal{IJK}$, $\\mathcal{JKL}$, $\\mathcal{KLM}$, respectively, for the full Cartesian product of $\\mathcal{I}\\times\\mathcal{J}\\times\\mathcal{K}$, $\\mathcal{J}\\times\\mathcal{K}\\times\\mathcal{L}$, and $\\mathcal{K}\\times\\mathcal{L}\\times\\mathcal{M}$. As we increase $N$, the cardinality of $\\mathcal{I}$ and $\\mathcal{IJK}$ also increases. We measure the time each language needs to generate and solve the presented model with a solver time limit of zero seconds, with each language working with the same data set. We report the minimum model generation time achieved by each language across a statistically relevant number of runs per data set.\n
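As a rough illustration of this data generation (the exact scripts are in the GitHub repository linked at the end of this post), the random maps could be sampled in Python along the following lines; the function name and the fixed seed are ours and only serve the example:

# illustrative sketch of the random map generation described above
from itertools import product
import random

def make_random_maps(N, O, density=0.05, seed=0):
    rng = random.Random(seed)
    I = [f"i{n}" for n in range(N)]
    J = [f"j{n}" for n in range(O)]
    K = [f"k{n}" for n in range(O)]
    L = [f"l{n}" for n in range(O)]
    M = [f"m{n}" for n in range(O)]
    # each tuple of the respective Cartesian product is kept with a 5% probability
    IJK = [t for t in product(I, J, K) if rng.random() < density]
    JKL = [t for t in product(J, K, L) if rng.random() < density]
    KLM = [t for t in product(K, L, M) if rng.random() < density]
    return I, J, K, L, M, IJK, JKL, KLM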
Figure 4. Performance comparison of the high-performance implementations in Pyomo, GurobiPy, JuMP, and GAMS. The left plot shows model generation time only, while the right plot shows model generation and solving with a solver time limit of zero seconds. Thus, the right plot also accounts for passing the model instance to the solver and retrieving the solution. According to the obtained results in Figure 4, there is a huge difference in performance for the introduced languages. While we observe linear growth for GurobiPy, JuMP, and Pyomo, the model generation time for GAMS does not increase as significantly.\nBut why does GAMS outperform Pyomo, JuMP, and GurobiPy? The reason for GAMS\u0026rsquo; superior performance in this example is the use of relational algebra. While other optimization software such as Pyomo, GurobiPy, and JuMP require iterating over all elements of a set, GAMS uses relational algebra to process complex queries efficiently. Relational algebra originates from database theory and allows complex database queries to be processed very efficiently without the need to iterate over all database entries. As a result, GAMS can handle large-scale optimization problems more efficiently and effectively than many other modeling languages.\nBut why do I care about model generation? When it comes to mathematical optimization, the significance of model generation time should not be underestimated. While it may be tempting to solely focus on the performance of the solver, the truth is that the time taken for model generation can greatly impact the overall efficiency and practicality of the optimization process. Admittedly, if the solver consumes the majority of the time, the relevance of model generation diminishes to some extent. However, feedback from our customers consistently emphasizes the importance of considering performance beyond solving alone. A notable example that highlights this perspective is conveyed by Abhijit Bora, who shared PROS Inc.\u0026rsquo;s experience in transferring their optimization model to GAMS.\nOur decision to reimplement our large optimization model using GAMS has yielded exceptional results, surpassing all previous modeling technologies. With an extraordinary improvement of 300%, this accomplishment holds immense significance for us. It unlocks the door to unprecedented possibilities in optimization. We can now embrace a 360-day horizon, a long-desired feature that was once deemed unattainable but has become a reality.\n\u0026ndash; Abhijit Bora, Senior Principal Software Engineer at PROS Inc.\nOther recent research also focuses on the efficiency of model generation. The paper Linopy: Linear optimization with n-dimensional labeled variables1 compares some open source modeling frameworks but unfortunately chose a dense model (the knapsack problem) as a benchmark problem. In our experience, dense models are extremely rare in practice and don’t really represent a challenge with respect to model generation. The ongoing study Computational Performance of Algebraic Modeling Languages Under Practical Use Cases at Carnegie Mellon University2 also analyzes different modeling frameworks with an emphasis on model generation performance.\nLet\u0026rsquo;s summarize GAMS has proven to be a powerful optimization tool due to its use of relational algebra and efficient handling of complex variable definitions. Its mathematical notation also allows for intuitive and readable model implementations. 
While the choice between a domain-specific language like GAMS and a general-purpose programming language package like Pyomo ultimately depends on the problem at hand, it\u0026rsquo;s important to consider the trade-offs between control, flexibility, and efficiency. With careful consideration and effort, either type of language can achieve optimal results.\nFor anyone interested you can find the full code used for this analysis in our GitHub repository.\nThis post has been updated on July 13, 2023. A previous version of this post included content (quotes from public forums3 4 and an acknowledgement) that may have conveyed the false impression that JuMP and Pyomo developers collaborated in creating this blog post or approved of its content. We want to clarify that this was not the case and have since removed this content. Hofmann, F., (2023). Linopy: Linear optimization with n-dimensional labeled variables. Journal of Open Source Software, 8(84), 4823, https://doi.org/10.21105/joss.04823 \u0026#160;\u0026#x21a9;\u0026#xfe0e;\nKompalli, S., Merakli, M., Ammari, B. L., Qian, Y., Pulsipher, J. L., Bynum, M., Furman, K. C., Laird, C. D. (2023). Computational Performance of Algebraic Modeling Languages Under Practical Use Cases. Carnegie Mellon University, Annual Review Meeting, March 6-7, 2023.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nhttps://discourse.julialang.org/t/performance-julia-jump-vs-python-pyomo/92044 \u0026#160;\u0026#x21a9;\u0026#xfe0e;\nhttps://stackoverflow.com/questions/76324121/is-there-a-more-efficient-implementation-for-pyomo-models-compared-to-the-curren \u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","excerpt":"In this blog post, we’ll provide a detailed comparison of GAMS with other popular modeling frameworks such as Pyomo, GurobiPy, or JuMP, highlighting some of the unique benefits and features that set GAMS apart from the competition.","ref":"/blog/2023/07/performance-in-optimization-models-a-comparative-analysis-of-gams-pyomo-gurobipy-and-jump/","title":"Performance in Optimization Models: A Comparative Analysis of GAMS, Pyomo, GurobiPy, and JuMP"},{"body":" As a company, GAMS has always been committed to developing solutions that meet our clients\u0026rsquo; needs. Since our founding in 1987, we have focused on creating tools that enable our clients to solve complex optimization problems efficiently. Our long history of customer-oriented development has led to the creation of innovative products such as GAMS MIRO and GAMS Engine.\nOne of the earliest examples of our client-focused approach came in the form of a request from Shell, one of our clients, in the late nineties. They wanted to develop a GAMS model for scheduling three of their oil refineries, but the current GAMS version at that time did not allow for the necessary nonlinear computations. Working closely with Shell, we identified their needs and developed the required functionality. This feature was then incorporated into the next GAMS release, making it accessible to the wider GAMS community.\nOur success with Shell was just the beginning. As we continued to work with clients, we identified new demands for additional functionality. We responded to these demands by developing new products such as GAMS MIRO and GAMS Engine, inspired by collaborations with clients such as the World Bank, the OECD, and Engie. 
These products were designed to meet specific needs, such as a GUI generator for GAMS models or a convenient solution to run GAMS in the cloud, and have become integral parts of our offering.\nToday, we continue to evolve our services to meet the changing needs of our clients. In recent years, we have noticed that clients are reaching out to us with requests that go beyond the scope of technical support. In response to the growing demand, we are delighted to unveil our new line of GAMS Consulting Services.\nOur GAMS Consulting Services are designed to provide premium tailor-made solutions to our clients. Our services include Model Development, Model Improvement, Model Deployment, and Training \u0026amp; Workshops. With our extensive experience in optimization modeling, we offer a range of specialized services to help our clients achieve their goals. To learn more about our GAMS Consulting Services and how we can help you optimize your operations and achieve your business objectives, please visit www.gams.com/consulting .\nAt GAMS, we believe that our success is directly tied to the success of our clients. We will continue to focus on customer-oriented development and providing the highest quality products and services to meet your needs. We look forward to working with you to help solve your optimization problems.\n","excerpt":"GAMS introduces Consulting Services, providing tailored solutions in Model Development, Improvement, Deployment, and Training. With a focus on customer success, GAMS continues to deliver high-quality products and services, helping clients optimize operations and achieve business objectives.","ref":"/blog/2023/06/gams-consulting-services-a-legacy-of-customer-oriented-development/","title":"GAMS Consulting Services: A Legacy of Customer-Oriented Development"},{"body":"Introduction In the ever-evolving landscape of the internet, websites come and go, reflecting the changing needs and priorities of their users. After two decades of serving as a hub for mathematical programming enthusiasts, the time has come to bid adieu to GAMS World . Founded with the noble vision of bridging the gap between academia and industry, this website made significant contributions in its prime, fostering the exchange of specialized knowledge and promoting collaboration. In this article, we reminisce about the purpose and achievements of GAMS World, explore its impact on the field of mathematical optimization, and understand the reasons behind its decommissioning.\nBridging the Gap: The Vision of GAMS World Launched 20 years ago, GAMS World aimed to address the challenges faced in the application of mathematical programming to real-world problems. Despite the significant advancements in algebra-based modeling systems, algorithms, and computer codes, the practical implementation of these tools fell short of expectations. The website recognized the need for specialized knowledge and domain-specific expertise in translating real-world problems into effective operational systems. It aimed to collect and disseminate such knowledge outside the established channels, in turn facilitating better collaboration between academia and industry.\nDisseminating Domain-Specific Information One of the key objectives of GAMS World was to provide a platform for the dissemination of domain-specific information that often remained inaccessible due to its unique content or form. 
While model structures and results were frequently published in academic and commercial papers, reproducing or utilizing these findings in different studies proved challenging. GAMS World sought to overcome these obstacles by offering well-focused and maintained services, enabling researchers and practitioners to share problems, solutions, and data. By doing so, the website played a vital role in encouraging the development and refinement of optimization algorithms.\nModel Libraries: Testing Optimization Software A significant contribution of GAMS World was its publication of model libraries, serving as valuable resources for testing optimization software. Two decades ago, software quality assurance (SQA) was an overlooked aspect in the realm of mathematical optimization software. The website played a pivotal role in raising awareness about the importance of SQA and its impact on the reliability and effectiveness of optimization systems. GAMS World\u0026rsquo;s model libraries provided a wide range of test cases and models, helping developers refine their software implementations. It also advocated for a common format, known as \u0026ldquo;scalar models,\u0026rdquo; facilitating seamless translation between different software systems and enabling collaborative benefits among competing platforms.\nThe Changing Landscape and Farewell As the years passed, the field of optimization software underwent significant transformation. The importance of software quality assurance became well-established, and abundant models and data became readily available. The heyday of GAMS World gradually receded, signaling the need for change. Additionally, the website\u0026rsquo;s design, reminiscent of the Y2K era, became outdated, lacking the modern appeal expected by users in today\u0026rsquo;s digital age. These factors, combined with the evolving landscape of mathematical optimization, ultimately led to the decision to decommission the GAMS World website. Additionally, it\u0026rsquo;s worth mentioning that efforts were made to salvage and preserve the still valuable collection of models that had accumulated over the years. Recognizing their enduring significance, GAMS World\u0026rsquo;s legacy lives on through the publication of these models on GitHub under the repository name GAMS-dev/gamsworld . By making these models freely available to the optimization community, GAMS-dev ensures that the wealth of knowledge and the practical applications contained within them can continue to benefit researchers, developers, and practitioners worldwide. This act of preserving and sharing the models on a widely accessible platform reflects the spirit of collaboration and knowledge sharing that GAMS World championed throughout its existence.\nGratitude and Farewell As we bid farewell to GAMS World, it is essential to express gratitude to the countless visitors, contributors, and editorial members who shaped and nurtured this remarkable platform over the years. Their collective efforts fostered collaboration, disseminated valuable knowledge, and promoted advancements in the field of mathematical programming. GAMS World will always hold a special place in the memories of those who found inspiration and guidance within its virtual walls.\n","excerpt":"In the ever-evolving landscape of the internet, websites come and go, reflecting the changing needs and priorities of their users. 
After two decades of serving as a hub for mathematical programming enthusiasts, the time has come to bid adieu to GAMS World.","ref":"/blog/2023/06/farewell-to-gams-world-a-bridge-between-academia-and-industry/","title":"Farewell to GAMS World - A Bridge between Academia and Industry"},{"body":"","excerpt":"","ref":"/authors/jharou/","title":"Julien Harou"},{"body":"","excerpt":"","ref":"/authors/mbasheer/","title":"Mohammed Basheer"},{"body":" Introduction Can GAMS mathematical models be coupled with Artificial intelligence (AI) to find solutions to complex problems? This blog post tackles this question and provides some starter code and application examples.\nMathematical programmes built in GAMS are good at finding optimal values for endogenous variables for problems with single scenarios. For an economy-wide CGE model, for example, those variables could be sectoral output, factor supply, trade, or prices. However it is more difficult for mathematical programmes to find a strategy based on exogenous parameters, such as tax/subsidy rates and government transfers, in an efficient way across many scenarios.\nTo address this problem, Mohammed Basheer, Victor Nechifor, Alvaro Calzadilla, Julien Harou and others have developed a framework that links GAMS models to multi-objective evolutionary search algorithms (MOEAs) and other AI algorithms. The approach is potentially of interest to other GAMS users.\nThe researchers used GAMS to reliably simulate large systems with many linear or non-linear decision variables and then used an external ‘black box’ solver to solve for the exogenous parameters. The black box solvers of choice were multi-objective evolutionary algorithms (MOEAs), many of which are available as open-source software and can be very effective.\nIn a final twist, the group linked their pipeline to machine learning to help make sense of the Pareto-front (the multi-dimensional efficient solutions) produced by the MOEA.\nTechnical Implementation The MOEA was accessed from a Python open-source library called Platypus . Platypus is a framework that focuses on MOEA algorithms and provides access to them through a high-level Application Programming Interface (API). Several MOEA algorithms can be accessed through Platypus, including NSGA-II, NSGA-III, MOEA/D, IBEA, Epsilon-MOEA, SPEA2, GDE3, OMOPSO, SMPSO, and Epsilon-NSGA-II.\nFigure 1 shows the technical implementation of the integration of GAMS-based models, other Python-based models, and the MOEA algorithm. GAMS enables accessing and modifying models through a Python API . The API enables modifying model parameters, accessing variable values, and solving GAMS models through a Python package.\nFig. 1 Integration of GAMS-based models and MOEA blackbox optimization GAMS model access in Python Exposing a GAMS model to python requires creating a GAMS model instance through the API, which enables running and modifying the model. 
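The classes used in the snippets below (GamsWorkspace, GamsCheckpoint, GamsModelInstance) are part of the GAMS Python API. As a minimal sketch, and assuming a recent GAMS installation with the Python API available (the exact package layout may differ between GAMS releases), they would typically be imported as:

from gams import GamsWorkspace, GamsCheckpoint, GamsModelInstance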
Creating a model instance in Python was performed through the following steps:\nworkspace = GamsWorkspace(\u0026#34;directory containing the GAMS model\u0026#34;) model = workspace.add_job_from_file(\u0026#34;path to the GAMS model file\u0026#34;) opt = workspace.add_options() opt.all_model_types = \u0026#34;solver name\u0026#34; checkpoint = GamsCheckpoint(workspace = workspace, checkpoint_name = \u0026#34;checkpoint name\u0026#34;) model.run(opt, checkpoint) model_instance = GamsModelInstance( checkpoint = checkpoint, modelinstance_name = \u0026#34;add instance name\u0026#34;) The “model_instance” enables running and modifying the GAMS model. All variables that need to be accessed are declared as follows:\nvar1 = model_instance.sync_db.add_variable( identifier = \u0026#34;variable name in GAMS\u0026#34;, dimension = \u0026#34;integer representing variable dimension\u0026#34;) Similarly, parameters that need to be modified during the simulation or by the MOEA were declared as follows:\npar1 = model_instance.sync_db.add_parameter( identifier = \u0026#34;parameter name in GAMS\u0026#34;, dimension = \u0026#34;integer representing parameter dimension\u0026#34;) The last step in creating the model instance is instantiating the model instance:\nmodel_instance.instantiate( model_definition = \u0026#34;add the model definition\u0026#34;, modifiers = [par1, para2,...], options = opt) Now that the model instance is created, the GAMS model can communicate with other Python-based models. This was performed by modifying values coming from other models to the GAMS model as follows, e.g.:\npar1.add_record().value = \u0026#34;parameter value\u0026#34; Once inputs to GAMS are set, the model can be solved as follows:\nmodel_instance.solve() Also, values of some variables from the GAMS model were extracted and provided to the other models as follows, e.g.:\nvalue_to_provide_to_other_models = model_instance.sync_db[\u0026#34;variable name in GAMS\u0026#34;].find_record() MOEA Wrapper in Python MOEA wrapper is a Python class designed to create the MOEA optimization problem and facilitate communication between the MOEA and the integrated simulation models. As Figure 1 shows, the wrapper collects the values of the objective values at the end of each run of the integrated models and provides the values of the decision variables to the integrated models following each MOEA evaluation. The wrapper is provided with an instance of the integrated models. The instance is of a class that includes the following methods:\nget_variables: a method that provides the names and upper and lower bounds of the MOEA decision variables. get_objectives: a method that provides the names of the MOEA objectives. get_constraints: a method that provides the names of the MOEA constraints. apply_variables: a method that applies values to the decision variables get_objective_values: a method that extracts the values of the MOEA objectives from the integrated models. 
run: a method for running the integrated models for a single simulation The wrapper is structured as follows:\nclass wrapper(object): def __init__(self, integrated_models, *args, **kwargs): super(wrapper, self).__init__(*args, **kwargs) self.integrated_models = integrated_models variables = list(integrated_models.get_variables()) objectives = list(integrated_models.get_objectives()) constraints = list(integrated_models.get_constraints()) self.problem = platypus.Problem( len(variables), len(objectives), len(constraints)) self.problem.function = self.evaluate self.problem.wrapper = self for i, var in enumerate(variables): self.problem.types[i] = platypus.Real(var.lower_bounds, var.upper_bounds) for i, constraint in enumerate(constraints): self.problem.constraints[i] = platypus.Constraint( constraint.operator, value = constraint.limit) def evaluate(self, solution): self.integrated_models.apply_variables(solution) self.integrated_models.run() objectives = [] for objective in self.integrated_models.get_objective_values(): objectives.append(objective) constraints = [] for constraint in self.integrated_models.get_constraint_values(): constraints.append(constraint) return objectives, constraints Finally, to access and run the MOEA optimization:\nmoea_wrapper = wrapper(integrated_models=\u0026#34;instance of the integrated models\u0026#34;) An example run using the NSGAIII algorithm is performed as follows for a maximum number of iterations equal to n:\nmoea_algorithm = platypus.NSGAIII(moea_wrapper.problem) moea_algorithm.run(n) Please note that all the code blocks above are for demonstration and would need to be largely adapted to the use case and overall code design.\nMachine learning applied to the MOEA results To help understand which decision variables have a high influence on the objective values, machine learning was applied to the outputs of the MOEA. The MOEA optimization process produces decision variable values and the corresponding objective values for each iteration between the MOEA algorithm and the integrated simulators. These values are collected and used to train a machine learning model to derive the level of influence of each of the decision variables on the optimization objectives. We used the Random Forest Regression Machine Learning algorithm through the scikit-learn library in Python . In this case, each objective value is used as a target to train a Machine Learning model, while the decision variable values were used as features. First, the data were split into testing and training data using the “train_test_split” method of scikit-learn (from sklearn.model_selection import train_test_split) as follows:\nX_train, X_test, Y_train, Y_test = train_test_split( features, targets, test_size = \u0026#34;splitting proportion of the data 0-1\u0026#34;) The next step is to train a random forest model (from sklearn.ensemble import RandomForestRegressor) based on the training data, as follows:\nmodel = RandomForestRegressor( max_depth=\u0026#34;max tree depth\u0026#34;, n_estimators = \u0026#34;number of trees\u0026#34;) model.fit(X_train, Y_train) importances = model.feature_importances_ It is important to also examine how the model performs with the testing data compared to the training data to avoid overfitting or underfitting the data. For example, the r2 can be calculated through the \u0026ldquo;r2_score\u0026rdquo; method of scikit-learn (from sklearn.metrics import r2_score) as follows:\nY_predicted_train = model.predict(X_train) Y_predicted_test = model.predict(X_test) R2_train = r2_score(y_true=Y_train,y_pred=Y_predicted_train) R2_test = r2_score(y_true=Y_test,y_pred=Y_predicted_test) Example Uses It is best to explain the benefits of the approach by example. 
Below we describe briefly three different application areas where GAMS-AI linkages have been successfully applied.\nResearch Area 1: Water Resource Systems Estimating the economic value of water can help planners manage it better. This application considered California’s largest inter-tied water resource system - the California Central Valley and surrounding mountain ranges, comprised of 30 reservoirs, 22 aquifers, and 51 urban and agricultural water demand sites. Water there has multiple economic uses: farmers can grow plants, industry can produce, hydropower plants can generate electricity, etc. At the same time, there are costs associated with operating the system: groundwater and surface water transfers require using energy-intensive pumps. Costs and benefits tend to be non-linearly dependent on the water levels in the reservoirs of the system. In addition to allocating and distributing water, planners have to manage for uncertainty. Future precipitation and temperature-dependent evaporation are unknown, yet they determine how much stored water will be needed in later years. Given these uncertainties, water managers have to decide on the release for this year’s uses vs. how much water to save.\nThis study estimates \u0026ldquo;carry over storage value functions\u0026rdquo;, i.e., curves quantifying stored water’s value. In the work published open-access in Water Resources Research , the authors link multiple scenario simulations done with GAMS models to a multi-objective evolutionary algorithm (MOEA) to find the valuation curves of each surface reservoir’s storage that jointly maximize long-term economic benefits of water use in California. Knowing the regional economic value of leaving water in a large dam for subsequent potentially dry years helps water managers and planners make better infrastructure operation decisions.\nResearch Area 2: Economic Modeling Countries around the world plan policies aiming to achieve sustainable development goals (SDGs) and improve national-level performance on social, economic, and environmental dimensions. However, in reality, tradeoffs might exist between different goals. For instance, reducing CO2 emissions can often have tradeoffs with poverty reduction and economic growth. Designing national economic policies aimed at achieving multiple SDG targets at once is a complex problem due to the tradeoffs involved. In a paper published in Nature Communications , the authors show that artificial intelligence (AI)-driven search and machine learning can be useful in navigating these tradeoffs. This application of GAMS and AI combined an economy-wide model (developed in GAMS) with a multi-objective evolutionary algorithm to search for exogenous national-level economic interventions that balance the performance across multiple SDG targets at once and identify the lowest possible tradeoffs. This is achieved through thousands of iterations between the economy-wide model and the search algorithm, as explained earlier in the technical implementation. The outcome of each iteration is a set of 47 exogenous economic interventions (changes to direct, production, and sales taxes/subsidies and household transfers), and the associated performance is measured by SDG indicators (i.e., the objectives). The SDG objectives included in the study are maximizing GDP growth and household income, minimizing income inequalities, and minimizing CO2 emissions. This framing is applied to the case of the Egyptian economy. 
Results of the study show that achieving sustainable development across multiple performance objectives requires a combination of exogenous interventions that are not necessarily intuitive due to the non-linear nature of economies. It was found that a compromise solution for the Egyptian case that improves performance across multiple objectives is achievable through tweaks to 47 individual exogenous parameters. Machine learning was used to understand how different economic interventions considered (i.e., the 47 exogenous decision variables) in the search process can influence sustainability performance. The Machine Learning analysis showed that among the 47 decision variables, changing the tax/subsidy on petroleum sales is the most effective policy instrument for achieving multiple SDGs, followed by producer tax/subsidy on private services, and then direct government transfers to households. The open-access published paper provides more details on the study and the results.\nResearch Area 3: Integrated economic and water resources modeling The Nile is one of the longest rivers in the world and is geographically shared between 11 countries. Although the Nile has a large basin area that covers around 10% of the African continent, its streamflow is lower than other rivers that have a similar or smaller basin area, such as the Congo, the Niger, and the Mississippi. The limited water resources of the Nile River and the growing demand for water resources in the Nile riparian countries created political tensions. An example of such tensions is the ongoing disagreements between Ethiopia, Sudan, and Egypt on the construction and operation of the Grand Ethiopian Renaissance Dam. The dam is currently under construction in Ethiopia and, when completed, is expected to double Ethiopia’s electricity generation and improve electricity access in one of the poorest regions globally. However, neighboring Sudan and Egypt are worried that the dam would change the quantity and quality of river flow to them and negatively affect their use of the Nile. This represents a problem with tradeoffs between some objectives of the countries. A recent study published in Nature Climate Change used the combined simulation and AI framework described in Figure 1 to navigate the tradeoffs and discover solutions that can balance the performance between the three countries.\nThe study integrates economy-wide models of Ethiopia, Sudan, and Egypt (developed in GAMS), a water resources system model of the Nile (developed using Pywr in Python), and AI search and Machine Learning Algorithms. This combination enables searching for ways to manage the Grand Ethiopian Renaissance Dam (i.e., decision variables) considering economy-wide objectives (e.g., GDP of each of the three countries) and engineering objectives (e.g., hydropower generation and irrigation water supply) simultaneously. Results reveal that, especially with extreme wet or dry projections, implementing cooperative adaptive strategies for managing the Grand Ethiopian Renaissance Dam brings about advantages for Ethiopia, Sudan, and Egypt in terms of both economic and water management aspects. Nonetheless, if the adaptive management plans focus solely on maximizing economic benefits for one country, it leads to negative consequences for at least one of the remaining two countries. 
The outputs of the search process were then used to train Machine Learning models to understand how different decisions in managing the Grand Ethiopian Renaissance Dam affect the performance objectives of each of the three riparian countries. The Machine Learning analysis shows that the Egyptian GDP is most influenced by water releases during droughts, and the Sudanese and Ethiopian GDPs are more influenced by the hydropower generation targets of the dam. More details about the study can be found through the following link to the open-access paper: https://www.nature.com/articles/s41558-022-01556-6 .\n","excerpt":"GAMS is good at finding decision variable values that provide the optimal solution to a problem for a given set of scenario parameters. It is more difficult to find a strategy that can test combinations of potentially many parameters in an efficient way to identify optimal scenarios.","ref":"/blog/2023/06/supercharging-gams-models-by-linking-to-artificial-intelligence-heuristic-search-and-machine-learning/","title":"Supercharging GAMS models by linking to Artificial intelligence - Heuristic search and machine learning"},{"body":"The INFORMS Business Analytics Conference of this year took place in Aurora, Colorado. Represented by Adam, Logan, Steve, and Bau, the GAMS team attended the event with the aim of meeting other professionals from different companies and gaining knowledge about the latest advancements in mathematical optimization and modeling.\nWe were delighted to have the basketball hoop again and our colleagues and friends visited us at our booth for a little game. Thanks to the constant flow of visitors, we were able to connect with many intriguing people at the conference.\nGAMS is a frequent participant at INFORMS conferences, which presents an opportunity for GAMS users to exchange their experiences and insights and stay up-to-date with the newest trends in optimization and mathematical modeling.\nAdam held a technology tutorial on GAMS ENGINE and GAMS Transfer.\nOur Technical Workshop Model deployment and data wrangling with GAMS Engine and GAMS Transfer Presented by: Adam Christensen\nThe right tools help you deploy your GAMS model and maximize the impact of your decision support application.\nGAMS Engine is a powerful tool for solving GAMS models, either on-prem or in the cloud. Engine acts as a broker between applications or users with GAMS models to solve and the computational resources used for this task. Central to Engine is a modern REST API that provides an interface to a scalable, containerized system of services, providing API, database, queue, and a configurable number of GAMS workers. GAMS Engine is available as a standalone application, or as a Software-As-A-Service solution running on AWS.\nGAMS Transfer is an API (available in Python, Matlab, and soon R) that makes moving data between GAMS and your computational environment fast and easy. By leveraging open source data science tools such as Pandas/Numpy, GAMS Transfer is able to take advantage of a suite of useful (and platform independent) I/O tools to deposit data into GDX or withdraw GDX results to a number of data endpoints (i.e., visualizations, databases, etc.).\n","excerpt":"This years INFORMS Business Analytics Conference was held in Aurora Colorado. 
The GAMS team, represented by Adam, Logan, Steve and Bau, went to Aurora to meet with colleagues from other companies and to have interesting talks and presentations.","ref":"/blog/2023/04/gams-at-the-informs-in-colorado/","title":"GAMS at the INFORMS in Colorado"},{"body":"The World Bank is a global institution that provides financial and technical assistance to developing countries around the world. One of its primary areas of focus is the development of infrastructure, including electricity generation and distribution. To support this work, the World Bank has developed the Electricity Planning Model (EPM), a sophisticated tool for modeling the energy sector in developing countries.\nRecently, the World Bank partnered with GAMS to improve the EPM. The project involved several key aspects, including model improvement, model deployment, and training and workshops.\nModel Improvement: The first phase of the project involved a comprehensive review of the EPM. GAMS worked closely with the World Bank team to understand the model’s strengths and weaknesses and identify areas for improvement. Based on this analysis, GAMS revised the EPM and updated it with the latest GAMS features, resulting in significant performance improvements (improvement factors by time category): GAMS Time 26.5x, Model Generation 10.0x, Reporting Time 36.1x.\nModel Deployment: The second phase of the project focused on model deployment. GAMS worked with the World Bank team to make the EPM suitable for cloud environments. This involved the development of a GAMS MIRO application for the EPM which was customized to meet the exact needs of the World Bank, including custom renderers and input widgets, extensive error checks, powerful pivot tables, and other features to improve usability. The MIRO app runs as an interactive server application accessible to all members of the modeling group. The World Bank team can now utilize the app to work with the EPM and run simulations on various scenarios, enabling them to make informed decisions on energy sector development. Fig 1: EPM GAMS MIRO Application - Input screen and Dashboard of the EPM GAMS MIRO application\nTraining \u0026amp; Workshops: The third aspect of the project involved customized workshops on topics according to the World Bank’s needs, aimed at enhancing the team’s understanding of the new features and optimizing their use of the EPM. Among other things, GAMS provided training on how to use the updated model, as well as how to deploy and scale it in the cloud. This training was crucial in ensuring that the World Bank team can fully leverage the new model setup.\nAnother result of this collaboration: The World Bank now uses GAMS Engine SaaS for their daily model calculations, allowing them to switch between different instance sizes easily. 
This allows the Bank to meet its energy modeling needs efficiently and cost-effectively, further enabling it to provide reliable and affordable electricity to all people worldwide.\n","excerpt":"\u003cp\u003eThe \u003ca href=\"https://www.worldbank.org/en/home\" target=\"_blank\"\u003eWorld Bank\u003c/a\u003e\n is a global institution that provides financial and technical assistance to developing countries around the world.\nOne of its primary areas of focus is the development of infrastructure, including electricity generation and distribution.\nTo support this work, the World Bank has developed the \u003ca href=\"https://github.com/worldbank/EPM\" target=\"_blank\"\u003eElectricity Planning Model\u003c/a\u003e\n (EPM),\na sophisticated tool for modeling the energy sector in developing countries.\u003c/p\u003e","ref":"/consulting/woldbank/","title":"Power Systems Planning at the World Bank"},{"body":"Mathematical optimization is a powerful tool that can be applied in various industries to improve efficiency, reduce costs, and enhance overall performance. One recent example of the successful use of optimization techniques can be seen in the brewing industry. A team of researchers from the University of Hamburg around Prof Knut Haase has developed a system that supports the Swiss Feldschlösschen brewery in producing 220 finished products from 100 semifinished products.\nThe brewing industry has faced several challenges in recent years, with customer demand changing rapidly. For example, there is now a growing demand for alcohol-free beer, and consumers are seeking new and diverse tastes. These changing trends have made planning the brewing process more complex and challenging for breweries.\nTo address these challenges, the team from the University of Hamburg developed an optimization system that supports breweries in planning their production processes. The system is designed to help Feldschlösschen produce a large variety of beers while ensuring efficient use of resources and minimizing costs.\nThe system is implemented using GAMS / CPLEX and is fully integrated into the company\u0026rsquo;s ERP system. With the new optimization system in place, the brewery can significantly reduce the manual planning required for operational, strategic, and tactical planning. The system provides practical production schedules that are guaranteed to meet customer demand while minimizing costs.\nThe full story can be accessed on the publisher website. ","excerpt":"Mathematical optimization can be applied in various industries. One recent example can be seen in the brewing industry. A team of researchers from the University of Hamburg has developed a system that supports the Feldschlösschen brewery in producing 220 finished products from 100 semifinished products.","ref":"/blog/2023/03/optimization-in-the-brewing-industry/","title":"Optimization in the brewing industry"},{"body":"TIMES Cloud Service - Solving TIMES Models in the Cloud 2021-2023 The TIMES Cloud Service, based on the GAMS Engine technology, has been successfully launched in April 2021. It accepts job submissions from various clients such as GAMS Studio or the well established TIMES front end VEDA , and provides a user-friendly web interface for job submission and administration. Jobs are placed in a queue and assigned to an available GAMS worker for processing, with results made available to the user. 
Fig 1: TIMES Cloud Service - Roles and components. The TIMES Cloud service provides a way to solve TIMES models on modern scalable cloud compute infrastructure and is running on the AWS Elastic Cloud. The service can be accessed from various clients such as Veda, Veda-Online, the TIMES MIRO App, GAMS Studio and the GAMS Engine web user interface.\nIn December 2021, the TIMES Cloud Service was migrated from a dedicated server to the AWS Elastic Cloud to provide users with powerful computing resources (up to 2TB of RAM) and practically unlimited parallel jobs. The service is centrally covered by ETSAP, lowering the upfront costs and increasing the accessibility of the TIMES modelling tools. The TIMES Cloud Service enhances the openness of the TIMES model generator and associated software, making it more accessible to a wider audience.\nTIMES MIRO App - Deploying TIMES as a Web Application 2021 The objective of this project was to deploy a complex energy system optimization model developed in GAMS (The General Algebraic Modeling System) as a web application. The TIMES model is widely used for energy system analysis and optimization. The challenge was to provide easy access to the model for policy-makers, researchers, and other stakeholders without requiring them to have expertise in GAMS or the TIMES model.\nWe decided to create a web application for the TIMES model based on GAMS MIRO . MIRO provides a simple, yet powerful, framework for deploying GAMS models as web applications. It takes care of all the technical details involved in deploying a GAMS model on the web, allowing the user to focus on the model\u0026rsquo;s functionality and user interface.\nThe project was successful in achieving the goal of creating a user-friendly web application for the TIMES model. The app allows users to upload input data, select model parameters, visualize the results, and compare scenarios. Fig 2: Scenario Compare Mode - Comparison of four scenarios in MIRO\u0026rsquo;s Pivot View\nThanks to the flexibility of MIRO, the TIMES MIRO app can be used on a local computer or with a GAMS Engine backend. Heavy computations can be easily outsourced to the cloud. The app has been used by researchers and policymakers to analyze different energy scenarios and to make informed decisions. The app has received positive feedback for its user-friendly interface, accuracy, and scalability. The source code of the app was published on GitHub under the MIT open source license.\n","excerpt":"\u003ch2 id=\"times-cloud-service---solving-times-models-in-the-cloud\"\u003eTIMES Cloud Service - Solving TIMES Models in the Cloud\u003c/h2\u003e\n\u003ch3 class=\"text-muted\" id=\"2021-2023\"\u003e2021-2023\u003c/h3\u003e\n\u003cp\u003eThe \u003ca href=\"https://www.iea-etsap.org/index.php/etsap-tools/model-generators/times\" target=\"_blank\"\u003eTIMES\u003c/a\u003e\n Cloud Service, based on the \u003ca href=\"/sales/engine_facts/\" target=\"_blank\"\u003eGAMS Engine\u003c/a\u003e\n technology, has been successfully launched in April 2021. It accepts job submissions from various clients such as GAMS Studio or the well established TIMES front end \u003ca href=\"https://iea-etsap.org/index.php/etsap-tools/data-handling-shells/veda\" target=\"_blank\"\u003eVEDA\u003c/a\u003e\n, and provides a user-friendly web interface for job submission and administration. 
Jobs are placed in a queue and assigned to an available GAMS worker for processing, with results made available to the user.\n\u003cfigure\u003e\u003cimg src=\"/consulting/etsap/times_cloud_service_roles_components.png\"\n alt=\"The TIMES Cloud service provides a way to solve TIMES models on modern scalable cloud compute infrastructure and is running on the AWS Elastic Cloud. The service can be accessed from various clientssuch as Veda, Veda-Online, the TIMES MIRO App, GAMS Studio and the GAMS Engine web user interface.\" width=\"100%\"\u003e\u003cfigcaption\u003e\n \u003ch4\u003eFig 1: TIMES Cloud Service - Roles and components\u003c/h4\u003e\u003cp\u003eThe TIMES Cloud service provides a way to solve TIMES models on modern scalable cloud compute infrastructure and is running on the AWS Elastic Cloud. The service can be accessed from various clientssuch as Veda, Veda-Online, the TIMES MIRO App, GAMS Studio and the GAMS Engine web user interface.\u003c/p\u003e","ref":"/consulting/etsap/","title":"Deployment of the TIMES Model Generator"},{"body":" Model Development Teaming up with your industry-specific experts, our experienced PhD.-level optimization specialists craft innovative solutions that precisely meet the requirements of your business. Our tailor-made optimization solutions deliver a high business impact and improve your business's decision-making abilities. We start with analyzing the decision problem in collaborative meetings and interviews. Focusing on the long-term business goals of your company, we define objectives and business rules that describe your decision problem. Next, we develop your tailor-made optimization model using state-of-the-art modeling and solver technology. Working in close partnership with your business experts, we thoroughly validate the results and accompany you during the launch of your new decision support tool with customized training and workshops. Benefits: Tailor-made decision support, developed by our team of optimization experts No, or only limited knowledge of mathematical modeling required on the part of your team Model Improvement Are you experiencing issues with performance, maintainability, or usability of your GAMS model? Our PhD-level optimization specialists can help you analyze and improve your existing model, ensuring that it meets the specific requirements of your business while providing optimal performance. Our experts are experienced in dealing with highly complex models that have grown over decades in such a way that maintainability becomes a problem. Use our model improvement services - from small adjustments to complete overhaul - to make your GAMS model fast and future-proof. Benefit from our expertise and experience and collaborate with our team of experts to select the best-performing solvers for your problem and tune them to maximize the performance even further refactor your code and improve performance, readability, and maintainability extend your existing model to incorporate new objectives or additional business rules Benefits: Better performance, better maintainability, better user experience. Model Deployment Do you have a GAMS model that is complicated for users to operate due to the lack of a graphical user interface (GUI)? Or would you like to outsource the execution of your GAMS jobs to centralized compute resources? With GAMS MIRO and GAMS Engine , we provide the tools to efficiently deploy your GAMS-based optimization applications for your users! 
Share your needs and requirements for a graphical user interface with our team of experts who will design and develop a web application based on GAMS MIRO as a front end to your model. We can cover everything from simple table views to highly customized dashboards with tailor-made charts to visualize your model inputs and outputs. Once your model has been turned into a MIRO application, our experts will be happy to help you select and implement a suitable deployment option, from 100% local to 100% in the cloud. Whether you use MIRO or operate your model without a GUI, with GAMS Engine you can run your models on centralized compute resources, either on-premise or in the cloud. Let us know your requirements regarding the desired type of usage, and our experts will work with you to make your model ready for GAMS Engine. From implementing customized clients to selecting appropriate hardware resources to setting up MIRO server solutions with our cloud-based GAMS Engine SaaS back-end, we can help you overcome all challenges. Training and Workshops Improve your teams' skills with our training and workshops on optimization using the GAMS tool stack. Our optimization specialists will provide on-site or online training customized to your specific learning requirements. Popular training includes: GAMS data exchange Best practices modeling with GAMS Parallelization Performance Profiling and Performance Improvements GAMS APIs GAMS and Python Deployment of GAMS Models ","excerpt":"\u003cdiv class=\"accordion\" id=\"accordionExample\"\u003e\n \u003cdiv class=\"card\"\u003e\n \u003cdiv class=\"card-header\" id=\"headingOne\"\u003e\n \u003ch2 class=\"my-0 accordion-header\"\u003e\n \u003cbutton class=\"btn btn-link btn-block text-left\" type=\"button\" data-toggle=\"collapse\" data-target=\"#collapseOne\" aria-expanded=\"true\" aria-controls=\"collapseOne\"\u003e\n \u003ci class='mr-3' data-feather='chevron-right'\u003e\u003c/i\u003e\n \u003cstrong\u003e\n Model Development\n \u003c/strong\u003e\n \u003c/button\u003e\n \u003c/h2\u003e\n \u003c/div\u003e\n \u003cdiv id=\"collapseOne\" class=\"collapse show\" aria-labelledby=\"headingOne\" data-parent=\"#accordionExample\"\u003e\n \u003cdiv class=\"card-body\"\u003e\n\n\u003cdiv class=\"card mb-3 border-0\"\u003e\n \u003cdiv class=\"row no-gutters\"\u003e\n \u003cdiv class=\"col-lg-5\"\u003e\n \u003cdiv class=\"card-body\"\u003e\n \u003cimg src=\"./development.jpg\" class=\"card-img\" alt=\"...\"\u003e\n \u003c/div\u003e\n \u003c/div\u003e\n \u003cdiv class=\"col-lg-7\"\u003e\n \u003cdiv class=\"card-body\"\u003e\n\n \u003cp class=\"card-text\"\u003e Teaming up with your industry-specific experts, our experienced PhD.-level optimization specialists craft innovative solutions that precisely meet the requirements of your business. Our tailor-made optimization solutions deliver a high business impact and improve your business's decision-making abilities. \u003c/p\u003e","ref":"/consulting/","title":"Consulting Services"},{"body":"We were so happy about our Christmas party this year. Almost as a GAMS tradition we met again in early November at HENKs Kuechen.bar in Braunschweig. Also this year it was possible to welcome our colleagues from the US office Lleny, Steve and Adam in Braunschweig. Even though not every colleague was able to join the celebration, we had a great time and the team is growing year by year. 
With cooking together, good conversations and a fading evening at the bar with good drinks, it was again a great evening for all and we are looking forward to continue this GAMS tradition.\nHave a look at some pictures from the evening.\nHappy Christmas from the GAMS teams!\n\u0026times; Previous Next Close ","excerpt":"This years GAMS christmas party was great. The whole team was happy to attend and it was even possible to have some colleagues from the US at our party. It has become a kind of tradition to go to Henks Küchen.bar in Braunschweig to cook together.","ref":"/blog/2022/11/the-2022-gams-christmas-celebration/","title":"The 2022 GAMS Christmas Celebration"},{"body":"The field of mathematical solvers has seen tremendous improvements in the last decades. LP / MIP solvers, in particular, are now heavily dominated by companies that specialize in the continuous improvement of the algorithms. These commercially driven improvements are important for customers, who in return enjoy ever increasing performance. In the age of big data this is vital for many industries, where millions of dollars can be saved by using performant solver technology.\nIn contrast to the industry-dominated LP / MIP field, MINLP solver technology is much more attractive to academic researchers who can still build successful university careers around this field. The competition between different groups is friendly, and many of the advances are published in the academic literature, driving the whole field forward.\nGAMS has also had an interest in MINLP technology early on. In fact, GAMS / DICOPT was the very first modeling-language / solver-combination able to solve MINLPs. Our team also established a comprehensive library of MINLP models1, ranging from simple text-book examples all the way to large industry models with many thousands of variables. Along with this, we released a conversion tool, which enabled researchers to convert library models from GAMS to other languages such as AMPL, LINGO, LGO and MINOPT. The library used to be part of \u0026ldquo;GAMS-World\u0026rdquo;, but has now taken on a life of its own at https://minlplib.org/ and continues to be a valuable resource to people in the field.\nThis fall we have the opportunity to celebrate two occasions related to some of the most well known MINLP solvers coming out of the academic arena: BARON (now maintained by The Optimization Firm ), and SCIP . The contributions of BARON\u0026rsquo;s inventor Nick Sahinidis to advancements in the field have been recognized this October by his induction to the National Academy of Engineering Class of 2022, an honor he shares with 132 awardees from other fields important to society, such as vaccine development, renewable energy technologies, or space flight. The SCIP team on the other hand celebrated the solver\u0026rsquo;s 20th anniversary with the \u0026ldquo;Let\u0026rsquo;s SCIP it!\u0026rdquo; workshop in early November. With the event they have changed the SCIP license terms to the Apache 2.0 license, which makes the solver fully open source.\nMINLP Solver Performance Gains As anyone who works in the field would know, benchmarks are a great way to demonstrate the progress you have made. Since MINLP is a very general problem class, finding a test set that properly reflects the majority of actual problems in the wild is very challenging. Therefore, results of any benchmark need to be taken with a grain of salt. 
Nevertheless, benchmark sets are often well suited to track performance progress of solvers over time.\nBARON performance development over time BARON started out with a couple of thousand lines of GAMS code in the early 1990s, to 250,000 lines of high performance C and Fortran code. Over the last approximately 20 years, BARON\u0026rsquo;s performance on a set of 87 MINLPLib instances has improved by about 10x in speed, and 3x in the number of problems solvable. The figure below by Nick Sahinidis summarizes the improvements over time.\nFig 1. Development of BARON performance over time. The mean speed increase in the time between 2003 and 2022 is 10x, while the number of solvable problems increased 3-fold. _Figure courtesy of Prof Nick Sahinidis, The Optimization Firm \u0026amp; Georgia Institute of Technology.\nSCIP performance development over time Similarly to the BARON team, the SCIP people established a performance timeline for their code. For this, the code was recompiled for all SCIP versions able to solve MINLPs, from v3.0.2 (Oct 2013) to v8.0.1 (Jun 2022), using the same versions of CPLEX and IPOPT as LP and NLP subsolvers, respectively. This allows isolating the development of the SCIP code from the subsolver code (which of course has also been improved in those years).\nThe results of benchmarking runs were presented by Marc Pfetsch, TU Darmstadt, in the presentation \u0026ldquo;SCIP: Past, Present, Future\u0026rdquo; during the anniversary meeting (see also https://scipopt.org/20years/ ). 183 MINLP model instances from a benchmark set from MINLPLib were tested. On the same hardware, the number of solvable models increased from 54 with SCIP 3.0.2, to 108 with SCIP 8.0.1, and at the same time the mean solve time was reduced 3.1-fold.\nFig 2. Historical performance increase of SCIP. (top) Geometric mean of model runtimes shows a 3.1 increase in performance. (bottom) At the same time, the number of solvable instances increased from 54 to 108. Figure courtesy of Prof Marc Pfetsch, TU Darmstadt.\nImplications for algebraic modeling languages - or why solver independence is so valuable Apart from the fact that algebraic modeling languages (AMLs) such as GAMS enable domain experts to formulate their problems in a straightforward manner and pass them to a solver, the performance gains of both BARON and SCIP are additional, important reasons for why AMLs are so valuable for organizations that stay in the optimization business for a long time. The solver independence allows them to take advantage of all the performance gains made over the years, without having to change their model code. Also, when older solvers are not maintained by their developers any longer, and new solvers arrive on the scene (e.g. Octeract ), modelers can easily switch between those solvers, and the time invested in model development is protected. With ANTIGONE, BARON, Lindo, Octeract, and SCIP, five global MINLP solvers are currently distributed with GAMS, and we hope to see FICO Xpress joining the gang next year.\nHere at GAMS we thoroughly enjoy being affiliated with the friendly community around MINLP solvers, and welcome the open competition typical for academia, which ultimately leads to better algorithms and more solvable problems.\nMichael R. Bussieck, Arne Stolbjerg Drud, and Alexander Meeraus, “MINLPLib—A Collection of Test Models for Mixed-Integer Nonlinear Programming,” INFORMS Journal on Computing 15, no. 
1 (February 2003): 114–19, https://doi.org/10.1287/ijoc.15.1.114.15159 .\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","excerpt":"MINLP solvers typically have a rooting in academia. This article highlights the historic performance gains of two of them, BARON and SCIP.","ref":"/blog/2022/11/progress-in-minlp-solver-technology/","title":"Progress in MINLP Solver Technology"},{"body":"This year\u0026rsquo;s INFORMS Annual Meeting in Indianapolis was a great experience. Logan went with Steve and Atharv for GAMS to the conference to have a technical talk about GAMS Engine, GAMS Connect and GAMS Transfer. Besides interesting talks and meetings with old friends, we were very happy with the GAMS sponsored basketball hoop. There was fierce competition every day between all the attendees for the prized GAMS leatherman multi-tool. Through this mini game, we had a steady stream of visitors, which helped us network with new colleagues.\nWe are very much looking forward to next year and the upcoming INFORMS conferences.\nAtharv and Steve had the technical tutorial on Tuesday. Below you will find the abstract of the talk.\nOur Technical Tutorial Turning Models Into Applications– GAMS Engine, GAMS Connect, and GAMS Transfer Presented by: Dr. Atharv Bhosekar \u0026amp; Dr. Steven Dirkse\nThe right tools help you deploy your GAMS model and maximize the impact of your decision support application. If your model requires significant computational resources to solve, you may benefit from GAMS Engine, a powerful tool for solving GAMS models either on-prem or in the cloud. Engine acts as a broker between applications or users with GAMS models to solve and the computational resources used for this task. Central to Engine is a modern REST API that provides an interface to a scalable containerized system of services, providing API, database, queue, and a configurable number of GAMS workers. If you are working with data stored in different formats, or you are working with an environment such as Python, Matlab, and R, you will benefit from the GAMS Connect framework and GAMS Transfer API. GAMS Connect provides unified and platform-independent data exchange between different formats (CSV and Excel). GAMS Transfer API (available in Python, Matlab, and R) makes moving data between GAMS and your computational environment fast and easy.\n","excerpt":"\u003cp\u003eThis year\u0026rsquo;s INFORMS Annual Meeting in Indianapolis was a great experience. Logan went with Steve and Atharv for GAMS to the conference to have a technical talk about GAMS Engine, GAMS Connect and GAMS Transfer.\nBesides interesting talks and meetings with old friends, we were very happy with the GAMS sponsored basketball hoop. There was fierce competition every day between all the attendees for the prized GAMS leatherman multi-tool. Through this mini game, we had a steady stream of visitors, which helped us network with new colleagues.\u003c/p\u003e","ref":"/blog/2022/10/informs-annual-meeting-in-indianapolis/","title":"INFORMS Annual Meeting in Indianapolis"},{"body":"","excerpt":"","ref":"/authors/lrandolph/","title":"Logan Randolph"},{"body":"A clear advantage to using a system like GAMS is the large and diverse set of solvers included with the system: a real solver zoo! But this raises the question: which solver(s) should I use? 
This overview is aimed at people just starting out with modeling in GAMS and should provide a basic understanding of how math programming solvers work, their similarities and differences, and how to choose one or more solvers for a particular project. N.B.: this overview is intended to be USEFUL, not exhaustive, definitive, or even precisely correct in every detail.\nSolvers expect input in a format that is typically not easily readable by humans. Algebraic modeling languages such as GAMS facilitate the process of translating human-readable models into a format the solvers can understand. GAMS ships with a wide range of solvers, both free and commercial, with each one of them typically focusing on one or more problem types, which can broadly be categorized as linear (LP) or non-linear (NLP), plus mixed integer extensions of the two (MILP and MINLP).\nIn the time since solvers first appeared on PC platforms, there have been tremendous advances in the underlying algorithms, which now enable routine solving of problems that were intractable not that long ago. For example, in the early 1990s, a state-of-the-art LP solver was able to solve models with several tens of thousands of variables on the hardware available at the time 1. Now, models with millions of variables can be solved quickly. While some of this is certainly attributable to increases in CPU speed and better memory, most of the speedup is due to the improvements in the algorithms. Even though the underlying ideas are often the same, each solver vendor has their own additional bag of tricks and trade secrets to improve performance, such as efficient presolve processes that can substantially decrease the amount of computation to be performed by the main solver algorithm.\nEach solver therefore has its particular strengths and weaknesses when solving specific problem instances. One of the first takeaways here is that it is often not possible to predict the performance of a particular solver, given a particular problem instance. Therefore, the best approach is often to just try a few solvers on a representative sample of models, and then pick the best-performing one. Luckily, the separation of model formulation and model solving provided by algebraic modeling languages such as GAMS makes this particularly easy. Switching a solver is as easy as changing a single line of model code.\nFig. 1: Overview of the solvers included with GAMS. LP: Linear Program, QCP: Quadratically Constrained Program, NLP: Non-linear Program, MCP: Mixed Complementarity Program. Solvers that lie on the boundaries of two problem types are well suited to solve both problem types. Please note that this figure does not accurately reflect every solver's capabilities. Instead, it is meant to give a quick answer to the question: Which solver should I try first, given a problem class? This figure has been updated on Feb 14, 2024: SCIP and SoPLEX have been placed under an open source license in November 2023, COPT is able to handle MIQCPs since GAMS 42, and Octeract has been removed with GAMS 46. Solver Overview Linear Programming In linear programming (LP), all model constraints and the objective function are linear equations. Many practical problems can be solved using LP techniques, such as planning, production or transportation problems.\nIt was George Dantzig who in the 1950s pushed forward the development of LP codes using the simplex algorithm. These early codes were used in the following decades e.g. 
in the oil industry and the military on mainframe compute hardware. The advent of the IBM PC in 1980 eventually democratized the use of these algorithms and enabled more users to take advantage of mathematical optimization. Linear programming became widely used.\nEarly commercially successful LP solvers available for PCs in the 1980s / 1990s were XPRESS and CPLEX. XPRESS was originally developed in 1983 by DASH Optimization, run by Bob Daniel and Robert Ashford (who is now president of Optimization Direct, the developer of ODH-CPLEX). The software was sold to FICO in 2008, and is still one of the most successful solvers. CPLEX was started by Robert Bixby (now at Gurobi) in 1987 and is now owned by IBM. Throughout the years, both XPRESS and CPLEX have continuously been improved and are among of the most performant LP solvers to this day.\nEspecially during the time period between the late 1980s and the early 2000s, tremendous improvements in LP performance have been made, despite the fact that many believed that LP algorithms had matured to a point where they could not be improved much. In an interesting study using CPLEX, Bixby found that due to the establishment of the dual simplex algorithm, improved linear algebra, and other algorithmic factors, the speedup in LP solver performance was around 3300-fold between 1988 and 2004 just due to the better algorithms, and factoring in concurrent PC hardware improvements, the total speedup was an astonishing 5-million-fold 2. In the following two decades, progress on LP solver algorithms has slowed down, but still there was a continuous improvement of commercial solvers: A recent study by Koch et al3 found that between the early 2000s and 2020, the speedup due to better LP algorithms was on average around 9-fold, and around 180-fold when you factor in the hardware improvements during that time frame.\nAs of the writing of this article, in addition to CPLEX and XPRESS, several high quality commercial LP solvers are available: (1) MOSEK, developed by Erling D. Andersen in Denmark, was released in 1999, (2) GUROBI was released in 2009 and developed by a team of developers who had before been instrumental in the development in CPLEX (Gu, Rothberg, Bixby). (3) COPT by Cardinal Optimization, is the first commercially available LP solver from China. It was released in 2019 and shows impressive benchmark results on par with or better than the long established products. The algorithms used in these modern LP solvers are the primal or dual simplex algorithms, and interior point methods such as the barrier method. All commercial solvers offer several algorithms and can often decide by themselves which algorithm gives the best performance for a given problem. Also, the commercial codes listed above can all handle mixed integer linear programming, as described in the next section.\nIn addition to commercial solvers, some open source alternatives are available. The first, Cbc (\u0026ldquo;COIN-OR Branch and Cut\u0026rdquo;), has been around for many years and is part of each GAMS distribution. Cbc itself is a mixed integer solver (see section below), but it ships with the Clp (\u0026ldquo;Coin-or linear programming\u0026rdquo;) solver and uses that to solve LPs. Another open source LP solver is SoPLEX , developed by the Zuse-Institute, Berlin. SoPLEX is also included in the GAMS distribution. 
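Whichever of these solvers you end up with, commercial or open source, the model code itself does not change; only the solver selection does. A minimal sketch of the single-line switch mentioned above (the two-variable production model is a made-up toy example, not taken from this article or any benchmark):

Positive Variables x1 "units of product 1", x2 "units of product 2";
Variable profit "total profit";
Equations obj "profit definition", cap "shared capacity limit";
obj.. profit =e= 3*x1 + 2*x2;
cap.. 2*x1 + x2 =l= 10;
Model toy / all /;
option lp = cplex;
* to switch solvers, change the line above to e.g. "option lp = soplex;" or "option lp = copt;"
solve toy using lp maximizing profit;

Re-running the same file with a different solver name in that option line is all it takes to compare solvers on your own model.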
The relatively new solver HiGHS , developed at the University of Edinburgh, shows great performance in benchmarks and appears to be a good alternative for Cbc and SoPLEX.\nMixed Integer Linear Programming Many practical optimization problems require solving mixed integer models, where one or more of the variables is restricted to discrete values. Often these integer variables are used for decision making and can model \u0026ldquo;yes or no\u0026rdquo; decisions (\u0026ldquo;should we build a warehouse here, or not?\u0026rdquo;), or to model an industrial process (\u0026quot;How should we cut stock material to minimize waste? \u0026quot;). The addition of discrete variables typically makes models much more difficult to solve. All commercially available solvers mentioned in the previous section are able to solve mixed integer linear programs (MILPs) and take advantage of their strong LP performance to handle them.\nThe reason for this is that MILPs are most often solved using branching methods such as the \u0026ldquo;branch and cut\u0026rdquo; method, which create a search tree of subproblems: in a first step, in the root node, the MILP is simplified to an LP by allowing the integer variables to take on continuous values (the \u0026ldquo;LP-relaxation\u0026rdquo;). The relaxed model is then solved using a fast LP algorithm. In some lucky cases, the relaxed integer variables take on integer values in the optimal solution and the problem is solved in the first relaxation, but in most cases this is not the case. The model is then branched on one of the integer variables that take on fractional values in the relaxed model. For the two child nodes at each branching point, a lower or upper bound on the branching variable, equal to the next higher or lower integer value is enforced, creating another pair of LPs. These LP subproblems can be solved independently and in parallel, taking advantage of multicore CPUs. The process of branching at potentially many variables creates a search tree that can be quite large. To narrow down on an optimal solution of the original problem, a pruning process is performed. During this step, leaves of the tree are investigated and pruned if the LP solution of the leaf is worse than the current best known solution from some other place of the tree (\u0026ldquo;pruned by bound\u0026rdquo;), if the leaf LP has an infeasible solution (pruned by infeasibility), or if the leaf produces an integer solution (\u0026ldquo;pruned by integrality\u0026rdquo;). If a leaf cannot be pruned, another branching point is added. Once there is no way to branch any further, the decision variables of the leaf with the best solution constitute the optimal solution to the original problem. A nice explanation of the process can be found in this video on the Gurobi Youtube channel.\nAn interesting commercial solver not mentioned in the previous section is ODH-CPLEX by Optimization Direct 4. This solver is dedicated to solving MIPs and is unique in that it makes use of the symbolic names in the model to decompose it into submodels, which can be solved efficiently in parallel.\nIn addition to commercial solvers, some open source alternatives are available, such as the previously mentioned CBC and HiGHS. Another open source MILP solver with good performance is SCIP (\u0026ldquo;Solving Constraint Integer Programs\u0026rdquo;). Just as has been the case with LP solvers, the solver companies have added many tricks to improve MILP performance. 
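Before turning to performance numbers, it helps to see how such a yes-or-no decision actually enters a model. A minimal GAMS sketch (the warehouse sites, costs, and the capacity-linking constraint are invented purely for illustration):

Set i "candidate warehouse sites" / site1*site3 /;
Parameter fcost(i) "fixed cost of opening a site" / site1 100, site2 120, site3 90 /;
Binary Variable   build(i) "1 if a warehouse is built at site i, 0 otherwise";
Positive Variable ship(i)  "units shipped from site i";
Variable          cost     "total cost";
Equations defcost "total cost definition"
          link(i) "shipping is only possible from opened sites"
          demand  "total demand must be met";
defcost.. cost =e= sum(i, fcost(i)*build(i) + 2*ship(i));
link(i).. ship(i) =l= 50*build(i);
demand..  sum(i, ship(i)) =g= 80;
Model warehouse / all /;
option mip = cbc;
solve warehouse using mip minimizing cost;

With build(i) declared binary, a MIP solver attacks exactly this kind of model with the branch-and-cut machinery described above.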
Koch et al estimates that algorithmic improvements alone are responsible for a speedup of commercial MIP solvers of approx 50-fold in the last two decades, and factoring in hardware improvements the overall speedup has been around 1000-fold during the same time frame.\nNonlinear Problems Many physical or engineering problems cannot be modeled adequately using linear programming, since many of the equations describing physical phenomena are nonlinear in nature. The same is true for problems in the finance or economics area. These nonlinear problems (NLPs) can be much more challenging to solve than LP or MILP problems. The level of difficulty depends on some key properties of the problem at hand.\nThe most important property determining the solvability of an NLP is convexity. For a convex model, a line joining any two feasible points of the problem is fully contained in the feasible region. This implies that a local solution must be a global solution. (If we know something about curvature, we can talk about solution uniqueness as well). Convexity also suggests that \u0026ldquo;sliding downhill\u0026rdquo; (i.e. moving in a direction that stays feasible and reduces the linearized objective) will eventually reach the optimal solution. Gradient information can conveniently be provided to the solver by GAMS.\nThere are several good solvers that are able to solve nonlinear problems. First, some of the LP / MIP solvers like CPLEX, GUROBI, XPRESS, or COPT can solve a (convex) subclass of NLPs, in which only quadratic nonlinearities occur, so called \u0026ldquo;quadratically constrained problems\u0026rdquo; (QCPs). MOSEK is a solver that can also solve these problems, but is particularly good for solving conic problems. Many quadratically constrained problems can be reformulated as conic problems and solved elegantly this way.\nGoing beyond QCPs to more general NLPs, other solvers are required. The oldest commercially available one is MINOS 5, which solves successive subproblems of the original NLP, in which the nonlinear constraints are replaced by linearized versions. MINOS was developed by Bruce Murtagh and Michael Saunders in the 1980s and is still relevant and competitive today. Another NLP solver named CONOPT, released in 1985 by Arne Drud at ARKI Consulting, uses the general reduced gradient (GRG) algorithm, which is particularly useful for solving very nonlinear problems where other methods have difficulty finding feasible solutions. Yet another NLP solver, SNOPT was developed by Philip Gill, Walter Murray and Michael Saunders at Stanford and UCSD and uses a sequential quadratic programming method to solve large scale NLPs. It is useful for problems where the evaluation of gradients is computationally expensive, since it uses less function evaluations than CONOPT or MINOS. In 2001 KNITRO was developed by Richard Waltz, Jorge Nocedal, Todd Plantenga and Richard Byrd. KNITRO is a very powerful solver for large scale NLPs. An open source alternative is the free solver IPOPT, also available as \u0026ldquo;IPOPTH\u0026rdquo; from GAMS, with included commercial high performance subroutines for the linear solver code required to solve subproblems.\nMixed Integer Nonlinear Problems Just like you turn an LP into a MILP by adding one or more discrete variables to the model, the same can be done with non-linear programs.\nThere are several levels of difficulty here, depending on the order of non-linearity. 
If you turn a quadratically constrained problem into a mixed integer quadratically constrained problem (MIQCP), often the same solvers that can handle MILPs and QCPs can also handle these problems: CPLEX, GUROBI, XPRESS, MOSEK and KNITRO are good solvers to try first for MIQCPs.\nWhen you add integer variables to a general nonlinear problem, you create a mixed integer nonlinear program (MINLP), and the mechanisms provided by the solvers from the MILP camp do not suffice. Just like a non-convex NLP, the system can have several local optima, and in larger models it can be very difficult or even impossible to know if an identified solution is in fact a global solution. DICOPT, developed by Duran \u0026amp; Grossmann in the 1980s 6, was the first commercial solver for MINLP problems. This was followed by alphaECP (Westerlund et al) in the 1990s. Good open source alternatives for MINLPs are SCIP and the relatively new SHOT , developed by Andreas Lundell and Jan Kronqvist, and released in 2020. A webinar recording explaining the underlying principles of this solver and how to use it with GAMS is available on YouTube. Similarly to the progress with MILP algorithms, in the last two decades there have been steady improvements in MINLP solver algorithms, which include convexification of non-convex problems, decomposition techniques and interval methods.\nThere are specialized solvers for finding global solutions of nonconvex problems. Similarly to the MILP solvers explained above, these solvers subdivide the original model into smaller submodels that can be solved with existing algorithms. BARON, developed by Nick Sahinidis at The Optimization Firm, implements a \u0026ldquo;branch and bound\u0026rdquo; type algorithm, explained in Ryoo \u0026amp; Sahinidis (1995)7. BARON has been part of the GAMS distribution since 2001, and is one of the best-performing of these global solvers. The second established and high-quality global optimization solver is LINDO, which uses \u0026ldquo;branch and cut\u0026rdquo; mechanisms, and can handle non-smooth functions such as \u0026ldquo;abs()\u0026rdquo;, which are often problematic since they cannot be differentiated at all points. Another branch-and-bound global optimization solver is ANTIGONE, developed by Floudas \u0026amp; Misener 8. ANTIGONE exploits special structures to solve convex relaxations of the non-convex original problem, and hands off solving the relaxed subproblems to CPLEX and CONOPT. Another solid choice with good performance for global optimization is the aforementioned SCIP . In addition to the established global optimization solvers listed above, there is a new entrant to the market called Octeract, which is particularly good at making use of parallel processing on modern CPU architectures.\nMixed Complementarity Problems The MCP problem type (prevalent in the field of economics) is structured differently from the problem types in the previous sections. An MCP has no objective, and no constraints in the usual sense. Instead, an MCP is a set of complementarity conditions: for each variable, a matching function complements or is perpendicular to the variable with regard to its bounds. GAMS ships with the dedicated MCP solvers PATH, MILES, and NLPEC. In addition, KNITRO has gained the ability to solve MCP problems in recent releases.\nHow to Choose a Solver While digesting the above paragraphs, you might have asked yourself how on earth you can possibly decide which of the many solvers is the correct one to try. 
Fortunately, with an algebraic modeling system such as GAMS this process is not as hard as it seems. Here is a suggested workflow for those who just start out with a new project and do not have any GAMS license yet:\nStart with a GAMS demo license https://gams.com/try_gams/ .\nOnce your prototype exceeds the size limitations of the demo license, you can upgrade to a GAMS / Base license, which includes a range of open source solvers. These might be sufficient for your problem.\nOnce you hit the limits on the free solvers (e.g. in terms of speed, robustness, or solution quality), it might be time to upgrade to a high performance commercial solver. We can provide you with time limited evaluation licenses for our commercial solvers, so you can try out different ones and then purchase the one that performs best for your problem.\nThe ability to quickly change solvers with a single line of code is a huge benefit of GAMS, and you should make use of it! There are publicly available benchmarks that favor some solvers over others, but it is really important to keep in mind that your own model is not part of the benchmark and might behave differently.\nRefer to figure 1 to narrow down to the solvers that are recommended for your problem type.\nRobert E. Bixby, “Solving Real-World Linear Programs: A Decade and More of Progress,” Operations Research 50, no. 1 (February 2002): 3–15, https://doi.org/10.1287/opre.50.1.3.17780 \u0026#160;\u0026#x21a9;\u0026#xfe0e;\nRobert E Bixby, “A Brief History of Linear and Mixed-Integer Programming Computation,” Documenta Mathematica, 2012, 16, https://www.math.uni-bielefeld.de/documenta/vol-ismp/25_bixby-robert.pdf \u0026#160;\u0026#x21a9;\u0026#xfe0e;\nThorsten Koch et al., “Progress in Mathematical Programming Solvers from 2001 to 2020,” EURO Journal on Computational Optimization 10 (January 1, 2022): 100031, https://doi.org/10.1016/j.ejco.2022.100031 .\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nhttp://www.optimizationdirect.com/ \u0026#160;\u0026#x21a9;\u0026#xfe0e;\nhttps://web.stanford.edu/group/SOL/algo.html \u0026#160;\u0026#x21a9;\u0026#xfe0e;\nMarco A. Duran and Ignacio E. Grossmann, “An Outer-Approximation Algorithm for a Class of Mixed-Integer Nonlinear Programs,” Mathematical Programming 36, no. 3 (October 1, 1986): 307–39, https://doi.org/10.1007/BF02592064 .\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nH. S. Ryoo and N. V. Sahinidis, “Global Optimization of Nonconvex NLPs and MINLPs with Applications in Process Design,” Computers \u0026amp; Chemical Engineering 19, no. 5 (May 1, 1995): 551–66, https://doi.org/10.1016/0098-1354(94)00097-2 .\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nRuth Misener and Christodoulos A. Floudas, “Global Optimization of Mixed-Integer Quadratically-Constrained Quadratic Programs (MIQCQP) through Piecewise-Linear and Edge-Concave Relaxations,” Mathematical Programming 136, no. 1 (December 2012): 155–82, https://doi.org/10.1007/s10107-012-0555-6 .\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","excerpt":"Math programming solvers are the workhorses called upon by GAMS to produce optimal solutions to your mathematical models. 
This overview provides assistance when you are faced with selecting the correct solver for your particular model.","ref":"/blog/2022/09/an-overview-of-math-programming-solvers/","title":"An Overview of Math Programming Solvers"},{"body":"","excerpt":"","ref":"/categories/general/","title":"General"},{"body":"","excerpt":"","ref":"/categories/tips/","title":"Tips"},{"body":"This year\u0026rsquo;s annual conference of the Operations Research Society of Germany (GOR e.V.) was held in Karlsruhe from September 6 to September 9. Contributions revolved around the themes of Energy, Information and Mobility. After the long hiatus due to Covid it was great to see our colleagues in person again and participate in fruitful discussions. Thank you to everyone who was involved in organizing this great conference!\nOur three presentations this time:\nYou can find the presentation slides further below.\nScalable Optimization in the Cloud with GAMS and GAMS Engine by Stefan Mann, Frederik Proske and Michael Bussieck The common type of infrastructure to run GAMS models on for most users was and is their laptops or local workstations. This approach works, but has some limitations, which become more apparent as model sizes increase, or the number of model users increases. To overcome the lack of scalability of local computers, many users have begun to implement their own cloud based solutions around GAMS, but this requires a substantial investment in time and resources to implement.\nTo fill this gap we have recently developed a Kubernetes based, scalable and cloud native solution to solve GAMS models, which we call GAMS Engine. Engine scales both horizontally (many parallel instances), and vertically (instances can grow to TB of memory and 100s of CPUs). Engine also includes a job scheduler, quota and permission management.\nIn this presentation we will describe how we implemented the solution and how it is different from more traditional uses of Kubernetes. We will also talk about how GAMS Engine enables customers to fully automate their business optimization processes with little development effort.\nModel Deployment in GAMS by Robin Schuchmann In most cases, using GAMS in the typical fashion - i.e. defining and solving models and evaluating the results within the given interfaces is a sufficient way to deploy optimization models. The underlying field of mathematical optimization, in which the focus is not so much on visualization as on the problem structure itself, has remained a kind of niche market to this day. In the large and very extensive segment of business analytics, however, intuitive deployment and visualization is indispensable. Since these two areas increasingly overlap, the way optimization software is used has also changed significantly. Whereas applications used to be invoked via the command line on a local computer, today many users want to log into an online service and perform their optimization on a centralized compute resource. In this talk, reallife examples are used to show what modern software solutions with GAMS can look like. We present how to turn a GAMS model into an interactive web application in just a few steps. In addition, the generation, organization, and sensitivity analysis of multiple scenarios of an optimization model is addressed. We demonstrate how a model written in GAMS can be deployed with this application on either a local machine or a remote server. 
While data manipulation and visualization as well as scenario management can be done via the web interface, the model itself is not changed. Therefore, the Operations Research analyst can keep focusing on the optimization problem while end users have a powerful tool to work with the data in a structured way and interactively explore the results.\nResearch \u0026amp; Development Activities at GAMS by Frederik Fiand and Michael Bussieck GAMS has been providing its users with cutting edge optimization technology for several decades. To this end, it is necessary to keep a constant eye on new promising technologies and developments. In recent years, GAMS has therefore increasingly participated in multidisciplinary research projects that bring together specialists from areas such as mathematical optimization, high-performance computing, machine learning, visualization and quantum computing. The application areas are also diverse and address relevant challenges of our time, such as logistic planning problems or energy system analysis, a very active research area in which numerous scientists are developing sustainable solutions for tomorrow’s energy systems. This presentation gives an overview of various recent research projects with GAMS participation. We give insights how projects funded by the Federal Ministry for Economic Affairs and Climate Action such as BEAM-ME, UNSEEN, and ProvideQ, the research campus MODAL funded by the Federal Ministry for Education and Research, or our cooperation with the Energy Technology Systems Analysis Program (ETSAP) of the International Energy Agency (IEA) led to concrete developments giving our project partners access to pioneering technologies through GAMS.\nWe are looking forward to seeing everyone again next year in Hamburg!\nCheck our presentation slides for more information:\nName: Size / byte: OR2022_Model Deployment in GAMS.pdf 4528764 OR2022_R_and_D_Activities_at_GAMS.pdf 1114343 OR2022_Scalable Optimization in the Cloud with GAMS and GAMS Engine.pdf 1515888 ","excerpt":"This year\u0026rsquo;s annual conference of the Operations Research Society of Germany (GOR e.V.) was held in Karlsruhe from September 6 to September 9. Contributions revolved around the themes of Energy, Information and Mobility.","ref":"/blog/2022/09/gams-at-the-or2022-in-karlsruhe/","title":"GAMS at the OR2022 in Karlsruhe"},{"body":"","excerpt":"","ref":"/authors/lwestermann/","title":"Lutz Westermann"},{"body":"GAMS IDE has been part of the GAMS distribution for more than 20 years. The software was originally written in Delphi, and is progressively getting more difficult to maintain: It is getting harder to find seasoned Delphi developers, and the toolchain support is falling behind what is available in other programming languages. Importantly, Delphi support is restricted to Windows, but we want to have a good GAMS development environment in place on all our supported platforms. For these reasons, we decided to start with a blank slate and started development of GAMS Studio in C++ in early 2019, using the cross platform QT framework. GAMS Studio has been the default GAMS development environment since GAMS release 31 and we plan to discontinue GAMS IDE in the near future.\nIn a recent survey we conducted among a random sample of our users, we learned that a significant proportion of Windows customers still use GAMS IDE and have not made the switch to GAMS Studio. 
There are a variety of reasons for this, such as lack of time to learn a new tool, or bugs present in earlier Studio versions. We have been busy ironing out those bugs, and addressing specific issues and feature requests from our users, so please give Studio a try soon (or have another look if you have tried earlier versions).\nBelow is a brief tour of the important features that make Studio a great environment for developing GAMS models.\nProject handling Projects are a way to keep track of the various input files, data files and output that belong together. When starting GAMS Studio for the first time, it starts with an empty Project Explorer . Opening a new file creates a new project per default. For each Studio project you can set a \u0026quot;base directory\u0026quot;, which is the root of the project's directory structure shown in the project explorer. You can also set a \u0026quot;working directory\u0026quot;, to tell GAMS where to place the files generated during model runs. The two directories can be identical, or you can choose to set them to different values.\nIn GAMS Studio one can move a file between projects by drag and drop in the project explorer and add multiple .gms files to a project. In this case the concept of the \u0026ldquo;main file\u0026rdquo; of projects becomes important: While the classic GAMS IDE always executes the currently active .gms file, GAMS Studio always executes the main file of the currently active project. As an example, you can look at a.lst and press F9 to rerun a.gms (GAMS IDE would not run anything in this case). After adding b.gms to project a, you can also execute a.gms by pressing F9 while looking at b.gms in the editor. If you want to execute b.gms instead, you can make it the main file of the project. For this, right-click in the project explorer on b.gms and select \u0026ldquo;Set as main file\u0026rdquo;. Now the green arrow indicating the main file of a project in the project explorer switches from a.gms to b.gms and pressing F9 will execute b.gms in the project working directory.\nIt should be noted that what is shown in the project explorer of GAMS Studio is only a view into the file system: If you remove a file from a project, it is not deleted from the filesystem. Also, if you drag a file from one project to another in the project explorer, the file is not actually moved into the project folder. Instead, the file stays at its original location within the file system.\nSupport of Engine, MIRO and NEOS server Model deployment is becoming ever more important. To address this need, we have developed GAMS MIRO for generating graphical user interfaces for GAMS models, and GAMS Engine to support running GAMS models on centralized, powerful servers. Both of these new products make it easy to share models with distributed teams and to make use of compute resources in the cloud.\nMIRO Support GAMS Studio fully supports GAMS MIRO development (if you do not know about MIRO yet, please have a look at our product page and the example gallery to learn more). Once you have annotated your GAMS model with MIRO language features, you can launch the MIRO configuration mode directly from Studio to configure the graphical user interface of your MIRO application. 
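The annotation step is lightweight: MIRO essentially only needs to know which symbols form the app's data contract. A rough sketch of what this can look like (the symbols below are invented placeholders; please check the MIRO documentation for the authoritative syntax and options):

$onExternalInput
Set       i         "regions, shown as an editable table in the app" / r1*r3 /;
Parameter demand(i) "demand per region"                              / r1 10, r2 25, r3 15 /;
$offExternalInput

* ... the usual model declarations and the solve statement go here ...

$onExternalOutput
Parameter report(i) "result values rendered by MIRO, e.g. as a chart";
$offExternalOutput

Everything declared between these tags becomes editable input or displayed output in the generated web interface; the rest of the model stays as it is.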
Once you are done, you can start the application, and also use the Assembly and Deploy feature to create a self contained MIRO application that you can share with others or deploy with MIRO server.\nEngine Support If you want to run your models on GAMS Engine , it is as simple as selecting the menu item, and then entering the URL, and your username and password for your Engine installation or for your Engine SaaS account. Studio will take care of uploading the model code to Engine, queuing it to be solved, and downloading the results when the job is finished. The whole process is totally transparent and feels the same as solving a model locally on your own computer, but with the benefit of having access to much more powerful compute resources. For our academic users, we also include the option to run your model on NEOS, which is a free service provided by the university of Wisconsin and provides access to a range of different commercial solvers. To learn more about this, have a look at this blog article .\nGDX viewer GDX files are the native GAMS data exchange format, which contain sets, variables, equations and all the other bits that make up the input or output of a GAMS model. With the Studio GDX Viewer you can get a quick overview of all these components in a list or in a tabular view. In the case of multi-dimensional data, you can rearrange the tabular view by simply dragging a column to another position. You can also filter and sort, and quickly copy data to Excel or other spreadsheet software by right clicking in the table, and copying data in comma- or tab-separated format. The GDX viewer is fast, which makes analysing bigger datasets a breeze.\nIn addition to the GDX viewer, there also is a GDX Diff utility , available from the project explorer (select two GDX files, right click and select \u0026ldquo;Open in GDX Diff\u0026rdquo;), or from the \u0026ldquo;Tools\u0026rdquo; menu. This tool allows you to comcpare the content of two GDX files, e.g. to compare the results of two model runs with different input data.\nReference file viewer When your models grow in size and you start splitting things up into multiple files, it can become difficult to keep track of where certain elements are defined and referenced. GAMS Studio offers the reference file viewer to help you keep track of where things come from, and where they are used. With the reference file viewer you can drill down into all the sets, parameters, equations and more of your model and get exact line numbers where they are used. Clicking on a location then takes you directly to the correct place in your model code. You can find more information about the reference file viewer in the GAMS documentation .\nEditor for default GAMS configuration Default GAMS configuration options can be set in the file gamsconfig.yaml. This can be edited by hand, but Studio offers a much more convenient configuration editor, which allows you to set individual defaults and gives a brief explanation for each option and the available values for the different options.\nOther Modern editor features GAMS IDE is 20 years old, and you can definitely tell that when looking at all the modern editor features that you get with Studio, but not with the old IDE.\nDistraction free mode. Gets rid of all widgets / windows but the main one. Great in combination with the full screen mode, especially on smaller screens.\nCode completion, tooltips, help integration. Just start typing a GAMS keyword, and Studio will present you with a list of completions. 
Hover your mouse over a keyword, and Studio will show a small tooltip with a brief explanation of the keyword. If your model contains errors, the tooltip will help you get to the bottom of the problem. Press F1 on a keyword, and Studio will open the relevant section in the GAMS documentation.\nCode folding. Collapse multi-line code sections such as $onText comments, $onEmbeddedCode sections, if statements and more.\nSelectable Text Encoding. You can write GAMS models using your preferred encoding and include emojis if you wish. GAMS IDE only had support for ISO-8859-1, while Studio defaults to UTF-8.\nDark mode. For the night owls, you can switch Studio to dark mode, or even create your own custom editor themes.\nThere is also block editing, navigation history, bookmarks, help integration, duplicate editor tabs with synchronised scrolling ('pin view'), and much more.\nNot available in GAMS Studio At this stage, there are a few things not available in GAMS Studio which were available in the GAMS IDE. One of them is the text diff tool, which we plan to integrate at a later stage. Until then, you can use any of the excellent free or paid diff tools available, e.g. WinMerge on Windows, kdiff3 on Linux, Beyond Compare (not free) on Mac, or the cross-platform meld.\nAnother feature we did not include in Studio is the ability to create graphs directly in the GDX viewer. We feel that other, external tools such as Excel are much better at this, and copying data into Excel to plot them can be done with a couple of clicks.\nTo find out more about GAMS Studio, have a look at the documentation. And as always, if you have the feeling something is not working as expected or there is an important feature missing, please do not hesitate to let us know at support@gams.com ","excerpt":"GAMS IDE has been with us for more than 20 years. It is about time to say goodbye to this part of the GAMS distribution and learn GAMS Studio instead.","ref":"/blog/2022/07/moving-from-gams-ide-to-gams-studio/","title":"Moving from GAMS IDE to GAMS Studio"},{"body":"We are very excited to introduce a new way of collecting data from a range of external sources, transforming it, and making it available to GAMS models. Enter \u0026ldquo;GAMS Connect\u0026rdquo;! Below you can read a short description of this new tool set, and why we think it will be incredibly useful to all GAMS modelers.\nBackground In the realm of software, GAMS is one of the products on the more long-lived end of the spectrum. The first published record of the initial attempts to develop a general algebraic modeling system by Alex Meeraus dates back to 1976 (International Symposium on Mathematical Programming, Budapest, p. 185). The first commercial version of GAMS was available in 1987, and since then many people have contributed to the further development of the GAMS distribution. Since GAMS does not have a module system like more conventional programming languages, a lot of these contributions were submitted as little command line tools in the Unix spirit, e.g. for reading or writing Excel files, CSV files, accessing Microsoft Access databases, interfacing with Matlab, and many more. 
This system has been working quite well for many years, but it is becoming increasingly difficult to keep all of these tools updated, and also to make them available for all platforms supported by GAMS. Also, the syntax for using the different tools is not uniform and can be confusing to the user. We therefore felt a more modern and unified way of reading and writing data from and to different formats was in order. As a big step in this direction, we can now unveil \u0026ldquo;GAMS Connect\u0026rdquo;. GAMS Connect builds upon the concept of \u0026ldquo;extract, transform, load\u0026rdquo; (ETL), which aims to get data from a range of different sources into a unified, central data storage (the \u0026ldquo;Connect Database\u0026rdquo;), and from there to other formats, with the help of reader agents, transformer agents, and writer agents (Fig 1).\nFig. 1: Multiple Agents share the same central database\nThis concept makes possible a pluggable data import / export system, which is configured via YAML syntax. Currently GAMS Connect supports CSV, GDX, and Excel as external file formats for reading and writing. Here is a simple example:\n- CSVReader: file: distance.csv name: distance indexColumns: [1, 2] valueColumns: [3] fieldSeparator: \u0026#39;;\u0026#39; decimalSeparator: \u0026#39;,\u0026#39; - CSVReader: file: capacity.csv name: capacity indexColumns: [1] valueColumns: [2] - GAMSWriter: symbols: - name: distance newName: d - name: capacity newName: a These lines instruct GAMS Connect to read two CSV files. From the first (distance.csv), we read values from column 3 into a symbol with the name \u0026ldquo;distance\u0026rdquo;, using index values from columns 1 and 2. From the second CSV file (capacity.csv) we read values from column 2 into a symbol named \u0026ldquo;capacity\u0026rdquo;, using index values from column 1. At this stage, those values reside in the GAMS Connect database only, and we can now make them available to GAMS. This is done in the last \u0026ldquo;GAMSWrite\u0026rdquo; block, which creates the symbols \u0026ldquo;d\u0026rdquo; and \u0026ldquo;a\u0026rdquo; from the previously collected data.\nThis way of instructing GAMS Connect to read and write data is obviously very flexible and powerful, and you can find more complex examples in the documentation .\nThe Connect YAML syntax can be utilized in three different places:\nVia GAMS command line parameters \u0026ldquo;ConnectIn\u0026rdquo; and \u0026ldquo;ConnectOut\u0026rdquo; Via embedded code Connect (likely the most common case) Via a standalone command line utility \u0026ldquo;gamsconnect\u0026rdquo; Once data is in the Connect database, and before writing to the GAMS database, you can use the \u0026ldquo;Projection\u0026rdquo; agent to project and aggregate data onto a reduced index space of a GAMS symbol using statistical functions like max, min, mean, median and more. If that is not enough, you can even use Python code inside the YAML instructions to implement very complex data manipulation procedures.\nDesign Decisions During the conception of GAMS Connect, we made a few very deliberate decisions:\nConnect agents are \u0026ldquo;simple\u0026rdquo; by design and each one supports exactly one functionality. The power of GAMS Connect stems from the ability to string multiple agents together via YAML. Connect agents provided by us will be platform independent. 
All agents will be controlled by a consistent syntax and hence make life easier for our users (they are also case sensitive, which is different from usual GAMS syntax). Code readability is king, therefore we will avoid abbreviations and instead use long, explicit, camelCased parameter names. Fail early: the YAML syntax is validated first, before executing any instructions, to allow early discovery of mistakes. Try it! We are very excited about GAMS Connect and encourage you to try it out for yourself. Update to GAMS 39 and give it a spin! Also make sure to check our new releases for more GAMS Connect functionality in the future. Over time we will implement more agents (e.g. for SQL databases, HTML, Txt and more), integrate Connect with GAMS Studio, and even allow creating your own Connect agents in Python!\n","excerpt":"With GAMS 39, we introduce \u003cem\u003eGAMS Connect\u003c/em\u003e, a new tool to import and export data to and from different sources in a uniform way.","ref":"/blog/2022/05/introducing-gams-connect/","title":"Introducing GAMS Connect"},{"body":"","excerpt":"","ref":"/categories/release-news/","title":"Release News"},{"body":"On April 03-05, 2022, the INFORMS Business Analytics Conference was back as an in-person conference for the first time since 2019. Therefore Adam, Stefan, and Maurice traveled to Houston to attend the conference and represent the GAMS team. Our team held a successful pre-conference workshop as well as a technology tutorial on GAMS ENGINE and GAMS Transfer and had lots of interesting conversations at our booth in the exhibition hall.\nIn addition, GAMS sponsored this year’s Franz Edelman Award and attended the Award ceremony. Once more, the work of all finalists was very impressive, and we would like to congratulate them again for their accomplishments. It was a very well-organized conference with plenty of interesting talks and workshops to attend.\nOur Technical Workshop Model deployment and data wrangling with GAMS Engine and GAMS Transfer Presented by: Adam Christensen \u0026amp; Stefan Mann\nThe right tools help you deploy your GAMS model and maximize the impact of your decision support application.\nGAMS Engine is a powerful tool for solving GAMS models, either on-prem or in the cloud. Engine acts as a broker between applications or users with GAMS models to solve and the computational resources used for this task. Central to Engine is a modern REST API that provides an interface to a scalable, containerized system of services, providing API, database, queue, and a configurable number of GAMS workers. GAMS Engine is available as a standalone application, or as a Software-As-A-Service solution running on AWS.\nGAMS Transfer is an API (available in Python, Matlab, and soon R) that makes moving data between GAMS and your computational environment fast and easy. By leveraging open source data science tools such as Pandas/Numpy, GAMS Transfer is able to take advantage of a suite of useful (and platform independent) I/O tools to deposit data into GDX or withdraw GDX results to a number of data endpoints (i.e., visualizations, databases, etc.).\n","excerpt":"The INFORMS Business Analytics Conference was back as an in-person conference for the first time since 2019. 
Therefore Adam, Stefan, and Maurice traveled to Houston to attend the conference and represent the GAMS team.","ref":"/blog/2022/04/informs-business-analytics-in-houston/","title":"INFORMS Business Analytics in Houston"},{"body":"At the heart of the GAMS Engine is a REST API that allows you to communicate via HTTP requests to submit jobs, query job status, retrieve results, invite users, add namespaces, etc. Everything is managed via this API. With the most recent release 22.04.12, Engine now supports sending HTTP requests back to you when a job finishes. This means that you no longer need to poll Engine to find out when a job is finished; instead, Engine tells you. This can be used to get notified via business communication tools such as Slack, Microsoft Teams or Discord, or to seamlessly integrate GAMS jobs into your applications.\nThis feature is disabled by default, but can be enabled either for administrators only or for everyone.\nIn addition, GAMS Engine 22.04.12 also supports assigning a custom tag to a job, a human-readable string to identify a job; jobs can now be shared with user groups so that others can access them; and you can protect your model files from being overwritten to preserve the intellectual property of your models.\nAnd of course, all these new features are also supported by the Engine web interface.\nCheck out the release notes to find out more!\n","excerpt":"Engine now supports webhooks to notify you when a job is finished.","ref":"/blog/2022/04/engine-now-talks-to-you/","title":"Engine now talks to you"},{"body":"","excerpt":"","ref":"/authors/fproske/","title":"Freddy Proske"},{"body":"","excerpt":"","ref":"/authors/epanos/","title":"Evangelos Panos"},{"body":"","excerpt":"","ref":"/categories/examples/","title":"Examples"},{"body":"","excerpt":"","ref":"/categories/gams-applications/","title":"GAMS Applications"},{"body":" Image credit: https://ibmdecisionoptimization.github.io Energy system models (ESMs) are mathematical programs that represent the energy systems of countries or regions. Researchers use ESMs to investigate long-term scenarios describing the evolution of all sectors of the energy system of a given country or group of countries over the 21st century. These sectors can include buildings, transportation, industry, power generation, agriculture and more. ESMs, and energy systems analysis research, are well embedded in energy strategy deliberations worldwide and allow for more informed planning of capacity expansion and other investments by calculating the time evolution of the whole system under different assumptions, e.g. development of CO2 price, availability of renewable energy, future demand, energy and climate change mitigation policies, etc.\nMany ESMs are implemented in GAMS1. A model with decades of history is the TIMES energy systems modeling framework, which is developed by the Energy Technology Systems Analysis Programme (ETSAP) of the International Energy Agency (IEA). TIMES is used by at least 50 countries worldwide and more than 120 research institutes, companies and governmental agencies.\nAs a very active research area, energy system modeling is challenging for even the best state-of-the-art solver algorithms. The increasing complexity of energy systems, e.g. due to the increasing shares of renewable energy, and the trend towards building models with higher levels of granularity on the one hand and increasing time horizons on the other hand, frequently result in challenging large-scale problems.
When dealing with such problems, the targeted use of solver options can often lead to dramatic performance improvements. While the solvers used in GAMS include sophisticated heuristics to find a suitable parameterization of the solution algorithm, model developers can improve the solver’s performance by combining their knowledge of the model’s structure with the comprehensive information provided by GAMS in the output log and .LST files, in order to tweak some of the solver’s options.\nAs a concrete example, the recent ETSAP Webinar “CPLEX Barrier Options for TIMES models” provides an extensive overview on how to analyze and tune the CPLEX Barrier algorithm for a particular TIMES model, thereby showing the great potential for performance improvements when dealing with challenging large-scale LPs. This webinar provides just the right amount of background information to give modelers useful insights into the complex internals of state-of-the-art optimization algorithms, and relates those insights to various potentially useful solver options whose impact on the solution time is worthwhile to explore. The insights and hints are by no means restricted to TIMES models but provide a useful cookbook to tune the widely used CPLEX barrier algorithm for large-scale LPs.\nWatch the video here:\n(The video will open on youtube.com) Download the presentation slides here: https://iea-etsap.org/webinar/CPLEX%20options%20for%20running%20TIMES%20models.pdf See e.g. https://wiki.openmod-initiative.org/wiki/Open_Models#Overview_of_models_by_type.2C_software.2C_implementation_and_processing \u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","excerpt":"A recent webinar by Evangelos Panos demonstrates how to tune CPLEX options to get more bang for your buck when running large problems such as the TIMES ESM framework.","ref":"/blog/2022/04/how-to-tune-cplex-options-for-times-models/","title":"How to Tune CPLEX Options for TIMES models"},{"body":"Custom Data Connectors In the last update MIRO Server was given a REST API, which (among other things) can be used to provide or retrieve scenario data from outside the app. With the next release we go one step further: MIRO now supports custom data connectors . This new feature enables the import and export of scenario data from within the app using custom-implemented routines. With this you have the following advantages:\nWith self-written data connectors you can support any data format and mechanism for import and export (as long as it can be implemented in R). Data can be processed as desired before import/export. Works not only for MIRO Server, but also for MIRO Desktop applications. Custom data connectors are seamlessly integrated into a MIRO application. Example: Let’s assume the input data for a GAMS model should come from an external source, e.g. an ERP system. A custom data connector connects to that system and prepares the data to be compatible with MIRO. The user can now load the data directly from the ERP system via MIRO\u0026rsquo;s standard import dialog.\nAfter solving the model, two things should happen: first, all results are loaded into a BI system for further processing; second, another custom data connector analyzes the scenario data and generates a PDF report that can be downloaded. The integration of MIRO into existing IT infrastructure is made even easier by the new data connectors! Signing MIRO Apps Another novelty is that it is now possible to sign MIRO apps .
This makes it easier for users to ensure that the MIRO app they are about to use has been created by a verified developer. While unsigned apps can always be installed on MIRO Desktop, MIRO Server administrators can optionally prohibit the addition of apps that are not signed by trusted developers.\nColor Themes Some users have expressed the wish to have more flexibility regarding the color style of a MIRO application. Since MIRO 2.2 you can therefore define custom stylesheets . With the current release we added 5 completely new color themes, which can be used besides the default theme. All themes are available in both light and dark mode and can be used with MIRO Desktop and MIRO Server . If you think there are other themes missing, let us know!\nCheck out the release notes to see all the other new features!\n","excerpt":"Connect MIRO with external sources and exchange data in any format - this is now possible with the new MIRO 2.3, which also shines in new colors.","ref":"/blog/2022/03/new-features-in-miro-2.3/","title":"New features in MIRO 2.3"},{"body":"","excerpt":"","ref":"/authors/rschuchmann/","title":"Robin Schuchmann"},{"body":" Area: Out-Of-Home-Advertising\nProblem class: MILP\nAllocating Out-of-Home Advertising Campaigns Company Introduction VIOOH is a leading global digital out of home (OOH) marketplace. Launched in 2018 and with headquarters in London, VIOOH’s platform connects buyers and sellers in a premium marketplace, making OOH easily accessible.\nVIOOH is a leading out of home advertising expert Led by a team of digital OOH and programmatic tech experts, VIOOH is pioneering the transformation of the OOH sector, championing its role in enhancing omni-channel digital campaigns through the use of programmatic capabilities and data. VIOOH currently trades programmatically in 15 markets, with more to follow.\nProblem VIOOH empowers media owners in the construction of Out of Home marketing campaigns for their clients. Media owners own a large inventory of advertising panels which they want to efficiently manage while satisfying as much as possible the business requirements of their clients.\nPut simply, the problem to solve is matching a subset of traditional “static” and/or digital panels (inventory) for each new incoming campaign that satisfies the requirements.\nThe optimum panel allocation is the problem to solve We can see this as a combinatorial problem in mathematics. There are many combinations of panels but we want a selection that best fits the business objectives. Yet it can be very complex to satisfy all the objectives.\nFor instance, a typical list of requirements could be:\nCampaign broadcast on 100 panels Campaign seen by at least 2M people Maximal budget: £100K Geographical repartition: 50% of the panels in London 30% in Manchester 20% in Bristol Proximity targeting: at least 80% of the panels in a radius of 500 meters to a fast food restaurant Maximize impacts on the 24-34 years old population In order to tackle these requirements, the problem is expressed as an optimization program. The requirements are defined either as objectives to maximize or constraints to obey. As media owners continuously simulate campaigns and expect a solution in a few seconds, speed is one of our main challenges.\nOptimization Program Objectives The objectives can either be explicit or implicit in nature.\nExplicit objectives are the quantities expressed by the media owners or their clients. 
For instance, the number of panels is a quantity that we try to match as closely as possible. These quantities are measurable, so we can easily assess the quality of the results. The implicit objectives on the other hand do not have quantifiable targets. For example, geographical spread is one of them. We try to spread the panels across a territory as much as possible. Constraints There are two types of constraints:\nSome are “soft” constraints, i.e. it is allowed to violate them to avoid infeasibility issues, and the media owner will evaluate the solution and decide whether or not to accept it. “Hard” constraints must be respected. For instance, in out-of-home advertising there are very strict prohibition rules. One example of this is no alcohol advertising next to a school. Another is that we cannot have two competing clients on the same panel (digital panels typically host up to 6 advertisers within the same 1-hour slot, on a rotational basis). These constraints are of higher priority than any objectives. The objectives might be degraded to fulfill them. At a glance, the program is composed of 23 sets, 46 parameters, 18 blocks of variables and 29 blocks of equations. The size of each problem depends on the inventory of the media owner and the number of campaigns to allocate.\nThe VIOOH Tech Stack The VIOOH platform is composed of microservices. The UI and the backend application are written in common languages (JavaScript \u0026amp; Java). The optimization program is encapsulated in a Python API. The API receives all the campaign data (duration, eligible inventory, etc.) from the front-end application in a JSON format. The data preparation and creation of the GDX files are performed using the GAMS Python API. The optimization program is written in a GAMS file.\nThe optimization solution is implemented as a GAMS model and communicates with the custom frontend Conclusion A simpler approach could have been to use a “greedy algorithm” to solve the allocation problem. This style of approach would mean that not every possible solution is evaluated but rather a seemingly good solution can be found by ranking panels against the objectives and then choosing them one by one according to that order. However, as the complexity of campaigns has increased over time and continues to do so, it is not possible for this style of algorithm to balance all of the objectives and still find quality solutions in all cases. Using an optimization program is a real game changer, and with it VIOOH is able to address very complex marketing needs in a timely manner.\nAbout VIOOH VIOOH is a leading global digital out of home (OOH) marketplace. Launched in 2018 and with headquarters in London, VIOOH’s platform connects buyers and sellers in a premium marketplace, making OOH easily accessible.\nWant to know more about VIOOH? Visit their website at https://www.viooh.com ","excerpt":"VIOOH is a leading global digital out of home (OOH) marketplace. Launched in 2018 and with headquarters in London, VIOOH’s platform connects buyers and sellers in a premium marketplace, making OOH easily accessible.","ref":"/stories/viooh/","title":"Allocating Out-of-Home Advertising Campaigns"},{"body":" Area: Agriculture, Dairy\nProblem class: LP / QP\nOptimal Efficient Price Discovery in Dairy Trading Events Introduction Fonterra Dairy Co-Operative Group Limited, the world\u0026rsquo;s leading dairy exporter, needed to improve price transparency, forward price information and price risk management for its customers.
This would also benefit all participants in the dairy industry.\nCRA International, Inc. d/b/a Charles River Associates (CRA) designed and developed the Global Dairy Trade (GDT) electronic trading platform. The trading mechanism is based on sound economics and auction theory, as well as CRA\u0026rsquo;s extensive experience in bidding mechanisms and market design. CRA has also been the independent trading manager for the trading events (starting with the first trading event in 2008), where sellers (including Fonterra) and buyers from around the world participate in the trading event auctions.\nTransaction volume and value have reached 10 million metric tons and US$34 billion respectively over more than 300 trading events to date.\nProblem Description Similar to oil refineries processing crude oil, dairy processing plants process fluid milk into intermediate products — for international trade, generally into powdered products. Examples of globally traded dairy products include whole milk powder, skim milk powder, and anhydrous milk fat.\nThis presents an optimization opportunity in the dynamic market for global dairy products: What are the optimal auction market design and pricing algorithm that will support an efficient price discovery process to reveal competitive equilibrium clearing prices reflective of current and expected future supply and demand conditions in the dairy industry?\nCRA designed and developed both the simultaneous multiple-round ascending-price clock auction for GDT as well as a sophisticated price adjustment algorithm that reflects the complexities, constraints, and opportunities in achieving competitive equilibrium prices.\nKey challenges include:\nAs many as 100 or more dairy products and spot and forward contracts are offered for trade simultaneously — because they are related as substitutes and/or complements — by several sellers to hundreds of buyers. Competitive equilibrium clearing prices need to be discovered within a fairly short period of time (2 to 2.5 hours) via a series of bidding rounds. CRA’s Solution In addition to the auction format and rules, CRA designed and developed a parameterized, sophisticated price adjustment algorithm that provides for an efficient and quick price discovery mechanism. The following diagram is an overview of the workflow.\nFig 1. CRA’s TSEM™: GAMS Workflow (MCG: Milk Components Group; PIOM: Price Increment Optimization; DSUB: Demand-side SUBstitutability) At the end of each bidding round, the algorithm takes as input various supply parameters (quantities, reserve prices, cost parameters) provided by each supplier (which reflect their processing capabilities, constraints, and supply-side substitutability opportunities). The algorithm also takes as input information about demand levels, preferences, and sensitivities (updated dynamically and endogenously during the auction via a learning process).\nThen GAMS solves a series of models. First, an optimal solution is found for the \u0026ldquo;Milk Components Group\u0026rdquo; model, and this result is used as input for the \u0026ldquo;Price Increment Optimization\u0026rdquo; model, for which an optimal solution is found, and this result is used as input for the \u0026ldquo;Demand-side Substitutability\u0026rdquo; model, for which an optimal solution is found. 
The result is a vector of price increments and prices to support the simultaneous multiple-round ascending-price clock auction process to achieve competitive equilibrium prices for all products and contracts simultaneously and quickly over a series of bidding rounds. The efficient price discovery mechanism supports, and is reflective of, each seller’s manufacturing capabilities and costs, and each buyer’s preferences over substitutable products (via price arbitrage) and complementary products (via preferred portfolios and combinations).\nCRA’s algorithm and the implemented GAMS model are generic, flexible, and scalable via a rich and robust set of parameters.\nParticipation in GDT trading event auctions has grown from one seller and a handful of products and contracts to several sellers and one hundred or more products and contracts. The algorithm and GAMS model continue to perform exceptionally well as GDT trading events have grown.\nCRA’s algorithm is a component of CRA’s trading platform, Trading System for Efficient Markets (TSEM™). In addition to its use for CRA’s GDT trading event auctions, TSEM™ also is applicable for other CRA clients and industries. The following is an overview of how CRA’s algorithm and models are integrated into the IT ecosystem of CRA’s TSEM™ trading platform.\nFig 2. CRA’s TSEM™: Integration of GAMS in IT Ecosystem The following chart shows how the price increment algorithm solved by the GAMS models determines optimal price adjustments across bidding rounds to converge smoothly to supply-demand equilibrium, discovering competitive market clearing prices.\nFig 3. CRA’s TSEM™: Fast Convergence to a Competitive Supply-Demand Equilibrium. As prices (solid lines) increase, demand (height of vertical bars) falls, until a bidding round is reached (in this case, round 8) when demand (height of vertical bars) falls within the supply range (dashed horizontal lines) – i.e., convergence to competitive supply-demand equilibrium. Different colors refer to different product specs (grades) within the same product group or sales group – there are constraints and optimization at multiple, interdependent levels of supply simultaneously given the supply-demand structure. This chart shows only a small subset of the approx 100 different products.\nAbout CRA and its Auctions \u0026amp; Competitive Bidding Practice CRA International, Inc. d/b/a Charles River Associates is a global consulting firm specializing in economics, financial, regulatory, litigation, and management consulting. CRA guides corporations through critical business strategy and performance-related issues. Since 1965, clients have engaged CRA for its unique combination of functional expertise and industry knowledge, and for its objective solutions to complex problems. Headquartered in Boston, CRA has offices throughout the world. Detailed information about Charles River Associates, a registered trade name of CRA International, Inc., is available at www.crai.com . Follow us on LinkedIn, Twitter, and Facebook.\nCRA’s Auctions \u0026amp; Competitive Bidding Practice offers businesses, governments, bidders, and other market participants extensive experience in auction and market design, implementation, monitoring, and participation. More information about CRA’s Auctions \u0026amp; Competitive Bidding Practice is available at www.auctions.crai.com .\n","excerpt":"Global Dairy Trade trading event auctions are the leading global online marketplace for trading large volume dairy ingredients and for reference price discovery. 
The GDT Trading Events and GDT Pulse Auctions are powered by CRA’s Trading System for Efficient Markets (TSEM™) platform and rely on a series of GAMS models.","ref":"/stories/cra/","title":"Optimal Efficient Price Discovery in Dairy Trading Events"},{"body":"Our long-time partner Dr. Nick Sahinidis, CEO of The Optimization Firm and developer of the well-known BARON Optimization Solver, has been elected to the National Academy of Engineering for \u0026ldquo;his contributions to global optimization and the development of widely used software for optimization and machine learning.\u0026rdquo;\nMany GAMS users have benefited from his work on BARON over the years, and we would like to congratulate Dr. Sahinidis for this well-deserved recognition!\n","excerpt":"We congratulate the CEO of The Optimization Firm and developer of the BARON solver.","ref":"/blog/2022/02/nick-sahinidis-elected-to-national-academy-of-engineering/","title":"Nick Sahinidis Elected to National Academy of Engineering"},{"body":"Last year we released our model deployment solution GAMS Engine One, which allows scheduling and running jobs on a central compute server via a REST API.\nEngine One is the perfect add-on for a machine-based GAMS license. It makes development and integration of complex modeling scenarios into IT environments much easier than before, and helps reduce development time and cost.\nIn the meantime, our developers took it up a notch, and we are very excited to present GAMS Engine SaaS, which transfers the principles and existing benefits of GAMS Engine One to the cloud. Based on Kubernetes, Engine SaaS makes use of the compute infrastructure offered by Amazon Web Services (AWS). As a consequence, Engine SaaS has some major added benefits:\nHorizontal Auto Scaling Each GAMS job on Engine SaaS runs on its own virtual EC2 instance, or node in Kubernetes speak. In practical terms, AWS is able to provide an unlimited number of EC2 instances, so if you need to run one job, or 100 jobs in parallel, the infrastructure will scale automatically and without any complex configuration required on your part.\nInstance Sizing In addition to horizontal scaling, EC2 offers a wide range of instance types, both in terms of memory and CPU. We can therefore offer instances with as little as 16GB, up to a huge 4TB, and many sizes in between. The user can select a different instance type for each job, if that is required. We currently use the AWS z1d type for our smaller instances (up to 192GB), and the x1e types for everything above, up to 4TB.\nDepending on your requirements, we can also add different instance types, for example if memory is not so important, but the number of cores is.\nZero Maintenance and High Reliability We take care of operating the infrastructure for our customers, including keeping the software up to date. Our customers can focus on their end-user application, and the optimization part \u0026ldquo;just works\u0026rdquo;, backed by our expertise and by the AWS infrastructure with its high reliability.\nSimplified License Handling IT admins have their own login to Engine SaaS, and can add an unlimited number of individual users. Each of those users will automatically inherit the same license the admin owns, so there is only one license to be aware of, and nothing the user needs to install.\nA simple web user interface The Engine SaaS web UI allows you to manage users, groups, namespaces, and jobs in a simple and intuitive way.
This is great for getting started and getting a quick overview of what is happening in your account. You can also do all of those things directly using the Engine API, some of which I will demonstrate further down.\nPricing Considerations If you have any questions regarding GAMS Engine, don\u0026rsquo;t hesitate to contact us at sales@gams.com. We are happy to schedule a demo for you and discuss your individual use case.\nAmong other things, we will then discuss your requirements in terms of compute hours and instance sizes:\nWhat are the typical memory requirements for your GAMS jobs? How many hours each year do you expect to run those jobs? Which solvers will you need? With this information we will create a tailor-made package offer for you. This offer will include an hourly quota on your preferred instance size, be it 32GB or 4TB. You will always have the option to run jobs on other instance sizes on a case-by-case basis, giving you total flexibility.\nThe only thing you need to know is that there is a multiplier attached to each instance size, starting with a factor of 1 for the smallest instance, and then going up to higher values with larger instances. The multiplier determines how fast you burn through your quota relative to \u0026ldquo;wall time\u0026rdquo;. The exact multipliers will depend on the solvers you choose to license, and the total annual size of your quota.\nA Simple Example in Python I will use a contrived example to show the bare minimum of what you can do with GAMS Engine, in order to give you an idea of how straightforward it is to use (if you know some Python and understand the requests library). You will see that it takes only a few lines of code to get started!\nAssume we have a user called john_doe registered on Engine SaaS. This user has access to the namespace tests, and to two different instance sizes with 16GB and 32GB labelled \u0026lsquo;GAMS_z1d.large_A\u0026rsquo; and \u0026lsquo;GAMS_z1d.xlarge_A\u0026rsquo;, respectively.\nWe will use the requests library and a Jupyter Notebook to explore the REST API, and submit a simple GAMS job and fetch the results.\nAuthentication First we need to import the requests library, and take care of authenticating our user:\nimport requests from requests.auth import HTTPBasicAuth import time au = HTTPBasicAuth(\u0026#34;john_doe\u0026#34;,\u0026#34;some_password\u0026#34;) url = \u0026#34;https://engine.gams.com/api\u0026#34; You will have received your username and password in an email from our sales team after signing up for Engine SaaS.\nMaking the First Request We can then query the API to get information about the instances our user has access to. The results are available in JSON format. The value for cpu_request corresponds to the number of vCPUs available in each instance, and is slightly lower than the nominal value you will find on the AWS homepage. This is because the Engine software stack (in particular Kubernetes) requires some resources to work properly. The same is true for the memory_request values, which correspond to the available memory and also reserve a small proportion for Kubernetes. The workspace_request values show the amount of disk space (50 GB) available for each job.
Finally, multiplier is the factor that determines how fast an instance type will consume the quota compared to wall time.\nr = requests.get(url + \u0026#39;/usage/instances/john_doe\u0026#39;, auth=au) r.json() {'instances_inherited_from': 'john_doe', 'default_inherited_from': 'john_doe', 'instances_available': [{'label': 'GAMS_z1d.large_A', 'cpu_request': 1.8, 'memory_request': 15070, 'workspace_request': 50000, 'node_selectors': [{'key': 'gams.com/instanceType', 'value': 'z1d.large'}], 'tolerations': [], 'multiplier': 1.0}, {'label': 'GAMS_z1d.xlarge_A', 'cpu_request': 3.8, 'memory_request': 30710, 'workspace_request': 50000, 'node_selectors': [{'key': 'gams.com/instanceType', 'value': 'z1d.xlarge'}], 'tolerations': [], 'multiplier': 1.1}], 'default_instance': {'label': 'GAMS_z1d.large_A', 'cpu_request': 1.8, 'memory_request': 15070, 'workspace_request': 50000, 'node_selectors': [{'key': 'gams.com/instanceType', 'value': 'z1d.large'}], 'tolerations': [], 'multiplier': 1.0}} Submitting a Job Let’s move on and submit a GAMS job to Engine. We will use the trnsport model, which we have copied into the current directory, and which will need to be zipped before we can submit it. The zipped file is then used in a POST request to the API. Also, we will tell Engine to use the GAMS_z1d.large_A instance type.\nThe response contains the job token, which we need to identify our job.\nfrom zipfile import ZipFile with ZipFile(\u0026#39;model.zip\u0026#39;,\u0026#39;w\u0026#39;) as zip: zip.write(\u0026#39;trnsport.gms\u0026#39;) query_params = { \u0026#39;model\u0026#39;: \u0026#39;trnsport\u0026#39;, \u0026#39;namespace\u0026#39;: \u0026#39;tests\u0026#39;, \u0026#39;labels\u0026#39;: \u0026#39;instance=GAMS_z1d.large_A\u0026#39; } # Create dict with model zip file job_files = {\u0026#39;model_data\u0026#39;: open(\u0026#39;model.zip\u0026#39;,\u0026#39;rb\u0026#39;)} r = requests.post(url + \u0026#39;/jobs/\u0026#39;, params=query_params, files=job_files, auth=au) token = r.json()[\u0026#39;token\u0026#39;] When following along with this example, please make sure to adjust the value for namespace to the one that was sent to you via email by our sales team together with your user information. If you have been invited to Engine SaaS by someone else in your organization, you will have to request this information from them.\nThe job runs asynchronously in the background, and we could now add more jobs or do other things.\nGetting Job Results We have to give Engine SaaS approximately 2 minutes, which is the time it takes to spin up a fresh EC2 instance for our job.
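Instead of simply waiting a fixed amount of time, you can also poll the job endpoint until the result is ready. Below is a minimal sketch of such a loop; it reuses the url, token and au objects defined above and assumes that the same job query shown further down also returns a result_exists field of False while the job is still running:

import time   # already imported above

# Hypothetical polling loop: ask Engine for the job status every 15 seconds
# until it reports that a result is available.
while True:
    r = requests.get(url + '/jobs/' + token, auth=au)
    if r.json().get('result_exists'):
        break
    time.sleep(15)   # be gentle with the API

Either way, plan for the roughly two-minute spin-up delay before the very first job starts running.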
This delay applies only to the first job in a row of successive jobs, because freshly vacated instances will be re-used and will be available immediately.\nLet’s now check the status of our simple job by sending a GET request.\nr = requests.get(url + \u0026#39;/jobs/\u0026#39; + token, auth=au) r.json() {'token': '842b85cd-d7e2-42dc-8268-cc25ed3d66ce', 'model': 'trnsport', 'is_temporary_model': True, 'is_data_provided': False, 'status': 10, 'process_status': 0, 'stdout_filename': 'log_stdout.txt', 'namespace': 'tests', 'stream_entries': [], 'arguments': [], 'submitted_at': '2022-01-13T15:46:32.749866+00:00', 'finished_at': '2022-01-13T15:48:39.520392+00:00', 'user': {'username': 'john_doe', 'deleted': False, 'old_username': None}, 'text_entries': [], 'dep_tokens': [], 'labels': {'cpu_request': 1.8, 'memory_request': 15070, 'workspace_request': 50000, 'tolerations': [], 'node_selectors': [{'key': 'gams.com/instanceType', 'value': 'z1d.large'}]}, 'result_exists': True} Amongst other information, we can see that the result_exists field reports True, and that the process_status field reports a value of zero, which means the job finished successfully. We can now download the results in the form of a zip file, by sending another GET request. The content field of the return object contains the raw byte string representing the zip file, so we can just write the field to disk \u0026lsquo;as is\u0026rsquo;.\nr = requests.get(url + \u0026#39;/jobs/\u0026#39; + token + \u0026#39;/result\u0026#39;, auth=au) file = open(\u0026#39;results.zip\u0026#39;,\u0026#39;wb\u0026#39;) file.write(r.content) file.close() By default, the zip file contains the GAMS log for the run, a copy of the model file, and the lst file. Here is a section of the log that shows we did indeed successfully solve the model on GAMS Engine:\nIteration Dual Objective In Variable Out Variable 1 73.125000 x(seattle,new-york) demand(new-york) slack 2 119.025000 x(seattle,chicago) demand(chicago) slack 3 153.675000 x(san-diego,topeka) demand(topeka) slack 4 153.675000 x(san-diego,new-york) supply(seattle) slack --- LP status (1): optimal. --- Cplex Time: 0.11sec (det. 0.01 ticks) Optimal solution found Objective: 153.675000 Some Remarks on Security Our development team takes software security seriously, especially given the fact that GAMS Engine runs in the cloud. This is why we frequently update the underlying software components to include bug-fixes as soon as they come out.\nBelow is a high-level summary of some of the design choices that will keep your data safe.\nGeneral best practices for web applications First up, of course we make use of industry-standard encryption methods. This means that all data transported over the internet is TLS encrypted. Your user credentials are stored salted and hashed in the Engine user database (we use the PBKDF2 algorithm for that). Data is AES-256 encrypted at rest in our database, and backups are encrypted as well. Before releasing any updates to GAMS Engine, the code changes are peer reviewed by our team, and additionally scanned for known vulnerabilities and tested in our CI pipeline.\nEngine-specific considerations We use AWS EKS, which is a production-grade Kubernetes cluster that is professionally managed by AWS, to run our Engine infrastructure. This ensures that data from our different users is properly encapsulated, and user A cannot see under any circumstances what user B is doing.
Each job that is scheduled on GAMS Engine is run in a new, isolated, containerized environment, which has never been used before to solve any model. You can however specifically instruct Engine to make job B dependent on job A, in which case the data from job A will be accessible to job B.\nOn top of that, our namespace structure allows you to choose who can access which model, and what they can do with the model. We follow the standard permission system present on unix operating systems (read, write, execute).\nWith Quotas, you can limit usage of individual users in your organization to prevent people from burning through your compute volume too quickly. Quotas can be set for the number of simultaneous jobs started, for the number of hours available to a user, for the types of instances a user is allowed to use, and also for the amount of storage available.\nIf this overview has piqued your interest, do not hesitate to contact us. We are happy to discuss your requirements and give you more information in a personal call.\n","excerpt":"We are excited to present GAMS Engine SaaS, our new cloud based model scheduling system with fantastic scalability. This article gives an overview.","ref":"/blog/2022/01/introducing-engine-saas/","title":"Introducing Engine SaaS"},{"body":"The Global Trade Analysis Project (GTAP) , housed at the Department of Agricultural Economics at Purdue University, is an organization with the aim of lowering the cost of entry for conducting quantitative analyses of international economic issues in an economy-wide framework.\nApart from publishing a multi-region, applied general equilibrium model (the \u0026ldquo;standard model\u0026rdquo;, implemented with GEMPACK) 1, GTAP also offers a database of economic data. This database combines detailed bilateral trade, transport and protection data with individual country \u0026ldquo;input-output\u0026rdquo; databases to account for economic linkages among regions and inter-sectoral linkages within regions.\nThere are many modeling efforts that make use of the GTAP database, and many of these models are implemented in GAMS. 
The table below gives an overview of some of these models:\nModel | Developing Organization | Model Focus\nLinkage | The World Bank | Global Trade Policy Analysis\nEnvisage | The World Bank | Economics of Climate Change\nMirage | CEPII | Trade Policy Analysis\nEPPA | MIT | Climate and Environmental Impact Projections\nGLOBE | CGEMOD | Multiple Submodels to Analyse Labour Markets, Migration, and Energy\nEnv-Linkages | OECD | Linking Economic Activity to Greenhouse Gases\nAIM | The AIM Program | Asia Pacific Regional Model of Climate Change Impact and Greenhouse Gas Emissions\nGTAPinGAMS | Thomas Rutherford, University of Wisconsin | Multiregional and Small Open Economy Models using the GTAP Data Base\nThe Standard GTAP Model in GAMS | Center for Global Trade Analysis | Translation of the GTAP Model V7 in GAMS\nUntil recently, users of those models had to convert the GTAP database files in a slightly complicated process outlined here: https://www.youtube.com/watch?v=Raok9keYFg4 .\nBut due to the heavy use of GAMS in the field of economic modeling, GTAP has decided to now also make their database available in GDX format, which allows direct use with GAMS.\nWe would like to thank the center for adding this option and making life a bit easier for GAMS users.\nGEMPACK is developed at the Centre of Policy Studies (CoPS) at Victoria University\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","excerpt":"The database provided by the Global Trade Analysis Project (GTAP) is now also available in GDX format.","ref":"/blog/2021/12/gtap-database-in-gdx-format/","title":"GTAP Database in GDX format"},{"body":"","excerpt":"","ref":"/categories/modeling-news/","title":"Modeling News"},{"body":"","excerpt":"","ref":"/categories/announcement/","title":"Announcement"},{"body":"On December 09, a vulnerability in Apache Log4j (a logging tool used in many Java-based applications) was disclosed, which could allow remote unauthenticated attackers to execute code on vulnerable systems. The vulnerability is tracked as CVE-2021-44228, and is also known as \u0026ldquo;Log4Shell\u0026rdquo;.\nWhen the announcement was made public late last week, we - like everyone else - were quite alarmed. Our web server logs first started showing signs of automated scans for the vulnerability on Dec 10, just one day after the disclosure of CVE-2021-44228. Over the last few days we have conducted a thorough review of our entire codebase and all tools used for our work, and we can report that to the best of our knowledge the vulnerability does not affect us, or any of the products we ship.\nThe GAMS Java API, both expert-level and OO-level, does not use Log4j\nOur internal Jenkins server does use Java, but is unaffected (see this statement ).\nThe Apache Solr Search Engine we use on our website has been configured to not use the JndiLookup.class from the Log4j package.\nOur Engine SaaS service is not affected ","excerpt":"The recent Log4j security vulnerability \u0026lsquo;Log4Shell\u0026rsquo; does not affect any of our products or services.","ref":"/blog/2021/12/statement-regarding-log4j-security-alert/","title":"Statement regarding Log4j security alert"},{"body":"A New REST API for MIRO Server GAMS MIRO is a graphical interface to your GAMS models. Consequently, it is mainly intended for interactive use. With GAMS MIRO Server (introduced with MIRO 2.0 ), we have added a powerful collaboration platform to the MIRO universe. Scenario data and apps can be shared among users and user groups.
Similar to MIRO Desktop, administrators can add new MIRO apps or update existing ones via a graphical admin interface.\nBut what if you wanted to automate things and deploy your MIRO applications to MIRO Server as part of your CI/CD pipeline? What if you could push new forecast data to MIRO Server as it becomes available in your forecasting software? Or how about importing your optimization results directly into your BI system? With MIRO 2.2 all this becomes possible thanks to an all-new REST API!\nAll operations that create, read, update or delete MIRO apps as well as MIRO scenarios are supported. This way MIRO Server can be easily integrated into your existing infrastructure and interact with upstream and downstream systems. Want to learn more? Read our (technical) API documentation .\nBetter Support of Mobile Devices Another very visible improvement is that the display of MIRO apps on small screens has been significantly improved. While using older versions of MIRO on a smartphone was possible but not very pleasant, you can now use your apps quite comfortably from a smartphone or tablet. You can trigger your GAMS jobs, analyze data, compare scenarios, or simply present your results to others while on the go. Anytime, always available.\nIn addition, you can add a so-called web manifest to an app. This allows MIRO to be used as a progressive web app on a mobile device. In this mode, MIRO will behave almost like any other regular app on your device.\nThe screen recording below shows what this looks like1: This update enables MIRO users to benefit even more from the advantages of cloud-based applications. However, these are just a few of the many new features in MIRO 2.2. For a complete list, see the MIRO 2.2 Release Notes . Give it a try!\n1: Mobile device used: iPhone 8\n","excerpt":"The new MIRO 2.2.0 comes with a number of features and improvements that allow users to take even more advantage of cloud-based applications.","ref":"/blog/2021/12/miro-server-deserves-a-rest/","title":"MIRO Server Deserves a REST"},{"body":"After the pandemic forced GAMS to cancel the Christmas celebration in 2020, the whole team was happy to come back together in person in 2021. The GAMS offices are geographically distributed between the US and Germany, but this time it was possible to welcome Steve Dirkse in Braunschweig. Again we had a great time cooking together at Henk Mulder\u0026rsquo;s cooking school - working together, but in the kitchen and not in the office. Even though not everyone could attend, fortunately everything worked out well, and it was a pleasure to see how the GAMS team has grown over the years.\nHave a look at the pictures.\nHappy Christmas from the GAMS teams!\n","excerpt":"GAMS had an early Christmas celebration in November. The whole team was happy to attend in person and it was even possible to have Steve from the US at our party in Braunschweig.","ref":"/blog/2021/11/the-2021-gams-software-christmas-celebration/","title":"The 2021 GAMS Software Christmas Celebration"},{"body":"","excerpt":"","ref":"/authors/achristensen/","title":"Adam Christensen"},{"body":"The object-oriented APIs that come with every GAMS installation are a great way to seamlessly integrate GAMS modeling into existing applications and IT environments. You can choose from .NET, C++, Java, Python, and Matlab.
The last two from this list are particularly popular with those who need to analyse data in an exploratory and interactive fashion.\nIn the Python community, Pandas is a commonly used package that allows convenient storing and manipulation of data, with advanced operations for indexing and slicing, reshaping, merging and visualization of data.\nIn Matlab, the built-in matrix, table and struct formats are the commonly used data structures to manipulate data.\nFor both Python and Matlab, the existing GAMS APIs are very powerful and feature-complete, but working interactively with GAMS data can be tedious. We have therefore started a new project called GAMS Transfer (part of GAMS 37), with the aim of creating an API dedicated to data exchange between GAMS and other languages, starting with Python and Matlab.\nIn the GAMS Transfer project we focus on several key points:\nSpeed: Performance is critical for large datasets Convenience: The API must be intuitive to use and use environment-specific data formats Consistency: Use of analogous syntax across different environments Our team first presented GAMS Transfer to the public at the 2021 INFORMS Annual Meeting.\nA key element of GAMS Transfer is the concept of a container, which is the repository that holds all data. Data within this container is linked together, which enables data operations like implicit set growth, domain checking, data format transformations (to dense/sparse matrix formats), etc. Those concepts are explained in more detail in the documentation for Python and for Matlab .\nBelow we will use a simple example to demonstrate how the GAMS Transfer API integrates seamlessly with Python. We will\nwrite some data to a GDX file with GAMS Transfer, start a GAMS job that will use the created GDX file (using the traditional GAMS Python API), and then read the results back into a Python dataframe with GAMS Transfer and plot the data on a map. The point here is not to explore all aspects of GAMS Transfer, but instead highlight how easy it is to get started.\nAn Example Using the TRANSPORT Model This simple example is based on the TRANSPORT model from our model library. For the example, we modify the model to load the Set and Parameter data from the GDX file we will produce with GAMS Transfer:\nGAMS Model\n$eolCom # $gdxIn input_data.gdx # Open the GAMS Transfer GDX file for input Set i \u0026#39;canning plants\u0026#39; j \u0026#39;markets\u0026#39;; $load i j # Load set members from GDX Parameter a(i) \u0026#39;capacity of plant i in cases\u0026#39; b(j) \u0026#39;demand at market j in cases\u0026#39;; $load a b # Load Parameter data from GDX Table d(i,j) \u0026#39;distance in thousands of miles\u0026#39;; $load d # Load distances from GDX Scalar f \u0026#39;freight in dollars per case per thousand miles\u0026#39;; $load f # Load cost scalar Parameter c(i,j) \u0026#39;transport cost in thousands of dollars per case\u0026#39;; c(i,j) = f*d(i,j)/1000; # This calculation uses the loaded GDX data!
$gdxin # close the gdx file # The rest of the model does not need to be modified Variable x(i,j) \u0026#39;shipment quantities in cases\u0026#39; z \u0026#39;total transportation costs in thousands of dollars\u0026#39;; Positive Variable x; Equation cost \u0026#39;define objective function\u0026#39; supply(i) \u0026#39;observe supply limit at plant i\u0026#39; demand(j) \u0026#39;satisfy demand at market j\u0026#39;; cost.. z =e= sum((i,j), c(i,j)*x(i,j)); supply(i).. sum(j, x(i,j)) =l= a(i); demand(j).. sum(i, x(i,j)) =g= b(j); Model transport / all /; solve transport using lp minimizing z; Now let\u0026rsquo;s get to using GAMS Transfer with Python. First, we need to import a few packages. Apart from GAMS Transfer itself we will use Pandas dataframes in the example. Also, we have a couple of helper functions (get_locations, calculate_distances) that will calculate distances between cities. The listing of geo.py will be included at the bottom of this post.\nimport gamstransfer as gt import pandas as pd import os from geo import get_locations, calculate_distances working_dir = os.getcwd() model_name = \u0026#34;trnsport_gamsxfer_gdx\u0026#34; # The name of the GAMS model file Geographical locations are retrieved for a list of production plant cities, and for a list of market cities:\nplants = [\u0026#39;seattle\u0026#39;,\u0026#39;san-diego\u0026#39;] markets = [\u0026#39;new-york\u0026#39;,\u0026#39;chicago\u0026#39;,\u0026#39;topeka\u0026#39;,\u0026#39;denver\u0026#39;] plant_locations = get_locations(plants) market_locations = get_locations(markets) The distances between plants and markets are calculated and used to populate a Pandas dataframe\ndistances = pd.DataFrame(data=calculate_distances(plant_locations,market_locations), columns = [\u0026#39;from\u0026#39;, \u0026#39;to\u0026#39;, \u0026#39;distance (1000 mi)\u0026#39;]) distances from to distance (1000 mi) 0 seattle new-york 2.408121 1 seattle chicago 1.737659 2 seattle topeka 1.457841 3 seattle denver 1.021329 4 san-diego new-york 2.432916 5 san-diego chicago 1.734903 6 san-diego topeka 1.278299 7 san-diego denver 0.833715 We also have to add production capacity (cap) of each plant, and demand (dem) for each market:\ncap = pd.DataFrame([(\u0026#39;seattle\u0026#39;,650),(\u0026#39;san-diego\u0026#39;,800)], columns = [\u0026#39;Plant\u0026#39;,\u0026#39;Num Cases\u0026#39;]) cap Plant Num Cases 0 seattle 650 1 san-diego 800 dem = pd.DataFrame([(\u0026#39;new-york\u0026#39;, 325),(\u0026#39;chicago\u0026#39;, 300),(\u0026#39;topeka\u0026#39;, 275),(\u0026#39;denver\u0026#39;,400)], columns = [\u0026#39;Market\u0026#39;,\u0026#39;Num Cases\u0026#39;]) dem Market Num Cases 0 new-york 325 1 chicago 300 2 topeka 275 3 denver 400 Now we can see the beauty of gamstransfer in action. We add the sets and parameters to a gamstransfer “container”, using the same symbol names present in our GAMS model. Note that the records for each symbol are populated using the lists and dataframes we defined above. This feature makes working with gamstransfer feel very natural in Python (the same applies to Matlab). 
As the final step, we write the container to disk as a GDX file.\nm = gt.Container() i = m.addSet(\u0026#39;i\u0026#39;, records = plants, description = \u0026#39;Plants\u0026#39;) j = m.addSet(\u0026#39;j\u0026#39;, records = markets, description = \u0026#39;Markets\u0026#39;) a = m.addParameter(\u0026#39;a\u0026#39;, domain = i, records = cap, description = \u0026#39;Capacity\u0026#39;) b = m.addParameter(\u0026#39;b\u0026#39;, domain = j, records = dem, description = \u0026#39;Demand\u0026#39;) d = m.addParameter(\u0026#39;d\u0026#39;, domain= [i,j], records = distances) f = m.addParameter(\u0026#39;f\u0026#39;, records = 90, description = \u0026#39;Transport cost k$ / case\u0026#39;) m.write(os.path.join(working_dir,\u0026#39;input_data.gdx\u0026#39;)) We can now run the GAMS model, using the GDX file we just produced as an input. Since gamstransfer is a pure data API, we must use the standard GAMS Python API to run the model. The model results are saved as model_name.gdx.\n# Use the GAMS Python API from gams import * # Create a GAMS workspace workspace = GamsWorkspace(debug=DebugLevel.Verbose, working_directory=working_dir) # Run our model job = workspace.add_job_from_file(os.path.join(working_dir,model_name + \u0026#39;.gms\u0026#39;)) job.run() # Save GDX file job.out_db.export(os.path.join(working_dir,model_name + \u0026#39;.gdx\u0026#39;)) The shortened GAMS log output shows that we have found an optimal solution to our problem:\n[...] Iteration Dual Objective In Variable Out Variable 1 30.013745 x(san-diego,denver) demand(denver) slack 2 100.451280 x(seattle,new-york) demand(new-york) slack 3 147.293655 x(san-diego,chicago) demand(chicago) slack 4 178.931551 x(san-diego,topeka) demand(topeka) slack 5 178.974961 x(seattle,chicago)supply(san-diego) slack --- LP status (1): optimal. --- Cplex Time: 0.01sec (det. 0.01 ticks) Optimal solution found Objective: 178.974961 [...] We can now load the GDX file containing the output data:\nresults = gt.Container(os.path.join(working_dir, model_name + \u0026#34;.gdx\u0026#34;)) We are interested in the variable x, which contains the quantities to ship from each production plant to each market. The records are returned as a pandas dataframe, so we can start working with them straight away. Note that we use a \u0026ldquo;deep copy\u0026rdquo; of the dataframe, because we will make some small modifications to the structure further down. Without deep copy, x would be a “live” reference to the data inside the container, and modifications of the data would invalidate the container.\nx = results.data[\u0026#39;x\u0026#39;].records.copy(deep=True) x Plant Market level marginal lower upper scale 0 seattle new-york 325.0 0.000000 0.0 inf 1.0 1 seattle chicago 175.0 0.000000 0.0 inf 1.0 2 seattle topeka 0.0 0.015911 0.0 inf 1.0 3 seattle denver 0.0 0.016637 0.0 inf 1.0 4 san-diego new-york 0.0 0.002480 0.0 inf 1.0 5 san-diego chicago 125.0 0.000000 0.0 inf 1.0 6 san-diego topeka 275.0 0.000000 0.0 inf 1.0 7 san-diego denver 400.0 0.000000 0.0 inf 1.0 We will rename the i_0 and j_1 columns to something more friendly.\nx.rename(columns = {\u0026#39;i_0\u0026#39;: \u0026#39;Plant\u0026#39;,\u0026#39;j_1\u0026#39;:\u0026#39;Market\u0026#39;}, inplace=True) Now we have all the data we need in Python. We can now go ahead and analyse the data in any way we like, using the huge range of available Python packages. 
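As a quick plausibility check before plotting, we can aggregate the shipped quantities per plant with plain pandas and compare them to the capacities. This is just a small illustrative snippet; it reuses the x and cap dataframes defined above:

# Total cases shipped from each plant, compared with the plant capacity
shipped = x.groupby('Plant', as_index=False)['level'].sum()
shipped = shipped.merge(cap, on='Plant')
shipped['unused capacity'] = shipped['Num Cases'] - shipped['level']
print(shipped)

The unused capacity should never be negative, since the supply constraints limit the total shipments from each plant to its capacity.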
Below, we use Cartopy to plot amounts shipped between plants and markets, with thicker lines denoting a larger amount of goods to transport.\nimport matplotlib.pyplot as plt import cartopy.crs as ccrs import cartopy.feature as cfeature fig = plt.figure(figsize=(15, 10)) ax = fig.add_subplot(1, 1, 1, projection=ccrs.Robinson()) ax.coastlines() ax.set_extent([-125, -66.5, 20, 50], crs=ccrs.Geodetic()) ax.add_feature(cfeature.LAND) ax.add_feature(cfeature.OCEAN) ax.add_feature(cfeature.STATES) for index, row in x.iterrows(): p_loc = list(plant_locations[row.Plant]) m_loc = list(market_locations[row.Market]) w = row.level / 50 ax.plot([p_loc[1],m_loc[1]],[p_loc[0],m_loc[0]], transform=ccrs.PlateCarree(), linewidth=w) ax.plot(p_loc[1], p_loc[0], marker=\u0026#39;o\u0026#39;, color=\u0026#39;red\u0026#39;, markersize=12, transform=ccrs.PlateCarree()) ax.plot(m_loc[1], m_loc[0], marker=\u0026#39;o\u0026#39;, color=\u0026#39;red\u0026#39;, markersize=12, transform=ccrs.PlateCarree()) ax.text(p_loc[1] -2, p_loc[0] - 2, row.Plant, transform=ccrs.Geodetic(), bbox=dict(facecolor=\u0026#39;sandybrown\u0026#39;, boxstyle=\u0026#39;round\u0026#39;)) ax.text(m_loc[1] +1, m_loc[0] + 1, row.Market, transform=ccrs.Geodetic(), bbox=dict(facecolor=\u0026#39;#60b0f4\u0026#39;, boxstyle=\u0026#39;round\u0026#39;)) plt.show() Below is the listing of the geo.py module with the helper functions that calculate distances between cities.\nfrom geopy.geocoders import Nominatim from geopy.distance import geodesic import time def get_locations(cities): \u0026#39;\u0026#39;\u0026#39;Retrieve geo location from OpenStreetMap data\u0026#39;\u0026#39;\u0026#39; # Create a new client to resolve addresses to locations geo = Nominatim(user_agent=\u0026#34;gamstransfer_example\u0026#34;) locations = {} for city in cities: time.sleep(1) # Limit the number of requests to the server loc = geo.geocode(city) locations[city] = (loc.latitude, loc.longitude) return locations def calculate_distances(sources,destinations): \u0026#39;\u0026#39;\u0026#39; Calculate the distances for all city pairs\u0026#39;\u0026#39;\u0026#39; distances = [] for source,sourceLoc in sources.items(): for dest, destLoc in destinations.items(): distances.append((source,dest,0.001 * geodesic((sourceLoc[0],sourceLoc[1]),(destLoc[0],destLoc[1])).miles)) return distances ","excerpt":"Our new data-centric GAMS Transfer API allows convenient exchange of large data sets between GAMS and Matlab or Python, and uses the native data structures of those platforms. This Python example shows how easy it is to use GAMS Transfer.","ref":"/blog/2021/11/introducing-gams-transfer/","title":"Introducing: GAMS Transfer"},{"body":"","excerpt":"","ref":"/authors/rkuhlmann/","title":"Renke Kuhlmann"},{"body":"","excerpt":"","ref":"/authors/gfranco/","title":"George Franco"},{"body":"This year, George, Adam, and Steve traveled to Anaheim for the INFORMS Annual Meeting 2021. This was the first INFORMS conference held in person since the COVID-19 pandemic began. Steve and Adam gave numerous talks and tech tutorials throughout the week on topics such as GAMS Transfer and GAMS Engine.\nWhile attendance was lower than in years past, there was a steady stream of both existing and potential users who spoke with us at the GAMS booth in the exhibit hall. We greatly enjoyed meeting new acquaintances and connecting with old friends at the event. As always the conference featured a wide variety of interesting talks and workshops. 
We are already looking forward to attending INFORMS next year!

Our Technical Tutorial
Turning Models Into Applications – GAMS Engine and GAMS Transfer
Presented by: Adam Christensen & Steven Dirkse

The right tools help you deploy your GAMS model and maximize the impact of your decision support application. GAMS Engine is a powerful tool for solving GAMS models, either on-prem or in the cloud. Engine acts as a broker between applications or users that have GAMS models to solve and the computational resources used for this task. Central to Engine is a modern REST API that provides an interface to a scalable Kubernetes-based system of services, providing API, database, queue, and a configurable number of GAMS workers. GAMS Transfer is an API (available in Python, Matlab, and soon R) that makes moving data between GAMS and your computational environment fast and easy. By leveraging open source data science tools such as Pandas/Numpy, GAMS Transfer is able to take advantage of a suite of useful (and platform-independent) I/O tools to deposit data into GDX or withdraw GDX results to a number of data endpoints (e.g., visualizations, databases, etc.).

In case you would like to get a more in-depth view, just check the playlist on our YouTube channel, where you can find the on-demand videos of this talk! Below you can find the slides of our technical tutorial:

GAMS_Engine_INFORMS2021.pdf (535067 bytes)
GAMS_Transfer_INFORMS2021.pdf (371337 bytes)
","excerpt":"GAMS was happy to attend the first in-person INFORMS conference since COVID-19 pandemic. GAMS held a technical tutorial about how to turn your model into applications while using GAMS Engine.","ref":"/blog/2021/11/informs-annual-meeting-in-anaheim/","title":"INFORMS Annual Meeting in Anaheim"},{"body":" We would like to thank Dr. Evangelos Panos from the Paul Scherrer Institute, Switzerland for providing the example GAMS model described below, and for his contributions to this article.

Background
With GAMS 37, we now support the generation of model instances with more than 2³¹ non-zeros. Only a handful of our customers use models of this size, but one of the consequences of Big Data is Big Models. We therefore expect massive models to occur more often in the future, and perhaps even become commonplace in some application areas. If you are brave enough, you can test the old limit for yourself by running the following simple GAMS code on a machine with a lot of RAM (we suggest around 200GB):

set i / 1*46341 /;
alias (i,j);
variable x(j), z;
equation e(i), obj;
obj.. z =e= 0;
e(i).. sum(j, x(j)) =e= 0;
model m /all/;
m.limrow=0; m.limcol=0;
option solvelink=0;
solve m min z us lp;

With GAMS < 37, this will cause a segmentation fault in the solver link after 15-20 minutes, depending on the speed of the machine. The reason for this is that much of the GAMS codebase was developed at a time when 32-bit was the norm, so counters, offsets, and indices were typically signed 32-bit values. With the shift from a 32-bit to a 64-bit paradigm (see also this blog article), these limits are relaxed and we can work with much larger models. The number of non-zeros in the constraints is the natural place to start.

The Update
Changing the counter variables to 64-bit integers had quite a few ripple effects throughout the code base. Fortunately, the internal data structures used by GAMS did not need to be adjusted, so the memory footprint of the GAMS process is essentially unchanged as a result of this update.
Our team has carefully worked through all of the potential consequences and pitfalls - you can probably imagine that running software tests that generate TB-sized models requires some patience and is not much fun :)

The effort has paid off, and GAMS 37 now supports a maximum of 2⁶³ non-zeros in generated model instances, which should be sufficient for the foreseeable future. The solver links GAMS/Cplex, GAMS/Gurobi, GAMS/Xpress, and GAMS/ODHCplex have already been updated to support the new limit, and GAMS/Mosek will follow soon. Currently LPs and MIPs (without SOS constraints) are supported.

A Real World Example from Energy Systems Research
Energy systems analysis research has evolved from simple accounting tools in the 1970s to sophisticated energy systems models in the 2000s, in response to the increased complexity of energy and climate change mitigation policies. It is well embedded in energy strategy deliberations and has helped inform the climate and energy strategy dialogue in the past decade. However, energy systems are becoming increasingly complex due to megatrends (e.g. globalisation and digitalisation), decentralisation and sector coupling, combined with ambitious climate change targets and an increasing reliance on weather-driven energy supply.

Why is it important for the energy systems community to be able to solve large models?
Policymakers and stakeholders are in great need of support when making informed decisions about investments and policy instruments that would lead to a future energy system that secures the supply and provides reliable and affordable services with a low environmental footprint. This necessitates modelling the future energy system at fine spatial and temporal scales, and with increased sectoral and technical detail.

For example, higher spatial and temporal detail allows modelling results to reflect local conditions and constraints when calculating the energy demand flexibility, storage, interconnection or other flexibility needed to accommodate increasing renewable penetration. A higher spatial resolution also allows assessing best-fit local decarbonisation solutions and strategies at a sub-national level, such as states, regions and municipalities. Further, the emergence of local energy markets requires increased technical and sectoral detail in energy systems analysis.

To accommodate these challenges in providing informed decision-making support to policymakers and stakeholders, and supported by the availability of Big Data, there has been an exponential increase in recent years in the size of the matrices of energy systems models and the number of non-zero coefficients in them. With the increasing importance of higher spatial, temporal, sectoral and technical resolution in energy systems analysis, it is expected that the exponential growth of model matrix sizes and the number of non-zeros in them will continue in the coming years.

EUSTEM model of the Paul Scherrer Institute
In response to the need for higher detail in energy systems analysis, the large-scale EUSTEM model was developed at the Paul Scherrer Institute (PSI) in Switzerland, based on the TIMES modelling framework of the International Energy Agency Energy Technology Systems Analysis Program (IEA-ETSAP) [1].
The model represents the energy systems of the European countries and combines a long-term horizon (2050+) for investment decisions, a high intra-annual resolution to capture short-term operating constraints, and a high spatial resolution for local resource constraints.

In its first version from 2016, the model included 11 European regions with 288 timeslices per year (typical operating hours) and 8 time periods from 2010 to 2050.

In 2021, five years after its initial development, the model has been expanded to represent up to 30 European countries and to include up to 8760 timeslices, to assess questions from policymakers and stakeholders related to future storage needs, electricity load profiles (accounting for electrification of transport), deployment of renewable energy sources and their local constraints, digitalization, demand-side flexibility, and more.

The development of EUSTEM is one example of the increasing need in the energy systems community to perform the analysis at high spatial and temporal resolution. However, until GAMS version 37, it was impossible to solve EUSTEM with more than 2016 timeslices per year and 11 regions. For example, when trying to solve the model with 4032 timeslices and 11 regions, GAMS aborted with an error caused by an integer overflow due to the large number of non-zeros in the model matrix.

Trade-offs and alternative approaches in solving EUSTEM
Hence, there was a trade-off between the spatial and the temporal resolution of the analysis with EUSTEM: either a few regions with a high number of timeslices, or many regions with fewer timeslices. The alternative was to couple EUSTEM with other simulation tools, operating at high spatial or temporal detail, to provide the insights required by the analysis, and for targeted years only.

Such model coupling approaches are commonly used in the energy systems modelling community to cope with the challenge of not being able to solve very large models. However, different model coupling techniques can lead to different solutions. The way the model coupling is designed and performed is critical to avoid losing consistency in the analysis. As a result, model coupling is associated with an increased effort to ensure computational efficiency and conceptual robustness.

With GAMS 37, it is possible for the first time to solve EUSTEM with 4032 timeslices, get new insights from the higher temporal resolution, and address questions which so far were only partially tackled with the model. Below are some statistics from the run:

Model Generation
~4.5 hours
171,349,732 rows, 116,118,348 columns, 2,680,768,359 non-zeros
~238 GB of memory used

Solve
Cplex (Barrier, 16 threads, Crossover disabled)
~65 hours / 316 iterations

Hardware
4 sockets, 88 physical cores (but only 16 threads were used)
2 TB of memory
Peak memory consumption: 821.70 GB

Advancing energy systems modelling
The ability to handle a huge number of non-zeros in the model matrix lifts a major obstacle in model-based energy systems analysis. It enables new designs and approaches, which also use Big Data, without trade-offs in model size.

The latest development from GAMS makes it possible to advance energy systems modelling and ensure that it remains a state-of-the-art tool in informing national energy policy.

References
[1] Pattupara, R. (2016). Long Term Evolution of the Swiss Electricity System under a European Electricity Market, Ph.D. Thesis, ETH Zurich, Nr. 23234.
DOI:10.3929/ethz-a-010635090 ","excerpt":"With GAMS 37, we now support the generation of model instances with more than 2³¹ non-zeros. We expect huge models to become more important in the future.","ref":"/blog/2021/11/model-instances-with-more-than-2-non-zeros-impact-on-energy-systems-research/","title":"Model Instances with more than 2³¹ Non-Zeros - Impact on Energy Systems Research"},{"body":"Background Professional software development relies heavily on test automation and continuous integration (CI) to make sure that mistakes are caught early in the development process. Jenkins was the first open source automation server to see massive uptake, and is one of the tools used every day at GAMS. With GitLab CI/CD and GitHub Actions, the two major repository platforms have also added their own continuous integration products in recent years. For developers of GAMS models, the CI capabilities of Jenkins, GitHub and GitLab have not been easily usable, because it was difficult for the build processes to communicate with a GAMS installation that could be used to run the test code.\nFortunately, this has now changed with our latest product GAMS Engine. For those who do not know it yet, GAMS Engine provides a REST API that can be used to submit and run GAMS jobs to a central location.\nBelow we outline how any GAMS model developer on GitHub can easily use Engine to run automated tests for their models. The same principles apply to GitLab and Jenkins.\nA GitHub Example Prerequisites A GitHub repository with your GAMS model code. Credentials to access a GAMS Engine instance. Open source developers please contact us at support@gams.com to get free access to one of our Engine instances.\nHow it\u0026rsquo;s done The concept behind \u0026ldquo;GitHub Actions\u0026rdquo; is straightforward. In a nutshell:\nAny event that happens in your code repository (e.g. pushing a new commit), can trigger a workflow A workflow contains one or more jobs, which are executed on compute resources called runners. Everything that happens in a job runs sequentially on the same runner, and multiple jobs by default run concurrently on multiple runners. You can choose runners based on Linux, Windows, or macOS. There are quite generous free quotas of runner time offered by GitHub, which should be sufficient for most projects (see https://github.com/features/actions#pricing-details) . Each job contains one or more steps. Typical steps can be \u0026ldquo;checkout the latest version from the repository to the runner\u0026rdquo;, \u0026ldquo;compile the source\u0026rdquo;, \u0026ldquo;deploy something to a server\u0026rdquo;, and so on. Each step calls actions or executes shell commands. Actions are the things that actually DO something. To demonstrate how Engine can be used for running automated tests for GAMS models, Freddy from our development team created two GitHub actions at https://github.com/GAMS-dev/actions .\nThe \u0026ldquo;run-job\u0026rdquo; action allows you to run a model on a GAMS Engine instance. The \u0026ldquo;update-model\u0026rdquo; action allows you to register or update a GAMS model on a GAMS Engine instance. This is not relevant to automated testing, but useful for controlling model deployment on GAMS Engine, and outside of the scope of this article. We will cover model deployment in another article. How do you use GitHub actions? To define a workflow, you have to create a YAML file in .github/workflows/ inside your source repository. 
The following workflow runs when a new commit is pushed to the repository. It checks out the latest commit to a new runner, prepares the model for submission to GAMS Engine, and then runs the model in compile-only mode on Engine:

# The name of the workflow
name: Test Demo
# Which event should trigger the workflow?
on: [push]
# Job definitions
jobs:
  Test-Model-On-Engine:
    # Choose the operating system of the runner
    runs-on: ubuntu-latest
    # Define all the steps for the job
    steps:
      # first step: checkout repository to runner
      - uses: actions/checkout@v2
      # second step: create a zip file
      - run: |
          zip -r model.zip PATH1 PATH2
      # third step (with name): use run-job action
      # and pass parameters to submit job to GAMS/Engine instance
      - name: Submit demo job
        uses: GAMS-dev/actions/run-job@v1
        with:
          url: ${{ secrets.ENGINE_URL }}
          namespace: ${{ secrets.ENGINE_NS }}
          username: ${{ secrets.ENGINE_USER }}
          password: ${{ secrets.ENGINE_PASSWORD }}
          run: 'model.gms'
          arguments: 'a=c,idir1=PATH1,idir2=PATH2'
          model_data: '${{ github.workspace }}/model.zip'

A few things should be explained regarding the arguments that can be passed to the 'run-job' action:

You can see that we use a few variables of the form ${{ secrets.xyz }}. These variables can be stored in encrypted form in the model repository (https://docs.github.com/en/actions/security-guides/encrypted-secrets). You will need to set these secrets to point to the URL of your Engine instance of choice, and provide the USER and PASSWORD as well.

The ${{ github.workspace }} variable used in the last line contains the path inside the runner where your code is checked out by the 'checkout' action in the very first step of the job.

The argument a=c causes the model to just be compiled by the GAMS worker, but not executed. This is sufficient for catching syntax errors, and makes sure the workflow will finish quickly.

Once you add this workflow file to your repository and modify it to suit your model, each push to the repository will trigger a run of the model on Engine. If the model fails to compile, the run will fail and a notification email will be sent.

How do actions work?
Here is an explanation for those who would like to understand what happens when one of our GitHub actions runs:

Our GAMS-dev/actions repository contains separate metadata YAML files for each action, which configure the inputs and outputs of the action (e.g. https://github.com/GAMS-dev/actions/blob/main/run-job/action.yml). The metadata definitions are read by the "actions toolkit" provided by GitHub (https://docs.github.com/en/actions/creating-actions/creating-a-javascript-action), which creates the scaffolding of an 'index.js' file. This scaffolding had to be completed to implement the actual logic, i.e. using the Engine REST API to submit and schedule the model, and receive the results. The JavaScript code you can see in our repository was compiled into a single, self-contained file including all dependencies using ncc (https://github.com/vercel/ncc).
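To give an idea of how the pieces fit together, the abridged sketch below shows roughly what such an action metadata file can look like. It is illustrative only: the input names are taken from the workflow above, the 'runs' section is an assumption, and the authoritative definition is the action.yml linked above.

# Illustrative, abridged sketch of an action metadata file (not the real action.yml).
# It declares the inputs the workflow passes via 'with:' and points GitHub
# to the compiled JavaScript entry point of the action.
name: 'run-job'
description: 'Submit a GAMS job to a GAMS Engine instance and wait for the result'
inputs:
  url:
    description: 'URL of the GAMS Engine instance'
    required: true
  namespace:
    description: 'Engine namespace the job is submitted to'
    required: true
  run:
    description: 'Entry point of the GAMS model, e.g. model.gms'
    required: true
runs:
  # assumption: a JavaScript action with an ncc-compiled entry point
  using: 'node12'
  main: 'dist/index.js'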
","excerpt":"Learn how to use GitHub Actions with GAMS Engine to run automated tests for your optimization models.","ref":"/blog/2021/10/automated-gams-model-testing-with-gams-engine-and-github-actions/","title":"Automated GAMS model testing with GAMS Engine and GitHub Actions"},{"body":"","excerpt":"","ref":"/categories/gams/engine/","title":"GAMS/Engine"},{"body":"The 2021 Nobel prize in physics has been awarded to Syukuro Manabe and Klaus Hasselmann for their groundbreaking work in understanding the earth\u0026rsquo;s climate and how humanity influences it, and to Giorgio Parisi for his work on disordered materials and random processes. We would like to congratulate the awardees for the recognition of their contributions.\nWith it\u0026rsquo;s choice, the committee has once again recognized the importance of modeling to understand and combat climate change.\nIn many areas, from medicine to astronomy and climate change, models to understand the physical world are \u0026ndash; at their core \u0026ndash; systems of coupled differential equations. These models are used to calculate the evolution of a complex system with respect to time. They can often reproduce tipping points or other emergent properties of the studied systems, which are outside of what people can grasp intuitively.\nIn contrast to physical models, in economics research other modeling techniques are prevalent, mostly originating in the field of operations research, where models are formulated as systems of equations that are solved to find some optimum or equilibrium. These types of models can give answers in areas such as economic productivity, pricing of goods and services, trade balances and so forth. Algebraic modelling languages such as GAMS have a strong tradition in these economic modeling areas, and make the power of optimization accessible to domain experts who typically are not optimization experts.\nBack in 2007, the International Panel on Climate Change (IPCC) was awarded the Nobel Peace Prize together with Al Gore. One of the great accomplishments of the IPCC scientists had been the recognition of the need for dialogue between the modelers of the physical world and the economic modelers: We need information about societal and economic consequences of inaction, and also information about the cost of action when combating climate change.\nHere at GAMS we are fortunate to have a long standing relationship with one of the members of the IPCC, Prof Bruce McCarl from Texas A\u0026amp;M. Prof McCarl uses GAMS to model the impact of climate change on farming . He still actively contributes to the continuous improvement of GAMS with his suggestions, and is also the author of the Bruce McCarl newsletter on GAMS.\nAfter the Nobel Peace Prize in 2007, in 2018 the committee again chose to award the Nobel Prize to scientists working on the impact of climate change, this time in the category of Economic Sciences.\nNext to Paul M. Romer, who received his medal for work on how technological change affects economic growth, William D. Nordhaus was awarded his prize for work on integrating climate change into long-run macroeconomic analysis. He had developed the \u0026ldquo;Dynamic Integrated Climate-Economy \u0026rdquo; (DICE) model (implemented in GAMS) for his work. 
This model is typical of a class of models called \u0026ldquo;integrated assessment models\u0026rdquo; (IAMs), which link economic modeling with climate modeling, and are now established tools in the analysis of climate change impacts.\nIn his prize lecture, Prof Nordhaus explains the results of his research:\n(The video will open on youtube.com) We are very proud to be the maker of one of the (many) different computational tools used by the modeling community and celebrate this nobel prize accordingly.\nEn applåd för modellering!\n","excerpt":"The 2021 Nobel Prize in Physics has been awarded to Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi.","ref":"/blog/2021/10/celebrating-the-recognition-of-modeling-in-climate-change-research/","title":"Celebrating the Recognition of Modeling in Climate Change Research"},{"body":" Area: Economic Modeling and Policy\nProblem class: MCP\nModeling Transportation Carbon Intensity Targets for the EU with GAMS Background The International Council on Clean Transportation (ICCT) is a non-profit organization, helping governments and policy makers make the right decisions to reduce air pollution and reduce fuel consumption, across all modes of transport. Given the complexity of the transportation sector, policy makers can only make informed decisions, if the policy options can be simulated using a consistent set of rules using complex, integrated models of the whole sector. Algebraic modeling languages (AMLs) such as GAMS have proven to be useful tools for developing such models. For a project on decarbonization of the transport sector, initiated by the European Commission, the ICCT asked GAMS to develop, test, and run a partial equilibrium model of the transportation sector in the EU, including light duty vehicles, heavy duty vehicles and a representation of the aviation sector. The purpose of the model was to test several policy scenarios that included overlapping green-house gas (GHG) reduction targets, eligibility caps, and other preferential treatments. Within GAMS we can apply all of these constraint sets simultaneously and solve the model to gauge the market\u0026rsquo;s response to these policies. The results of this work feeds into the EU commission's 'Green Deal' plan on climate change, with the aim of reducing greenhouse gas emissions by 55% by 2030, and becoming climate neutral by 2050. The following text summarizes some aspects of the study in a condensed format. The full report can be accessed on the ICCT website .\nConsidered Policy Scenarios A total of 10 policy scenarios are considered, which were developed in collaboration with the researchers at the ICCT. Each scenario represents a combination of:\nGreenhouse gas reduction targets or renewable energy mandates Caps on food-and-feed based biofuels Advanced biofuel mandates Renewable fuels of non-biological origin (RFNBOs) Sustainable aviation fuel (SAF) mandates Aviation e-fuel mandates Expected electric vehicle annual growth rate Caps on use of certain intermediate crops (e.g. soy or maize) for biofuel production The scenario parameters are summarized below:\nTable 1. Input scenarios The Model The model was formulated as a static partial equilibrium model, using multiple agents. 
Each of these agents acts as a cost minimizer, while complying with the set of proposed policy targets:\nConsumer agents make purchasing decisions for Light Duty Vehicles or Heavy duty vehicles with different engine technologies (gasoline, diesel, electric, hydrogen fuel, or compressed natural gas).\nAn aviation consumer agent makes purchasing decisions about the fuel blend to purchase.\nA blender agent, responsible for providing fuel to the consumer agents, meeting the policy requirements. The supply of blendstocks to the blender is assumed to fit an iso-elastic supply curve.\nAbout Extended Mathematical Programming (EMP)\nEquilibrium problems can be tedious to implement, but GAMS \u0026ndash; as the only algebraic modeling language \u0026ndash; offers the \u0026ldquo;Extended Mathematical Programming\u0026rdquo; (EMP) extension with special support for these problems. EMP makes it possible to implement the individual agents as separate subproblems, which are then automatically reformulated into a format that can be solved efficiently by commercial solvers. The true power of this reformulation technology is that it enables rapid model re-development without concern for necessary staff time associated with the reformulation-debug-verify cycle. This cycle of re-development could quickly become overwhelming if late changes were necessary, something that is inherent in a project that aims to inform a constantly evolving policy conversation.\nResults Figure 1 summarizes the types of fuels used to meet the policy requirements for each scenario. Some of the differences between the scenarios are as expected, for example a lower total amount of renewable energy when the GHG target is reduced (Scenario 2) and no food-based biofuels when the food-based cap is set to 0% (Scenarios 3, 4, and 10). One striking result is the large amount of intermediate crop biofuel in most scenarios in which it is exempt from the food-based biofuel cap. When the policy becomes more ambitious, for example increasing the energy mandate level from Scenario 8 to 9, intermediate crop biofuel fills in most of the total increased renewable fuel demand. In particular, we find a large increase in soy hydrotreated vegetable oil (HVO). Simply reducing the target level, for example from Scenario 1 to 2, sharply reduces the amount of intermediate crop biofuel used. We find that intermediate crops are the cheapest compliance option to meet a GHG target or a renewable energy mandate, once the sub-mandates and caps have been complied with.\nFig 1. Energy consumption by fuel category by policy scenario Problematic is the fact that the majority of intermediate crops globally are major commodity crops and their use in biofuel can be expected to cause indirect land use change (ILUC), just as with food-based biofuels. When we consider ILUC emissions, the very high total GHG emissions from intermediate crop soy biofuel significantly detract from the GHG savings of the policy as a whole. We can see this in Table 2, which shows the total GHG savings for each scenario, as well as the average cost of carbon abatement, the GHG credit price, and the total share of renewable energy in the road and aviation sectors.\nTable 2. Environmental summary statistics One important finding of this study is that a GHG target results in much greater GHG savings than a renewable energy mandate. 
Scenario 8, representing a 26% renewable energy mandate leads to a similar total amount of renewable fuel as Scenario 1, but delivers only around one-third the overall GHG savings. Consequently, the carbon abatement cost of Scenario 8 is around three times as high as that of Scenario 1. A GHG target also appears to be a much more cost effective means to achieve climate mitigation than a renewable energy mandate.\nRenewable fuel policy is complex, and the impacts of policy changes are not always intuitive. Quantitative modeling, as demonstrated here, can be a useful tool in objectively analyzing a broad set of effects from changes in guidelines, and allows policy makers to make informed decisions.\nThe full report is available at https://theicct.org/publications/transport-carbon-intensity-targets-eu-aug2021 .\nAbout the ICCT The International Council on Clean Transportation is an independent nonprofit organization founded to provide first-rate, unbiased research and technical and scientific analysis to environmental regulators. Their mission is to improve the environmental performance and energy efficiency of road, marine, and air transportation, in order to benefit public health and mitigate climate change.\n","excerpt":"The ICCT asked GAMS to develop, test. and run a partial equilibrium model of the transportation sector in the EU. Renewable fuel policy is complex, and the impacts of policy changes are not always intuitive. Quantitative modeling, as demonstrated here, can be a useful tool in objectively analyzing a broad set of effects from changes in guidelines, and allows policy makers to make informed decisions.","ref":"/stories/icct/","title":"Modeling Transportation Carbon Intensity Targets for the EU with GAMS"},{"body":"The joint annual conference of the Operations Research Societies of Switzerland (SVOR), Germany (GOR e.V.), and Austria (ÖGOR) was held online from August 31 to September 3 this year.\nThe organizers have done a fantastic job of creating a conference format that was interesting and engaging. Apart from the scientific program there was also a virtual sightseeing tour of Bern.\nThe parallel sessions were held in virtual rooms , named after peaks of the Bernese mountains. Extra points for those who managed to find the correct room just by looking at the pictures!\nGAMS took part in the conference with the following presentations:\nGAMS Engine - A New System To Solve Models On Centralized Compute Resources presented by Stefan Mann, Frederik Proske, and Hamdi Burak Usul\n(The video will open on youtube.com) Model deployment in GAMS presented by Frederik Proske, and Robin Schuchmann\n(The video will open on youtube.com) ","excerpt":"GAMS staff gave two presentations at the joint annual Operations Research conference of the Swiss, Austrian, and German OR societies. Videos of the presentations are available here.","ref":"/blog/2021/09/or2021-virtual-conference-recap/","title":"OR2021 Virtual Conference Recap"},{"body":" It is hard to imagine academic term scheduling, term-end exam scheduling, and room scheduling here at the United States Military Academy without the robust tools and systems that GAMS has provided. When we reached out to GAMS (in late 2015) to discuss review and enhancement of the academic term and term-end exam scheduling processes that had been in place since 2000, they suggested that we consider taking a data-driven approach. This has allowed us to respond to required (and desired) changes in a timely and efficient manner. 
When we added room scheduling in 2018, the same approach was taken. This allowed us to schedule rooms in the Covid-19 environment by simply changing room capacities and moving out. Collaborating with GAMS is always a pleasure and the support is top-notch. Five stars!!!\nCloud Migration of the USMA Schedulers 2021 The United States Military Academy (USMA) has been using scheduling algorithms based on and developed by GAMS for many years. The following three GAMS based scheduling applications are in operation at USMA.\nThe Data-Driven Room Scheduler (DDRS), introduced in 2018 The Term End Exam Scheduler (TEE), introduced in 2017 The Data-Driven [Term] Scheduler (DDS), introduced in 2016 In February 2021, GAMS introduced GAMS Engine , a technology to run GAMS jobs in cloud environments. In June 2021, USMA and GAMS agreed to migrate all the USMA Scheduling applications to the cloud. Thanks to the simplicity of GAMS Engine\u0026rsquo;s REST API, the migration from an in house solution to a seamless integration of GAMS Engine into USMA\u0026rsquo;s cadet administration system went smoothly.\nFig 1: GAMS Engine Infrastructure used by USMAThe USMA scheduling algorithms run via GAMS Engine SaaS which is hosted on the AWS cloud infrastructure. The various schedulers are registered with Engine, which means that only input and output data is passed back and forth between the client machines and Engine SaaS.\nThanks to the horizontal scalability of GAMS Engine SaaS, it is now possible to evaluate many scenarios in parallel, which drastically reduces the overall time of the scheduling process.\nRoom Scheduling at USMA 2018 At the United States Military Academy, scheduling rooms for courses can be a complex and time-consuming process, especially during events such as construction work where several rooms may become unavailable. In 2018, USMA approached GAMS to help automate this process with an optimization engine, resulting in the development of a customized Room Scheduler software.\nThe Room Scheduler takes in all course sections and their corresponding hours, and assigns suitable rooms to each course section. It considers several business rules, including capacity constraints, utilization balancing, room features, same-room-same-course requests, fixed assignments, and several soft business rules. By using these rules, the Room Scheduler finds an optimal assignment of course sections to rooms.\nThe room scheduling algorithm is an iterative procedure that aims to approach an optimal final room schedule in small steps. In addition, the Room Scheduler provides a Continuity of Operations (COOP) module that allows a room schedule to be repaired with minimal changes when some rooms become unavailable on short notice or when room requirements change due to unforeseen events, such as the sudden consideration of social distancing during the Covid 19 pandemic.\nTerm End Exam Scheduling at USMA 2017 GAMS experts have successfully implemented a state-of-the-art software solution for term end exam (TEE) scheduling at the United States Military Academy (USMA). This project showcases the competence of our consulting services in developing customized optimization solutions to solve complex scheduling problems.\nThe TEE scheduling at USMA is a challenging problem with multiple hard and soft requirements. The developed software satisfies all the hard requirements, while optimizing the soft requirements in the best possible way. 
The hard requirements include no hour conflicts, respecting hard capacity limits, scheduling exclusive courses in different periods, grouping exams of inclusive courses by type, fixing exams to given periods, limiting the number of makeups per course, and respecting finishing periods. The soft requirements include limiting the number of consecutive exams and exams per day for each cadet, moving exams out of certain periods, and accommodating individual off-periods.\nTo solve this sophisticated multi-objective optimization problem, our solution approach employs a polylithic framework that includes multiple problem-specific preprocessing steps and a powerful fix-and-optimize algorithm that can be parameterized to optimize the soft requirements in various ways. The result is a TEE schedule that meets all the hard requirements and optimizes the soft requirements according to the priorities set by the USMA.\nTerm Scheduling at USMA 2016 At the United States Military Academy (USMA) in West Point, the academic program is uniquely designed around the requirement that all students must graduate in four years, a total of eight academic semesters or terms (8TAP = eight term academic program). Adding to the unique character of USMA is the fact that each student’s daily activities are a carefully regimented balance of academic, military, and physical requirements. The ~4,500 enrolled cadets compile their individual 8TAPs which makes the scheduling particularly challenging.\nFor the term scheduling, a sophisticated decision support system that combines decomposition methods, heuristics, multi-objective optimization and state-of-the-art MIP solver technology has been implemented. The term scheduling system thereby supports a broad variety of business rules such as for example\nindividual free hours day to day balancing cohort scheduling (groups of cadets that should not be split for certain courses) next hour free requirements for particularly challenging PE courses enrollment balancing Crucially, the implemented solution is designed to support the scheduling workflow at USMA in the best possible way. While from a mathematical perspective, it is desirable to have a well defined problem and well defined data and then run the scheduler once, in practice scheduling is a multi week process that involves many interactive “negotiations” between the registrar, departments, and instructors concerning the course offering details like times, rooms, etc. Hence, in addition to “just” computing optimal schedules, the term scheduler also supports\nefficient computation of multiple alternative schedules such that the registrar can choose from a set of schedules fixing of partial schedules and a mechanism to control the trade-off between runtime and solution quality. Fig 2: Schematic view of the term scheduling algorithmCadets are partitioned in batches and then scheduling happens in two phases. In phase 1, individual schedules are optimized. In phase 2, the final enrollment is optimized subject to bounds that limit the deviation from the best individual schedules.\n","excerpt":"At the United States Military Academy in West Point, each student\u0026rsquo;s daily activities are a carefully regimented balance of academic, military, and physical requirements. The ~4,500 enrolled cadets compile their individual academic programs and it is necessary to ensure that each cadet can graduate in 8 semesters, which results in challenging scheduling problems. 
USMA relies on a highly customized decision support system to tackle tasks such as term scheduling, exam scheduling, and room scheduling. The schedulers are all based on GAMS and have been seamlessly integrated into USMA\u0026rsquo;s cadet administration system.","ref":"/consulting/usma/","title":"Scheduling at the United States Military Academy"},{"body":"GAMS MIRO receives an update - we have released MIRO 2.0! The increase of the version number from 1.3.2 to 2.0.0 suggests that this update must be a special one. When looking at the release notes , one new feature immediately catches the eye:\n“MIRO Server is officially released.”\nIt\u0026rsquo;s a very short sentence in the release notes (I checked: so far only \u0026ldquo;Bug fixes\u0026rdquo; and \u0026ldquo;Improve validation of Engine URL.\u0026rdquo; have been shorter), but it has a lot to offer! With GAMS MIRO Server, MIRO apps can now be moved to the cloud and accessed from any device with a modern web browser. This makes MIRO Server the most convenient solution for making MIRO applications in business, research, or teaching available to people all over the world.\nMIRO Server itself is open source software1 and will be updated as part of the regular MIRO releases from now on.\nRead more about MIRO Server here: www.gams.com/miro/server.html I would like to highlight two other aspects of the new MIRO version:\nPerformance improvements: The new MIRO is significantly faster than its predecessor. In particular, performance improvements have been achieved in terms of startup time and loading scenarios in scenario comparison mode. Below you can see the startup process of the same MIRO app with both versions (left: MIRO 1.3.2, right: MIRO 2.0):\nEven I, as a MIRO developer, was annoyed by how long it took to load a few scenarios into the interface for comparison. I\u0026rsquo;m all the happier that I can now load as many scenarios as I want for comparison and get started right away. This greatly improved user experience is enabled by \u0026ldquo;lazy loading\u0026rdquo; of data and charts. This means that only the data and charts you see on your screen will be loaded and displayed to you. The difference between the two versions becomes even clearer here (left: MIRO 1.3.2, right: MIRO 2.0):\nAny other cool features? Last but not least a nice gimmick for the app developers among you: Sooner or later you will probably play with the idea to implement your own graphics. Until now, this was always quite tedious because the R code had to be written completely detached from the app. With MIRO 2.0, you can write a custom renderer with a live preview based on real scenario data directly in Configuration Mode:\nWe hope you will enjoy MIRO 2.0! So what’s next?\nThe next thing we want to do is give the Hypercube Mode a major overhaul. Already in this release a few preparatory steps have been taken for what is to come. Stay tuned!\nUpdated on 2021-11-11: Fixed link to Hypercube Mode. The Hypercube Mode does not exist as a separate mode any more, but has been integrated into the base mode.\n1: For the model calculations MIRO Server makes use of GAMS Engine ","excerpt":"MIRO Server, performance improvements and a custom renderer editor - MIRO 2.0 brings a number of innovations!","ref":"/blog/2021/06/gams-miro-2.0-moving-to-the-cloud/","title":"GAMS MIRO 2.0 - Moving to the cloud"},{"body":"GAMS/ODHCPLEX is a solver from Optimization Direct Inc. 
that implements a set of heuristic methods (named ODHeuristics) for finding feasible solutions to Mixed Integer Programming (MIP and MIQCP) models, using IBM CPLEX as its underlying solver engine. It is designed for large-scale models which a MIP solver would find intractable, either because it cannot find feasible solutions at all or, more usually, because it cannot find feasible solutions of adequate quality in the time available to its user.

It is intended for users who are familiar with MIP modelling and have some knowledge of using the GAMS/CPLEX solver. GAMS/ODHCPLEX does not demand expert specialism in this field.

In this webinar, the main developer of ODH|CPLEX and president of Optimization Direct, Robert Ashford, gives an overview of the benefits of the solver and shows how to use it with GAMS, based on a nurse scheduling problem.

This webinar was recorded in February 2021.

Speaker
Robert Ashford
Optimization Direct Inc.
Harrington Park NJ 07640
USA

Dr. Ashford has a Master's degree in Mathematics from Cambridge University and a Ph.D. from Warwick Business School. He has authored over 20 academic papers in Optimization.

Robert co-founded Dash Optimization in 1984 and helped pioneer the development of new modelling and solution technologies – the first integrated development environment for optimization – at the forefront of technology development driving the size, complexity and scope of applications. Dash was sold in 2008, and Robert continued leading development within Fair Isaac until the fall of 2010. He subsequently co-founded Optimization Direct in 2014.
","excerpt":"\u003cp\u003e\u003cstrong\u003eGAMS/ODHCPLEX\u003c/strong\u003e is a solver from Optimization Direct Inc. that implements a set of heuristic methods (named ODHeuristics) for finding feasible solutions to Mixed Integer Programming (MIP and MIQCP) models that uses IBM CPLEX as its underlying solver engine. It is designed for large-scale models which a MIP solver would find intractable: either by it being unable to find feasible solutions at all or; more usually, by being unable to find feasible solutions of adequate quality in the time available to its user.\u003c/p\u003e","ref":"/webinars/odh-webinar/","title":"ODH|CPLEX Solver Webinar"},{"body":"Here we will have a list of past and upcoming GAMS webinars. This might also be a good spot to link youtube videos.\n","excerpt":"\u003ch3 id=\"here-we-will-have-a-list-of-past-and-upcoming-gams-webinars\"\u003eHere we will have a list of past and upcoming GAMS webinars.\u003c/h3\u003e\n\u003cp\u003eThis might also be a good spot to link youtube videos.\u003c/p\u003e","ref":"/webinars/","title":"Webinars"},{"body":" SHOT (Supporting Hyperplane Optimization Toolkit) is a deterministic solver for mixed-integer nonlinear programming problems (MINLPs).

Originally, SHOT was intended for convex MINLP problems only, but now also has functionality to solve nonconvex MINLP problems as a heuristic method without providing guarantees of global optimality. However, SHOT can solve certain nonconvex problem types to global optimality as well. For convex MINLP problems, SHOT is among the most efficient solvers (see https://doi.org/10.1007/s11081-018-9411-8) and is guaranteed to find the global optimal solution.
SHOT can be run as fully open source with CBC and IPOPT as subsolvers, but the performance is significantly improved by using either CPLEX or GUROBI as a subsolver.\nSHOT is mainly developed by Andreas Lundell (Åbo Akademi University, Finland) and Jan Kronqvist (Imperial College London, UK).\nIn this webinar, the two developers explain the basics of their algorithm, and how to utilise SHOT from GAMS.\nThis webinar has been recorded in November 2020.\nSpeakers Andreas Lundell\nDepartment of Information Technologies\nDepartment of Mathematics\nÅbo Akademi University, Finland\nandreas.lundell@abo.fi Andreas is currently a researcher at the Department of Information Technologies at Åbo Akademi University (ÅAU) in Finland. His research is mainly focused on global optimization and mixed-integer nonlinear programming (MINLP).\nAndreas received his PhD in applied mathematics from ÅAU in 2009, and has since then been involved in several optimization-related research projects. One of these is the development of the SHOT solver, for which he is currently the project manager. Since 2013 he is an adjunct professor at ÅAU.\nJan Kronqvist\nFaculty of Engineering\nDepartment of Computing\nImperial College, London, UK\nj.kronqvist@imperial.ac.uk Jan has just finished a 2-year postdoc at Imperial College London, and will start as Assistant Professor in Optimization and Systems Theory at KTH Royal Institute of Technology in Sweden in May 2021. His research is focused on mixed-integer optimization, specifically in theory and algorithms for mixed-integer nonlinear programming (MINLP) and applications of mixed-integer optimization in machine learning and artificial intelligence.\nJan graduated in 2018 with honors from Åbo Akademi University in Finland, and was awarded best PhD thesis at the Faculty of Science and Engineering. After his PhD, he was awarded a Newton International Fellowship by the Royal Society in 2018, and a grant by the Foundations Post Doc Pool (given by the Swedish Cultural Foundation in Finland) to support his postdoc research. From 2019 to 2021, Jan worked as a postdoc at Imperial College London (Royal Society- Newton International Fellow).\n","excerpt":"\u003cimg src=\"shotLogo.png\" class=\"rounded mx-auto d-block mb-5\" height=\"125\"\u003e\n\u003cp\u003e\u003cstrong\u003eSHOT (Supporting Hyperplane Optimization Toolkit)\u003c/strong\u003e is a deterministic solver for mixed-integer nonlinear programming problems (MINLPs).\u003c/p\u003e\n\u003cp\u003eOriginally, SHOT was intended for convex MINLP problems only, but now also has functionality to solve nonconvex MINLP problems as a heuristic method without providing guarantees of global optimality. However, SHOT can solve certain nonconvex problem types to global optimality as well. For convex MINLP problems, SHOT is among the most efficient solvers (see \u003ca href=\"https://doi.org/10.1007/s11081-018-9411-8\" target=\"_blank\"\u003ehttps://doi.org/10.1007/s11081-018-9411-8\u003c/a\u003e\n) and is guaranteed to find the global optimal solution. 
SHOT can be run as fully open source with CBC and IPOPT as subsolvers, but the performance is significantly improved by using either CPLEX or GUROBI as a subsolver.\u003c/p\u003e","ref":"/webinars/shot-webinar/","title":"SHOT Webinar"},{"body":"A current call for papers for a special issue of the Operations Research Forum might be of interest to anyone who uses GAMS in the classroom.\nWe encourage all GAMS educators to consider contributing to the issue on \u0026ldquo;Model Development for the Classroom\u0026rdquo;.\nYou can download the official CFP here. ","excerpt":"The Operations Research Forum has issued a call for papers on the topic of \u0026ldquo;Model Development for the Classroom\u0026rdquo;.","ref":"/blog/2021/04/call-for-papers-model-development-in-the-classroom/","title":"Call for Papers - Model Development in the Classroom"},{"body":"","excerpt":"","ref":"/categories/gams-education/","title":"GAMS Education"},{"body":"The Center of Advanced Process Decision-making (CAPD) Annual Review Meeting usually takes place at Carnegie Mellon University. But due to the COVID-19 situation, the 2021 meeting was conducted remotely on the 9th and 10th of March 2021.\nThe first day kicked off with a series of presentations giving an overview of the ongoing research, followed by CAPD sponsor presentations and presentations on modeling systems. GAMS was very happy to be part of this event.\nSteve Dirkse (President, GAMS Development) gave a presentation about the newest developments at GAMS, highlighting GAMS MIRO and GAMS Engine .\nSteve\u0026rsquo;s comment on the meeting: The CAPD meeting was virtual, but the quality, breadth, and creativity of the research work presented there is still very real. Great to see larger concerns in society (e.g. a shift towards more sustainable practices related to energy and natural resources) reflected in the presentations and discussions at CAPD.\nYou can take a look on the presentation slides if you missed it. Name: Size / byte: GAMS_MIRO_Engine-CAPD2021.pdf 963795 ","excerpt":"The Center of Advanced Process Decision-making (CAPD) 2021 Annual Review Meeting was conducted remotely and GAMS was a part of it.","ref":"/blog/2021/03/capd-2021-virtual-annual-review-meeting/","title":"CAPD 2021 Virtual Annual Review Meeting"},{"body":"The Corona pandemic is still posing major challenges for society as a whole, and schools in particular. Since in schools a large number of students are in enclosed spaces, there is a fear that clusters of infection will form here.\nOne way to reduce the risk of infection is to use so-called cohort strategies, i.e., dividing classes into smaller groups that are taught separately.\nAs a simulation study has shown, cohort divisions in which contacts are not separated across cohorts whenever possible can reduce infections by over 70%. However, partitioning is not a simple problem. 
While it may still be possible to determine an optimal division by hand for small classes with few students, this approach has limitations in practice: There are usually cross-class courses, so individual classes cannot be considered in isolation.\nSince GAMS and MIRO are the perfect tools to model and solve this optimization problem and make the model accessible to a wide audience, we have developed the Cohort Divisor MIRO app, which can be accessed at:\nhttps://miro.gams.com/gallery/app_direct/cohortdivisor/ (English Version) http://miro.gams.com/gallery/app_direct/cohortdivisor_de/ (German Version) This Cohort Divisor MIRO app has been inspired by the excellent work Corona-Schuleinteilung (in German) by AG Opt at University of Kaiserslautern.\nIf you would like to use this tool, please read the extensive Readme, which is shown when you open the application. Furthermore, to make data entry as simple as possible, you can use the linked Excel sheets as a starting point.\nExcel workbook Cohort Assignment (in English) Excel workbook Gruppeneinteilung (in German) For those who would like to learn more about how this optimization problem was formulated in GAMS, the complete code is available on github: https://github.com/GAMS-dev/miro/tree/develop/src/model/cohortdivisor ","excerpt":"A new MIRO application for splitting student groups into cohorts","ref":"/blog/2021/03/a-cohort-divisor-application-for-schools/","title":"A Cohort Divisor Application for Schools"},{"body":"","excerpt":"","ref":"/categories/book/","title":"Book"},{"body":"Description Ignacio Grossmann publishes his new textbook about basic and advanced concepts of optimization theory and methods for process systems engineers. The topics covered in this book include continuous, discrete and logic optimization, like linear, nonlinear, mixed-integer and generalized disjunctive programming, optimization under uncertainty like stochastic programming and flexibility analysis, as well as decomposition techniques like the Lagrangean and Benders decomposition.\nSince only a basic understanding of calculus and linear algebra is required, he manages to enable an easy understanding and accessability of mathematical reasoning. Numerous examples illustrate the important concepts and algorithms. At the end of the chapters, he provides exercises involving theoretical derivations and small numerical problems, as well as in modeling systems like GAMS to enhance understanding and help the students put knowledge into practice. A full solutions manual accompanies the text, containing web links to modeling systems and models related to applications in PSE, this is an essential text for single-semester, graduate courses in process systems engineering in departments of chemical engineering.\nYou can have a closer look or order it here! About the author Ignacio E. Grossmann, Carnegie Mellon University, Pennsylvania\nIgnacio E. Grossmann is the R. R. Dean University Professor of Chemical Engineering at Carnegie Mellon University in Pennsylvania and Director of the Center for Advanced Process Decision-making. 
He is a member of the National Academy of Engineering and brings up to forty years of teaching experience.\n","excerpt":"This new textbook covers both basic and advanced concepts of optimization theory and methods for process systems engineers, combined with a spectrum of exercises and examples using modeling systems like GAMS, to help put knowledge into practice.","ref":"/blog/2021/03/the-new-book-by-ignacio-grossmann-advanced-optimization-for-process-systems-engineering/","title":"The new Book by Ignacio Grossmann \"Advanced Optimization for Process Systems Engineering\""},{"body":" Area: Energy Modeling\nProblem class: LP\nStadtwerke München Transitioned Their Large GAMS Based Energy Market Model to the Amazon Cloud Background Stadtwerke Munich (SWM) is one of the biggest municipal electricity providers in Germany, with an annual revenue for electricity of 2.8 billion EUR in 2019, corresponding to 37 TWh1. For comparison, the total net electricity produced in Germany each year is around 600 TWh2.\nSWM is strongly focused on their own renewable power generators (geothermal, hydropower, on- and offshore wind parks, photovoltaic, and biogas) and operate them both within Germany and in other European countries (Belgium, Croatia, Finland, France, Norway, Poland, and Sweden). The renewable energy contributes to a total of more than 3000 individual generators connected to the grid.\nFig 1. Overview of SWM participation in the European grid. Source: https://www.swm.de/english/company Because of their big investments in renewables SWM is highly interested in the long-term development of the European energy prices and the resulting economy of the assets.\nConsequently, SWM has developed and implemented a fundamental model in GAMS for the European energy market, focused on prices of electrical power, carbon and the value of renewables. The model calculates the most effective way to meet the demand of electrical power at all given points in time by taking all existing generators and all generators build by the model into account.\nThe long-term prices given by the fundamental model are being used for all investment decisions in generators (solar, onshore wind, offshore wind, gas fired CHP power plants and the geothermal sector). Furthermore, the long-term prices are basis for impairments and finally the corporate planning.\nCurrent Situation SWM has been operating their 2-part GAMS fundamental model for some years.\nThe first part of the model (the invest part) aims to calculate how the general electricity landscape will look in Europe up to the year 2050. This model is large, with approx. 70 million non-zero matrix elements.\nThe model inputs are 52 Excel-files with up to 60.000 data per sheet, e.g.:\nall existing European power plants, power demand and net transfer capacities, typical days for generation from renewable energy sources, fuel prices. The major outputs are\nthe available generators of each technology (renewable, conventional) the capacity and generation mix and the baseline market price for electricity for each year. Using the output from the invest model with yearly data, the second part (the dispatch model) can run in 20-30 parallel jobs for the different sets of assumptions. One job corresponds to one year in hourly resolution. 
SWM has been running this model on in-house hardware (192 cores, 1.5 TB of RAM).

With the recent changes in the legal framework within the EU, the guaranteed price for renewable energy has been abolished, and renewable energy is now subject to the same market fluctuations and risks as the other forms of energy. This change adds several degrees of freedom to the scenarios to be solved, and answers to questions like "What happens to the value of renewable energy if the gas price suddenly drops?" are needed. Besides varying fuel prices, a number of legal regulations and challenges arising from national and European decarbonization targets must be considered.

Because the resulting market prices are scenario-based under different assumptions, and not a prognosis with a specific probability of occurrence, a large number of scenarios have to be calculated in order to understand the leverage on prices when premises change.

Complete simulation runs must be performed about 200 times per year, and the results of each run should be available within 24 hours, resulting in extremely high demand for computational power during those times. In between those peak periods, the demand is low. In practice this means that the required compute capacity can no longer be met economically with in-house hardware, and a solution using cloud computing appeared to be the best fit.

Technical Challenges
The following technical challenges had to be addressed while developing the cloud solution:

Invest model
The invest model is memory bound and requires approximately 700 GB RAM for one scenario. It must be solved in one closed run without the possibility of partitioning and parallelization. Two cores are sufficient.

Dispatch model
Running the dispatch part of the model requires automatic orchestration of 20-30 independent GAMS workers, feeding them with input data and collecting the results of each worker after the run completes. Here, the GAMS workers can be executed in parallel, because one year is one run and the years can be calculated independently from each other. Each worker for the dispatch model requires approximately 30 GB RAM and two cores.

The Solution
In 2020, SWM started a strategic cloud program. One stream of this program is the evaluation of cloud-based hyperscaling, and Amazon Web Services (AWS) was chosen for the proof of concept. The designed solution is a composition of different cloud services and state-of-the-art infrastructure technology: the GAMS application is supplied as a Docker image, and the workflow to solve the GAMS model is orchestrated using serverless Lambda functions (Python).

At the heart of the solution is AWS Batch, which allows running hundreds or thousands of batch jobs and dynamically provisions the needed compute resources per job, based on the resource requirements of the submitted batch jobs. The result is a simple solution that is extensible and scalable and incurs next to no hosting cost while idle. A strategic decision was to implement every cloud resource using Infrastructure as Code (IaC) – for this, SWM uses the open source product Terraform and therefore benefits from the typical software development lifecycle (git, pull request reviews, versioning).

Fig 2. Schematic overview of the developed cloud solution
To use the application, the user uploads the GAMS fundamental model and parameter files to an S3 bucket, which automatically triggers a step function to start the flow.
The first step submits an AWS Batch job containing the invest model; AWS Batch manages the required virtual machines and starts the Docker image with the GAMS application. After the first calculation, the dispatch models are automatically submitted and run concurrently by AWS Batch. As the dispatch models require less CPU and memory, AWS Batch uses the already existing machines to run the models most of the time. If needed, more compute instances are created. Finally, when all jobs are finished, the resources are terminated automatically.\nCurrently, on-demand instances are used, which already results in a vast cost reduction over the existing solution. Further cost optimization by using spot instances is planned for the future: spot instances can be much cheaper than equivalent on-demand instances, but there is no guarantee that they run continuously, so this will require the implementation of a retry mechanism in case of premature termination.\nCost AWS Batch dynamically packs multiple smaller jobs onto a single big machine and intelligently allocates the most appropriate EC2 instances to the GAMS workers.\nExample costs: Invest model: requires approx. 700 GB RAM, but the number of cores is not relevant\nbest fit EC2 instance (768 GB RAM, 96 cores) \u0026lt; 10 USD/h\nDispatch model: each job requires ca. 30 GB RAM, the number of cores is not relevant\nbest fit EC2 instance (32 GB RAM, 8 cores) \u0026lt; 0.5 USD/h (multiplied by 20 years, this adds up to roughly the same instance as for the invest model in terms of RAM and cost, which is automatically detected by AWS Batch.)\nTotal: ~ 50-70 USD per complete run, depending on complexity; up to 200 runs p.a.\nThe software license is a GAMS application license that allows an unlimited number of runs of this particular application for a flat yearly fee.\nBenefits of the new solution SWM can better react to changes in electricity market conditions. This is particularly true for changes driven by the EU Green Deal, carbon taxation, and fuel prices. Long-term fundamental analysis is not meant to provide a price prognosis; rather, it shows the effects of changes in the premises on long-term prices. It is the only way to understand and quantify the market risks that can occur. Combining all changes on the input side yields a large number of scenarios to be calculated. Using a hyperscaling approach, the results are now available within one day, whereas with a physical on-premise server it would take weeks. On top of the saved time, the cloud-based solution cut costs by up to 40 percent compared to the on-premise solution.\nThe improved analytical capability in evaluating long-term energy prices, gained through the ability to run many scenarios of the fundamental model, helps minimize cost in the acquisition process and makes energy market risks manageable.\nAbout SWM Stadtwerke München (SWM), Munich\u0026rsquo;s municipal utilities company, is one of the largest energy and infrastructure companies in Germany. SWM supplies Munich 24/7 with energy (electricity, natural gas, district heating/cooling), fresh drinking water, mobility and advanced, cutting-edge telecommunication services. 
Sustainability and climate protection are essential cornerstones of their corporate policy in all sectors.\n[1] Annual Report 2019, Stadtwerke Munich (https://www.swm.de/dam/doc/english/swm-annual-report.pdf )\n[2] Monitoring Report 2019, Bundesnetzagentur, the German regulatory authority for the energy market (https://www.bundesnetzagentur.de/SharedDocs/Pressemitteilungen/EN/2019/20191127_Monitoringbericht.html )\n","excerpt":"Stadtwerke München have recently lifted their computationally expensive GAMS model into the cloud. This white paper gives a high level overview of the techniques used and the benefits of a cloud deployment over a traditional on-premise solution.","ref":"/stories/swm/","title":"Transitioning of a Large Electricity Model into the Amazon Cloud"},{"body":"MATLAB is a de-facto standard in many fields of science and engineering and combines well with GAMS. Until now, MATLAB users had to use the GDXMRW suite of utilities for interfacing with GAMS. However, for other programming languages such as Java or Python, we have been providing more advanced object oriented APIs since 2012. These APIs offer some real benefits for controlling and interacting with GAMS.\nWe are proud to now also announce the MATLAB version of the object oriented API, which ships with GAMS 34 and allows:\nExchange of input data and model results with in-memory representation of data (GAMSDatabase) Creation and running of GAMS models (GAMSJob) Customizing GAMS options (GAMSOptions) Efficiently solving sequences of closely related model instances (GAMSModelInstance) The new API works with both MATLAB and GNU Octave (release 5.2 or newer).\nHere is a simple example:
% create workspace
ws = GAMS.GAMSWorkspace();
% create GAMSJob 't' from the 'trnsport' model in the GAMS Model Library
t = ws.addJobFromGamsLib('trnsport');
% run GAMSJob 't'
t.run();
% retrieve GAMSVariable 'x' from the GAMSJob's output database
for x = t.outDB.getVariable('x').records
    fprintf('x(%s,%s): level=%g marginal=%g\\n', x{1}.keys{:}, x{1}.level, x{1}.marginal);
end
More examples: /34/docs/apis/examples_matlab/files.html Feel free to play around with the new API and let us know what you think on Twitter.\n","excerpt":"We are happy to announce our new object oriented MATLAB API.","ref":"/blog/2021/02/a-new-object-oriented-matlab-api/","title":"A new object oriented MATLAB API"},{"body":"","excerpt":"","ref":"/categories/matlab/","title":"MATLAB"},{"body":" GAMS Engine GAMS Engine is a server software that allows you to run GAMS models on centralised compute resources, either on-premise or in the cloud. Engine is accessed via a gateway service (the \"broker\" in Engine terminology) which provides a REST API. The broker accepts jobs sent from a range of clients such as GAMS Studio, MIRO Desktop, MIRO Server, or custom clients written in Python, Java, or other programming languages supported by the OpenAPI standard. It also provides a simple web user interface, which allows submitting jobs and user administration. Jobs submitted via the broker are placed in a queue, and from there they are assigned to available GAMS workers. Results from the workers are collected by the broker and made available to the user. 
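A custom client typically needs only a handful of HTTP calls. The sketch below shows the general submit, poll, and download pattern in Python with the requests library; note that the URL, endpoint paths, field names, and credentials are placeholders rather than the actual Engine API, whose authoritative description is the OpenAPI specification shipped with Engine:
# Generic submit/poll/download pattern for a REST-based job broker.
# All URLs, routes, and field names below are placeholders, not the actual
# GAMS Engine API; consult the Engine OpenAPI specification for the real routes.
import time
import requests

BASE = 'https://engine.example.com/api'      # placeholder Engine URL
AUTH = ('jdoe', 'secret')                    # placeholder credentials

with open('trnsport.zip', 'rb') as f:        # model plus data, packed as an archive
    r = requests.post(BASE + '/jobs/', auth=AUTH,
                      params={'namespace': 'demo', 'run': 'trnsport.gms'},
                      files={'model_data': f})
r.raise_for_status()
token = r.json()['token']                    # placeholder job identifier

while not requests.get(BASE + '/jobs/' + token + '/status', auth=AUTH).json()['finished']:
    time.sleep(5)                            # the job waits in the queue, then runs on a worker

result = requests.get(BASE + '/jobs/' + token + '/result', auth=AUTH)
with open('results.zip', 'wb') as out:       # listing, GDX, and log files from the run
    out.write(result.content)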
Advantages of using Engine Several benefits make the centralized computing approach attractive for organizations: Computational Power Server Hardware has faster CPUs and more RAM than typical PCs and Laptops Schedule resource hungry optimization jobs and use personal computers for other tasks Run multiple big optimization jobs in parallel Administration Centralized license administration, no limits on number of users Allow people outside your organization to access your Engine Instance Ensure that everyone uses the same model version Control access with granular permissions granted to users, groups, and namespaces Less IT Cost Engine saves time and is ready to go in minutes No need to implement your own scheduling mechanism for GAMS jobs With GAMS Engine One, you can host your own server hardware With GAMS Engine SaaS, you don't even need to run a server. We make sure you have access to the right resources, any time. GAMS Engine Flavours You can get Engine in two different flavours. Engine One Engine One runs on your own hardware, either on premise or hosted in the cloud. Installation is done within a few minutes, and rolling updates keep everything current and secure. Engine One runs in conjunction with a GAMS machine based license, which has to be purchased separately. The number of GAMS workers can be set freely to any value during installation, based on the expected typical workload and the available hardware. Engine One is the perfect solution for organizations that require full control over their data. Learn more about Engine One and how to install it at the documentation center: /engine/\nEngine SaaS If you do not want to deal with the hassle of running and maintaining your own server, Engine SaaS is right for you. With Engine SaaS, we make sure you have access to the required resources at all times. We host Engine SaaS on the AWS cloud infrastructure. With practically limitless horizontal scaling you can start as many parallel jobs as required. Make sure to read this article to learn more about Engine SaaS and how to interact with the system. Please email us at sales@gams.com if you would like to discuss how GAMS Engine might fit into your organization, or to request a detailed quote or an evaluation copy. ","excerpt":"\u003csection\u003e\n \u003cdiv class=\"full-width\"\u003e\n \u003cdiv class=\"jumbotron jumbotron-fluid\"\u003e\n \u003cdiv class=\"container\"\u003e\n \u003ch1 class=\"display-4\"\u003eGAMS Engine \u003c/h1\u003e\n \n \u003cp class=\"lead\"\u003e\n GAMS Engine is a server software that allows you to run GAMS models on centralised compute resources, either on-premise or in the cloud. \n \u003c/p\u003e\n\n \u003chr\u003e\n \n \u003cp\u003e\n Engine is accessed via a gateway service (the \"broker\" in Engine terminology) which provides a \u003ca href=\"https://www.smashingmagazine.com/2018/01/understanding-using-rest-api/\"\u003eREST API\u003c/a\u003e.\n \n \u003c!-- \u003c/p\u003e\n \n \u003cp\u003e --\u003e\n \n The broker accepts jobs sent from a range of clients such as\n \u003ca href=\"/latest/docs/T_STUDIO.html\"\u003eGAMS Studio\u003c/a\u003e, \n \n \u003ca href=\"/miro/index.html\"\u003eMIRO Desktop\u003c/a\u003e, \n \n \u003ca href=\"/miro/server.html\"\u003eMIRO Server\u003c/a\u003e, or \n \n \u003ca href=\"/engine/clients.html#custom-clients\"\u003ecustom clients\u003c/a\u003e\n \n written in Python, Java, or other programming languages supported by the OpenAPI standard). 
It also provides a \n \n \u003ca href=\"/engine/clients.html#engine-ui\"\u003esimple web user interface\u003c/a\u003e, which allows submitting jobs and user administration.\n \u003c/p\u003e","ref":"/sales/engine_facts/","title":"Engine Facts"},{"body":" GAMS MIRO GAMS MIRO (Model Interface with Rapid Orchestration) is a solution that makes it easy to turn your GAMS models into interactive end user applications that you can distribute to your colleagues or host online. The user friendly interface allows you to interact with the underlying GAMS model, quickly create different scenarios, compare results and much more. MIRO's extensive data visualization capabilities provide you with the ability to create powerful charts, time series, maps, widgets, etc. with ease. You can create your first application within minutes and then develop it step by step. No programming knowledge is required, the focus is rather on providing a wide range of configuration options. If the configuration options are not sufficient for you, you can become creative yourself and implement your own ideas. Our example gallery will give you an idea of what's possible. Have a play! Go to gallery Why MIRO? GAMS MIRO's features open up entirely new opportunities to share and distribute your models: Graphics Render GAMS Symbols with configurable standard graph types (Bar Charts, Scatter Plots, Pie Charts, etc) Slice and dice data across different dimensions using the powerful Pivot tool Save and retrieve visualizations as views Scenario Handling Input and output data of individual model runs is stored in a database that comes with MIRO Tags allow categorization and easy retrieval of specific runs Hypercube jobs allow running multi-dimensional parameter sweeps Customizability MIRO is open source and based on R-Shiny. Functionality can be extended by writing custom R code, e.g. to create complex data dashboards Deployment Options Depending on your needs, you can deploy GAMS MIRO in different ways. MIRO Desktop A fully local installation is ideal for development work and single users. GAMS, solvers and MIRO all run on your local machine, no internet connection required. MIRO SaaS A hosted solution, accessible from any modern web browser, including mobile devices. GAMS manages all required hardware in highly scalable data centers, making this solution ideal for deployments with multiple end users, anywhere in the world﹡. ﹡MIRO Server can also be fully self-hosted. This will also require an installation of GAMS Engine as the compute backend. Email us at sales@gams.com if you would like to discuss how GAMS MIRO might fit into your organization, or to request a detailed quote or an evaluation copy. ","excerpt":"\u003csection\u003e\n \u003cdiv class=\"full-width\"\u003e\n \u003cdiv class=\"jumbotron jumbotron-fluid\"\u003e\n \u003cdiv class=\"container\"\u003e\n \u003ch1 class=\"display-4 mt-3\"\u003e\n GAMS MIRO \n \u003c!-- \u003cimg src=\"../miro_logo.png\" height=\"64px\"\u003e --\u003e\n \u003c/h1\u003e\n \n \n \u003cp class=\"lead\"\u003e\n GAMS MIRO \u003cem\u003e(Model Interface with Rapid Orchestration)\u003c/em\u003e is a solution that makes it easy to turn your GAMS models into interactive end user applications that you can distribute to your colleagues or host online. \n \u003c/p\u003e\n\n \u003chr\u003e\n \n \u003cp\u003e\n The user friendly interface allows you to interact with the underlying GAMS model, quickly create different scenarios, compare results and much more. 
MIRO's extensive data visualization capabilities provide you with the ability to create powerful charts, time series, maps, widgets, etc. with ease. \n \u003c/p\u003e","ref":"/sales/miro_facts/","title":"MIRO Facts"},{"body":"Introduction Kicker Manager Interactive is an indispensable tool to prove to your friends and colleagues that you have superior soccer knowledge/expertise. Since we at GAMS are also part of this battle, we decided to take our Kicker Manager game to the next level by doing what we do best! Optimize!\nFirst of all, the rules of the game: each participant has the task of forming a team from the active players of the Bundesliga (the German Soccer League). Good ratings (by the Kicker magazine), scoring goals or giving assists earn a player points. Negative points are awarded for bad ratings or red cards. The goal is for your team\u0026rsquo;s players to score more points than the opponent\u0026rsquo;s. As in the world of mathematical optimization, soccer managers are also bound by constraints. For example, your team must consist of 22 players, the number of players per position is predetermined, you can have a maximum of 4 players from the same Bundesliga team in your squad, and the cumulative market value of your team must not exceed 42.5 million euros.\nTo be able to fix bad player acquisitions, there is a transfer window in the winter break of the season. In this transfer period, a total of four players of the own team can be sold and replaced by new players.\nSounds like a task where an optimization model could support us? We thought so, too! Thus, we went to the drawing board and developed a model formulation for this Knapsack problem, quickly followed by a GAMS model and a MIRO app to interact with it. And here it is! Model formulation (partial) A common tactic is to build a squad of 11 or more (in case one of the core players is injured or substituted) expensive core players (\u0026quot;starters\u0026quot;) and fill it up with cheap ones (\u0026quot;substitutes\u0026quot;). The task of assembling a good team can be modeled as a Mixed Integer Program. The key decisions are whether a player $p$ is selected or not (modeled by a binary variable $X_p$) and if a selected player belongs to the so-called starters (modeled by a binary variable $S_p$). The difference between starters and substitutes is that the expected points of starters contribute to the objective with weight 1, while the expected points of the substitutes can be weighted with an individual factor $wsubst$ (usually much smaller than 1), depending on how much importance you give them:\n$$ \\max \\sum_p(pts_p \\cdot S_p) + \\sum_p(pts_p \\cdot (X_p - S_p)) \\cdot wsubst $$\nThe rules of the manager game, such as the number of players to be selected for each position (3 goalkeepers, 5 defenders, 8 midfielders and 6 strikers), can be formulated as constraints: $$ \\sum_{p \\in \\mathcal{P_{goal}}} X_p = 3,\\ \\ \\sum_{p \\in \\mathcal{P_{def}}} X_p = 6,\\ \\ \\sum_{p \\in \\mathcal{P_{mid}}} X_p = 8\\ \\ \\sum_{p \\in \\mathcal{P_{fwd}}} X_p = 5\\ $$ where $\\mathcal{P_{goal}}, \\mathcal{P_{def}}, \\mathcal{P_{mid}}, \\mathcal{P_{fwd}}$ are sets of all available goalkeepers, defenders, midfielders and forwards.\nAt this point, we skip on covering the full algebraic model in detail and refer to the source code for further details.\nPlanning transfers The current Bundesliga season is already underway. 
Even though you can still create a new team, you won\u0026rsquo;t be able to catch up with the managers who created a team before the season started. However, if you already have a team, you can exchange up to four players thanks to the transfer window that has just opened. So let\u0026rsquo;s try to pimp our teams by optimizing transfers!\nLet\u0026rsquo;s start with Robin\u0026rsquo;s team: unlike Fred and Lutz, who meticulously analyzed the strengths and weaknesses of each player over an extensive period of time before forming their teams, he went with his gut and quickly assembled a team of 11 players who he believed could make it big! While this approach brought him much success last season, he is lagging behind this one. Therefore, it is crucial for him to use the transfer window to his advantage. Still, he sticks to his approach of not spending much time on his decision. With the new tool at hand, he simply uses the player\u0026rsquo;s statistics from the first half of the season and lets the optimization model find the best transfers for him:\nFig. 1 Price (in million €) and Points/100,000€ of players bought/sold when using fully automated approach (Robin)\nFig. 2 Cumulative price (in million €) and Points/100,000€ over all players bought/sold\nNote that the total price of sold players is slightly higher than that of purchased players, but the points/100,000€ are much higher for purchased players than for those that are sold.\nFred, who is the author of the Football Manager Optimizer, follows a semi-automated/interactive approach. He uses the points players made in the first half of the season, but adjusts them based on his predictions. He fixes players he is especially confident in, computes multiple teams with similar objective function value, compares them and repeats the process until he is happy with what he sees.\nLutz currently leads the scoreboard with 821 points, compared to Fred with 645 and Robin with 619. He spends a lot of time analyzing players and teams, far more than the other two, and doesn\u0026rsquo;t believe optimization can help him with this highly emotional process. So he sticks with his manual approach and has come up with the following transfers:\nFig. 3 Price (in million €) and Points/100,000€ of players bought/sold when using semi-automated approach (Fred)\nFig. 4 Price (in million €) and Points/100,000€ of players bought/sold when using manual approach (Lutz)\nIt remains to be seen which approach is going to be the most successful!\nThe ultimate team Now that the first half of the season is over, wouldn\u0026rsquo;t it be interesting to know what would have been the best possible team in hindsight? Thanks to the Football Manager Optimizer this is super easy to calculate. We set the number of starters to 11, run the model and get the following team:\nFig. 6 Points of players in Football Manager Optimizer ultimate team\nTheoretically, there could be even better teams if we take into account that on certain match days one of our substitutes might have scored more points than one of our starters, but that will be pretty close to the global optimum.\nOur team would then have reached a total of 1,222 points. The current leader in the nationwide standings has 993 points. Not bad! Interestingly, only 5 players from the team of the current leader and the ultimate team of Football Manager Optimizer overlap!\nDo you also want Football Manager Optimizer to support you with your transfer decisions? 
Give it a try !\n","excerpt":"Are you looking for a tool to improve your soccer manager skills? The new MIRO app \u0026ldquo;Football Manager Optimizer\u0026rdquo; could be what you\u0026rsquo;ve been looking for!","ref":"/blog/2021/01/football-manager-optimizer/","title":"Football Manager Optimizer"},{"body":"","excerpt":"","ref":"/categories/knapsack/","title":"Knapsack"},{"body":"","excerpt":"","ref":"/authors/lkunz/","title":"Leonard Kunz"},{"body":" Area: Fashion Retail\nProblem class: MIP (Stock Redistribution)\nManaging Retail Stock Distribution at Premium Shoe Manufacturer Goertz Summary As a medium sized fashion producer and retailer, Goertz regularly faces the challenge of how to redistribute stock across 150 retail stores. A newly developed solution with a GAMS model at its core helps Goertz to intelligently redistribute stock multiple times during the sales season. With the new solution, Goertz has been able to increase stock availability and at the same time shave off an average of seven days of each redistribution cycle. Goertz technical staff was able to independently implement the model in less than two months.\nIntroduction Goertz is a traditional premium shoe manufacturer, founded in 1875 in the city of Hamburg in Northern Germany, where it is still headquartered today. In addition to designing and manufacturing shoes, Goertz also operates its own chain of 150 retail stores throughout Germany, with over 3000 employees. Goertz offers around 7000 different styles of shoes for sale. Like most fashion products, the produced styles change between seasons and from year to year. Given this fluctuating product portfolio, it makes no economic sense to hold large volumes of stock in a central warehouse, because the chances of not selling all sizes of a particular style of shoe by the end of a season are great. Instead, Goertz uses its 150 retail stores as a large, distributed warehouse, where the complete inventory is always on display.\nThe Problem Shoe styles are present in stores at the beginning of a season without any gaps in sizes. However, those gaps accumulate throughout the season, as more and more pairs of shoes are sold. Once more than two sizes of a style have been sold out in a particular store, the whole style of shoes is flagged as \u0026ldquo;incomplete\u0026rdquo; for that store in the Enterprise Resource Planning (ERP) system. Keeping the number of these incomplete styles as small as possible is both in the interest of the retailer as well as in the interest of the customers.\nGoertz have developed a complex redistribution scheme to achieve this goal of minimal gaps throughout the sales season. In this scheme, stores send the remaining sizes of an incomplete style back to the central distribution warehouse. After processing in the warehouse, individual sizes of shoes are then redistributed to stores with demand, prioritizing stores with a high sales probability throughout the remainder of the sales season. This scheme allows the company to react to the unforeseeable nature of the sales success of different shoes in different stores, but it is costly and slow. Transport, inventorization, and redistribution add expense, and the typical turnaround time for shoes undergoing this redistribution scheme is between 7﹘14 days, during which the affected shoes cannot be sold.\nFig. 1 Redistribution system for one shoe style involving a central warehouse. Remaining sizes of incomplete styles are sent from poorly performing stores (red) to the central warehouse. 
After sorting and storing, missing sizes are sent from the warehouse to stores performing well for the style (green). After the redistribution, poorly performing stores have no inventory of the style left, while high performance stores have filled up their gaps. Transport and handling overhead is high in this solution.\nThe challenge of managing stock levels across multiple locations in a more efficient way had Goertz look into mathematical optimization techniques. The aim was development of a new solution to the problem that would maintain or exceed their high standards, while at the same time minimizing cost and turnaround time.\nGiven the fact that the central warehouse was the point which introduced most of the cost and delays, the experts at Goertz developed an idea to cut out the warehouse and instead implement a direct store-to-store redistribution system. With 7000 different styles in 150 stores, the number of possible combinations for redistributing shoes between stores is extremely high. But with a well defined set of business constraints, a mixed integer problem could be established in GAMS to solve the problem:\nOptimization Objectives Maximize overall availability of complete size ranges Across all stores, the number of incomplete styles should be as small as possible after redistribution. Minimize the total transport cost The number of individual deliveries should be as small as possible. Prioritise high volume stores Depending on their location, some stores perform better than others for certain styles. The redistribution scheme prioritizes those stores which are more likely to sell particular styles during the remainder of the sales season. Business Constraints Avoid incomplete size ranges If an incomplete style is sent out from a store, then all sizes should be sent out to clear the shelf space. Avoid very small transports Only schedule a transport between two stores, if at least five pairs of shoes can be sent. Limit burden on the sending stores Since the redistribution has to be handled by store staff in addition to their other duties, each origin store must send to 10 different destination stores at most. Fig. 2 Store-to-store redistribution system. In this solution, involvement of the central warehouse is avoided. This saves transit and handling and reduces the typical turnaround by approximately 7 days. Implementation For selecting an optimization solution, a key consideration for Goertz was ease of integration into their existing IT infrastructure. Here, GAMS\u0026rsquo; flexibility and range of available interfaces is a great advantage. To integrate GAMS with as little friction as possible, a set of R routines was written, which pull data from the different company planning and warehousing databases for pre-processing. Processed data is then passed to the developed GAMS model via GDX files. The results from the GAMS run are then read back into R and after checking for plausibility fed into the ERP system. The ERP system in turn generates the individual transport instructions for each store.\nThe core of the solution is a mixed integer model formulated in GAMS. This model can be solved by CPLEX in approximately 90 minutes on standard workstation hardware (3 CPUs, 32 GB RAM) for the complete Goertz inventory.\nFig 3. Schematic representation of the optimization solution. The GAMS model is called by a set of custom R scripts which pull and sanitise data from the various company databases and then feed the data into the optimization. 
The results are again collected by R scripts and transferred to the ERP system, which in turn generates the individual transport instructions. Results The optimization results allowed Goertz to dramatically increase the number of fully available shoe styles across all stores: a redistribution of around 6% of the total stock between stores resulted in a 22% increase in fully available styles. Figure 4 shows the results for one style of shoe. Because of the low store-to-store shipping volumes, a standard courier service can be utilised for transportation.\nContrary to expectations, each store benefits from the scheme, even those which have to send out more shoes than they receive, because in the end even they have more complete styles than before. In summary, a successful solution was implemented by the team of Goertz experts in a short period of time. All project goals and objectives have been achieved, with the redistribution time reduced from 7-14 days to 2-5 days, thus increasing the opportunities for sales.\nFig 4. Effect of the direct store-to-store redistribution scheme on availability of one product style across all stores. Before the redistribution (left), a highly fragmented availability of the style across all stores is apparent. After redistribution, the number of stores with incomplete availability of the style has been reduced from 80 to 18. About Goertz Goertz is a traditional premium shoe manufacturer, founded in 1875 in the city of Hamburg in Northern Germany, where it is still headquartered today. In addition to designing and manufacturing shoes, Goertz also operates its own chain of 150 retail stores throughout Germany, with over 3000 employees.\nhttps://www.goertz-corporate.de ","excerpt":"As a medium sized fashion producer and retailer, Goertz regularly faces the challenge of how to redistribute stock across 150 retail stores. A newly developed solution with a GAMS model at its core helps Goertz to intelligently redistribute stock multiple times during the sales season. With the new solution, Goertz has been able to increase stock availability and at the same time shave off an average of seven days of each redistribution cycle.","ref":"/stories/goertz/","title":"Managing Retail Stock Distribution at Premium Shoe Manufacturer Goertz"},{"body":"","excerpt":"","ref":"/categories/issue/","title":"Issue"},{"body":"","excerpt":"","ref":"/categories/neos/","title":"NEOS"},{"body":"The Problem On December 8th, the group operating the free NEOS service announced on Twitter that, from January this year, users are required to provide a valid email address for all job submissions to NEOS.\nhttps://neos-guide.org/content/FAQ#email Unfortunately, this change affects GAMS users who submit NEOS jobs via Studio. We are working on resolving this issue with the coming release of GAMS 34.\nTemporary fix As a workaround you can submit jobs via Kestrel instead of directly via GAMS Studio, and provide an option file kestrel.opt. Provide your email in this file with the keyword email, as in the example snippet below:
$echo email jdoe@jeangreyhigh.edu > kestrel.opt
$echo kestrel_solver cplex >> kestrel.opt
mymodel.optfile = 1;
option solver=kestrel;
solve mymodel using lp minimizing obj;
This solution might not work in all circumstances though, because with Kestrel the GAMS execution phase happens locally, so you might run into license-related limitations. For more information you can read this blog article. 
\n","excerpt":"There is a temporary issue with submitting jobs to the NEOS server. Please read this if you are affected.","ref":"/blog/2021/01/neos-now-requires-a-valid-email-address/","title":"NEOS now requires a valid email address"},{"body":" We develop rock solid, scalable solutions to help you solve difficult optimization problems. Our products are successfully used in a wide range of businesses, covering areas such as Energy Production, Manufacturing, Logistics, Engineering, and Economics. GAMS is also used in research and education at many universities worldwide. We closely collaborate with students, teachers, and researchers interested in mathematical modeling and optimization. Our Products For Model Development Express your business optimization problems efficiently and solve them with best-in-class solvers. Read more For Model Deployment Deploy optimization solutions for your end users as interactive web applications.\nRead more For Centralized Hosting Host optimization solutions in your cloud of choice or on-premise.\nRead more GAMS Rooted in a number of proven design principles, the General Algebraic Modeling System (GAMS) is an evolved and mature system that gives you access to cutting-edge modeling and optimization technology, and world-class technical support handled by our Ph.D.-level optimization and modeling experts. Modelling Language GAMS is the ideal choice for domain experts who want a powerful, yet simple modeling language Integrated Solvers GAMS offers a uniform interface to all major commercial and academic solvers (30+ integrated). APIs GAMS applications can be connected to and embedded into other applications with our APIs for Python, C++, .NET and more. Used in thousands of businesses, governments, and research institutions worldwide, GAMS helps make smarter and faster decisions. Available on all major platforms. Try now! We have lots of examples from many business areas in our model library. Here are some simple ones to give you a first idea. Real-world models can be a lot more complex!\nTransportation Planning Find the optimal way to distribute goods between different sites\nRoute Planning Find the shortest route through multiple cities\nProduction Planning Decide how much to produce based on demand and prices for production and stocking Scheduling Minimize total processing time for sequential assembly steps\nBlending Calculate the optimal mixture of different ingredients to produce a product of desired quality\nCutting Stock Minimize waste when cutting stock into smaller pieces\nMIRO GAMS MIRO is a deployment environment which allows you to easily turn your GAMS models into fully interactive applications. MIRO is designed for people looking for an easy and automated way to make their GAMS models available to end users. Extensive visualization options support you in making decisions based on optimization. See examples Try it! Engine GAMS Engine is our new solution for running GAMS jobs in cloud environments. It provides a REST API that you can connect to either via GAMS MIRO Desktop, via the GAMS Engine UI, via GAMS Studio or via any of the clients supported by OpenAPI (Python, Java, JavaScript, C++, ...). GAMS Engine automatically schedules your jobs and assigns them to an available GAMS worker to solve them. It comes with a powerful user management system that allows you to restrict the activities of your users according to your organizational hierarchy. Read more Try it! 
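As a small illustration of the Python API mentioned above, here is a minimal sketch that runs the trnsport example from the GAMS model library and prints the optimal shipments (the same pattern is available for C++, .NET, Java, and MATLAB):
# Minimal sketch: solve the trnsport model from the GAMS model library via the
# object-oriented Python API and print the optimal shipment levels.
from gams import GamsWorkspace

ws = GamsWorkspace()                         # uses a temporary working directory
job = ws.add_job_from_gams_lib('trnsport')   # fetch the example from the model library
job.run()                                    # compile and solve

for rec in job.out_db['x']:                  # variable x(i,j): shipment quantities
    print('x(%s,%s): level=%g marginal=%g' % (rec.keys[0], rec.keys[1], rec.level, rec.marginal))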
","excerpt":"\u003csection\u003e\n\n\u003cdiv class=\"text-center\"\u003e\n\n\u003cdiv class=\"full-width \"\u003e\n \u003cdiv class=\"jumbotron jumbotron-fluid\"\u003e\n \u003cdiv class=\"container text-center\"\u003e\n\n \u003ch1 class=\"display-4\"\u003eWe develop rock solid, scalable solutions to help you solve difficult optimization problems. \u003c/h1\u003e\n \u003cp class=\"lead\"\u003eOur products are successfully used in a wide range of businesses, covering areas such as Energy Production, Manufacturing, Logistics, Engineering, or Economics. \u003c/p\u003e\n\n \u003cp class=\"lead\"\u003e\n GAMS is also used in research and education at many universities worldwide. We closely collaborate with students,\n teachers, and researchers interested in mathematical modeling and optimization. \n \u003c/p\u003e","ref":"/discover_gams/","title":"Discover GAMS"},{"body":"In November 2020, we hosted a virtual workshop at the Informs Virtual Conference. The Workshop was split up into three parts about GAMS, GAMS MIRO and GAMS Engine.\nIn this first part of the workshop Stefan explains the benefits of GAMS and demonstrates how to implement and solve a simple optimization problem.\nSpeaker Stefan Mann, Ph.D\nTechnical Sales Engineer, GAMS Software GmbH\nStefan holds a PhD degree in Biochemistry, and before joining GAMS he worked as a research scientist in the field of cardiac electrophysiology. During that time he made extensive use of optimization techniques while working with mathematical models of the electrical activity of heart cells.\nAfter a couple of years running the electrophysiology lab of a small biotech company, he joined GAMS. With his passion for science and technology, he now manages the marketing activities at GAMS and participates in customer projects.\n","excerpt":"\u003cp\u003eIn November 2020, we hosted a virtual workshop at the Informs Virtual Conference.\nThe Workshop was split up into three parts about GAMS, GAMS MIRO and GAMS Engine.\u003c/p\u003e\n\u003cp\u003eIn this first part of the workshop Stefan explains the benefits of GAMS and demonstrates how to implement and solve a simple optimization problem.\u003c/p\u003e","ref":"/webinars/informs-workshop_part1_gams/","title":"Informs 2020 Workshop - Part 1: Introduction to GAMS"},{"body":"In November 2020, we hosted a virtual workshop at the Informs Virtual Conference. The Workshop was split up into three parts about GAMS, GAMS MIRO and GAMS Engine.\nIn this second part of the workshop Freddy explains how to turn a model into a GAMS MIRO app and how to deploy it.\nSpeaker Frederik Proske\nOperations Research Analyst, GAMS Software GmbH\nFrederik Proske holds a B.Sc. and M.Sc. in Engineering and Business Administration from the University of Hannover, where he also taught students concepts of Operations Research for several years.\nIn 2016 he joined GAMS as Operations Research Analyst. In this role, he is responsible for software development and project management in the area of mathematical programming. His core competencies are projects in the field of operations research - mostly scheduling problems - that provide customers with powerful optimization software.\nSince 2018, he has been the lead engineer for GAMS MIRO, a tool that allows you to automate the use of your GAMS models. 
He regularly gives lectures at universities and international conferences.\n","excerpt":"\u003cp\u003eIn November 2020, we hosted a virtual workshop at the Informs Virtual Conference.\nThe Workshop was split up into three parts about GAMS, GAMS MIRO and GAMS Engine.\u003c/p\u003e\n\u003cp\u003eIn this second part of the workshop Freddy explains how to turn a model into a \u003ca href=\"http://gams.com/miro\" target=\"_blank\"\u003eGAMS MIRO\u003c/a\u003e\n app and how to deploy it.\u003c/p\u003e","ref":"/webinars/informs-workshop_part2_miro/","title":"Informs 2020 Workshop - Part 2: Model Deployment with MIRO"},{"body":"In November 2020, we hosted a virtual workshop at the Informs Virtual Conference. The Workshop was split up into three parts about GAMS, GAMS MIRO and GAMS Engine.\nIn this third part of the workshop Freddy explains what GAMS Engine is, and demonstrates the various ways to interact with Engine, especially how to use the Python API to solve a typical two part energy model with an \u0026ldquo;Invest\u0026rdquo; and a \u0026ldquo;Dispatch\u0026rdquo; component.\nApologies for the bad audio at the beginning. It gets better after a few seconds!\nSpeaker Frederik Proske\nOperations Research Analyst, GAMS Software GmbH\nFrederik Proske holds a B.Sc. and M.Sc. in Engineering and Business Administration from the University of Hannover, where he also taught students concepts of Operations Research for several years.\nIn 2016 he joined GAMS as Operations Research Analyst. In this role, he is responsible for software development and project management in the area of mathematical programming. His core competencies are projects in the field of operations research - mostly scheduling problems - that provide customers with powerful optimization software.\nSince 2018, he has been the lead engineer for GAMS MIRO, a tool that allows you to automate the use of your GAMS models. He regularly gives lectures at universities and international conferences.\n","excerpt":"\u003cp\u003eIn November 2020, we hosted a virtual workshop at the Informs Virtual Conference.\nThe Workshop was split up into three parts about GAMS, GAMS MIRO and GAMS Engine.\u003c/p\u003e\n\u003cp\u003eIn this third part of the workshop Freddy explains what \u003ca href=\"http://gams.com/engine\" target=\"_blank\"\u003eGAMS Engine\u003c/a\u003e\n is, and demonstrates the various ways to interact with Engine, especially how to use the Python API to solve a typical two part energy model with an \u0026ldquo;Invest\u0026rdquo; and a \u0026ldquo;Dispatch\u0026rdquo; component.\u003c/p\u003e","ref":"/webinars/informs-workshop_part3_engine/","title":"Informs 2020 Workshop - Part 3: solving models in the cloud with GAMS Engine"},{"body":"With GAMS MIRO 1.1 the MIRO Pivot renderer was introduced. With the new update to MIRO 1.2 this powerful tool can now also be used to enter and edit input data for your model. By double-clicking on a cell you can change the current value. When you add a new row to the table, you can either select existing UELs from the drop-down list or enter a new value.\nFurther, you can use all the features that were introduced with MIRO 1.1 such as exploring your data by aggregating over variables and calculating descriptive measures such as the mean, median etc. The MIRO Pivot renderer also allows data to be displayed in the form of (stacked) bar charts, line charts and radar charts. But that’s not all: At any time, you can save a configuration of your Pivot Table as a so called view.\nWhat is a view? 
The technical answer is that a view is a configuration of a renderer that can be exported as a JSON file. To get a more intuitive understanding of what a view is, the following GIFs give an example to guide you through the most important aspects. By dragging and dropping domains you can filter and aggregate input and output data. In the video below you can see how a few actions draw a bar chart of the different commodities by year. By clicking the Add View button we can attach a view to the current scenario. This allows us to return to a configuration later in our analysis.\nYou can import, download, and remove views by navigating to the Edit Metadata -\u0026gt; Views dialog, which can be found under the Scenario menu in the upper right corner. When you click the Export button, the selected views are saved as a JSON file in your download folder, which you can use to share them with others.\nWe have extended the TIMES MIRO app introduced in our previous blog post by these and other new features introduced with MIRO 1.2. Give it a try !\n","excerpt":"The MIRO Pivot renderer was introduced with MIRO 1.1. The TIMES MIRO app used this tool extensively. With GAMS MIRO 1.2 we have further improved it!","ref":"/blog/2020/11/miro-1.2-improving-times-miro/","title":"MIRO 1.2 - Improving TIMES MIRO"},{"body":"Our demo licensing scheme allows you to try GAMS and the solvers we package in the distribution for free, but there are certain limits with regards to model sizes. GAMS itself enforces the following limits:\n2000 variables and 2000 constraints for linear (LP, RMIP, and MIP) models with a demo license 1000 variables and 1000 constraints for all other model types with a demo license 5000 variables and 5000 constraints for linear (LP, RMIP, and MIP) models with a community license 2500 variables and 2500 constraints for all other model types with a community license In addition to the GAMS model size limits, the solvers might impose stricter limits when running with a demo or community license. Detailed information about those limitations can be found in our documentation .\nFor those who would like to solve larger models without paying any license fees, there is a way now: NEOS server. In a nutshell, NEOS is a free online service for solving numerical optimization problems, hosted by the Wisconsin Institute for Discovery at the University of Wisconsin in Madison. NEOS integrates a range of premium solvers, such as CPLEX, MOSEK, KNITRO, XPRESS, and GUROBI.\nThe following caveats apply:\nYour optimization job must not contain confidential information or trade secrets (see the NEOS Terms of Use ) You are only allowed to use NEOS for \u0026ldquo;academic, non-commerical research purposes\u0026rdquo;, at least when using the commercial solvers. Your job will be queued, and you might have to wait a little while before getting optimization results; the wait is typically only on the order of minutes. If you can live with these restrictions, you will find that running NEOS jobs with GAMS STUDIO is easy! The following works with Studio in GAMS 32.2.0 or later.\nLet\u0026rsquo;s use the ALUM model from the GAMS model library as an example. 
GAMS produces the following model statistics for this MIP:\nGAMS 32.2.0 rc62c018 Released Aug 26, 2020 WEX-WEI x86 64bit/MS Windows - 08/28/20 11:14:30 Page 22 World Aluminum Model (ALUM,SEQ=31) Model Statistics SOLVE gam Using MIP From line 1493 MODEL STATISTICS BLOCKS OF EQUATIONS 24 SINGLE EQUATIONS 928 BLOCKS OF VARIABLES 23 SINGLE VARIABLES 3,475 NON ZERO ELEMENTS 12,317 DISCRETE VARIABLES 172 If you try to solve this locally with a demo license, you will get an error message:\nGAMS 32.2.0 Copyright (C) 1987-2020 GAMS Development. All rights reserved Licensee: GAMS Demo license for Stefan Mann G200605|0002CO-GEN GAMS Software GmbH, Germany DL011603 c:\\gams\\licenses\\demo.lic smann@gams.com, Stefan Mann Demo license for demonstration and instructional purposes only --- Starting compilation --- alum.gms(1632) 3 Mb --- Starting execution: elapsed 0:00:00.012[LST:1704] --- alum.gms(1491) 5 Mb --- Generating MIP model gam[LST:10519] --- alum.gms(1495) 6 Mb --- 928 rows 3,475 columns 12,317 non-zeroes --- 172 discrete-columns *** The model exceeds the demo license limits for linear models of more than 2000 rows or columns *** Status: Terminated due to a licensing error *** License file: c:\\gams\\licenses\\demo.lic *** Inspect listing file for more information --- Job alum.gms Stop 08/28/20 12:16:44 elapsed 0:00:00.149 Don\u0026rsquo;t fret, you can solve this model on NEOS. To do so, first choose a solver (NEOS defaults to BDMLP , if no solver option is given). The option has to be added to the model file; options supplied in the GAMS parameter editor in STUDIO are currently ignored when submitting NEOS jobs:\noption mip=cplex Then, in Studio, select GAMS \u0026gt; Run NEOS - Short from the menu. This will automatically establish a connection with NEOS, add your model to the queue, and collect the results once they are ready. The whole process is totally seamless, and requires no configuration. If you expect your job to run for longer than 5 minutes, select Run NEOS - long. This will put your job into a different queue, where jobs do not get killed automatically after 5 minutes, but you also will not get any intermediate log output during the NEOS run.\nThe solve summary shows that CPLEX was indeed used to solve our problem on NEOS:\nS O L V E S U M M A R Y MODEL gam OBJECTIVE phi4 TYPE MIP DIRECTION MINIMIZE SOLVER CPLEX FROM LINE 1496 **** SOLVER STATUS 1 Normal Completion **** MODEL STATUS 8 Integer Solution **** OBJECTIVE VALUE 49563.9048 RESOURCE USAGE, LIMIT 0.220 10000000000.000 ITERATION COUNT, LIMIT 1686 2147483647 IBM ILOG CPLEX 32.2.0 rc62c018 Released Aug 26, 2020 LEG x86 64bit/Linux --- GAMS/Cplex licensed for continuous and discrete problems. Cplex 12.10.0.0 Space for names approximately 0.13 Mb Use option \u0026#39;names no\u0026#39; to turn use of names off MIP status(102): integer optimal, tolerance Cplex Time: 0.20sec (det. 158.45 ticks) Fixing integer variables, and solving final LP... Fixed MIP status(1): optimal Cplex Time: 0.01sec (det. 13.31 ticks) Solution satisfies tolerances. 
MIP Solution: 49563.904825 (1116 iterations, 42 nodes) Final Solve: 49563.904825 (570 iterations) Best possible: 49559.722453 Absolute gap: 4.182372 Relative gap: 0.000084 That is all you need to solve a model on NEOS, using any of the available commercial solvers.\nBrief Technical Background A GAMS job is always split into different phases:\nDuring the compilation phase GAMS analyses the model code and generates a restart file (the equivalent of an \u0026ldquo;object file\u0026rdquo; in a language such as C++), which contains lower level instructions. Importantly, any dollar control options are executed during this phase. During the execution phase, the restart file generated before is read in and executed. When you run a GAMS job on NEOS with Studio, the compilation phase happens locally on your own computer. The resulting restart file is then copied onto NEOS Server, and executed there in a temporary directory. All files produced in that directory during the GAMS run (.lst, .log, put files\u0026hellip;) are collected and transferred back to your local machine.\nNote how this process is different from using Kestrel, which also allows you to solve GAMS jobs on NEOS server. With Kestrel, both the GAMS compilation and execution phases happen on your local machine, and only the (potentially large) solver work file is copied to NEOS and fed to the solver. Since the GAMS execution phase happens locally, the size restrictions of demo installations apply here.\nTechnical Limitations For security reasons, NEOS jobs are executed under execmode=3 (more info in our documentation ), which means you cannot use any executes, embedded code, or put statements above the working directory You cannot upload supplementary files. As a consequence, you cannot use something like execute_load xxx.gdx. Also, if you need a solver option file, it has to be created on the fly with put. Consider this example for generating a CPLEX option file: file fopt /cplex.opt/; putclose fopt \u0026#39;startalg 4\u0026#39; / \u0026#39;mipemphasis 2\u0026#39;; ","excerpt":"GAMS and the packaged solvers impose restrictions with regards to the problem sizes that can be solved with a free demo or community license. For academic users, there is a free alternative to run large models: \u003cstrong\u003eNEOS server\u003c/strong\u003e.","ref":"/blog/2020/11/running-large-models-on-neos-for-free/","title":"Running large models on NEOS for free"},{"body":" Area: scheduling Problem class: MIP, MINLP\nScheduling at the United States Military Academy At the United States Military Academy (USMA) in West Point, the academic program is uniquely designed around the requirement that all students must graduate in four years, a total of eight academic semesters or terms (8TAP = eight term academic program). Adding to the unique character of USMA is the fact that each student\u0026rsquo;s daily activities are a carefully regimented balance of academic, military, and physical requirements. The ~4,500 enrolled cadets compile their individual 8TAPs which results in challenging scheduling problems. Hence, USMA relies on a highly customized decision support system, based on GAMS, to address different scheduling tasks like\nregular term scheduling term-end exam scheduling and room scheduling. All of these scheduling tasks deal with multiple (often competing) objectives that need to be optimized under consideration of numerous business rules, which can be prioritized by the operator as needed. 
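A common way to let an operator impose such priorities (described here only as a generic technique, not as the specific USMA formulation) is hierarchical, or lexicographic, optimization: the objectives $f_1, f_2, \\dots$ are optimized one after another in priority order, and each later stage may worsen an earlier optimum $f_j^{*}$ only by an operator-chosen tolerance $\\varepsilon_j$: $$ \\min_{x \\in X} \\; f_k(x) \\quad \\text{s.t.} \\quad f_j(x) \\le f_j^{*} + \\varepsilon_j, \\quad j = 1, \\dots, k-1 $$ Setting all $\\varepsilon_j$ to zero enforces strict priorities, while small positive values trade a little of a higher-priority objective for improvements further down the list.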
Given the uniqueness of the scheduling tasks at USMA, no off-the-shelf software is able to provide satisfactory results in reasonable time and is at the same time flexible enough to allow reacting to changed requirements due to unforeseen events like the Covid-19 pandemic.\nWith tailor-made scheduling applications based on GAMS, USMA has a powerful decision support system in place. The scheduling tools were developed by GAMS application specialists in close collaboration with the USMA registrar office and USMA’s Software Engineering Branch (SEB).\nIt is hard to imagine academic term scheduling, term-end exam scheduling and room scheduling here at the United States Military Academy without the robust tools and systems that GAMS has provided. When we reached out to GAMS (in late 2015) to discuss review and enhancement of the academic term and term-end exam scheduling processes that had been in place since 2000, they suggested that we consider taking a data-driven approach. This has allowed us to respond to required (and desired) changes in a timely and efficient manner. When we added room scheduling in 2018, the same approach was taken. This allowed us to schedule rooms in the Covid-19 environment by simply changing room capacities and moving out. Collaborating with GAMS is always a pleasure and the support is top-notch. Five stars!!!\nTechnical Implementation The USMA scheduling problems can be modelled as Mathematical Programs, e.g. Mixed Integers Programs (MIP) or Mixed Integer Nonlinear Programs (MINLP). Even with state-of-the-art solver technology the resulting models are often too complex to be solved in a monolithic approach. Hence, custom solution approaches that combine\ndecomposition methods heuristics multi-objective optimization and state-of-the-art MIP and MINLP solver technology have been implemented. Schematic view of the term scheduling algorithm Crucially, the implemented solutions are designed to support the scheduling workflow at USMA in the best possible way. While from a mathematical perspective, it is desirable to have a well defined problem and well defined data and then run the scheduler once, in practice scheduling is a multi week process that involves many interactive “negotiations” between the registrar, departments, and instructors concerning the course offering details like times, rooms, etc. Hence, in addition to “just” computing optimal schedules, the scheduling engines also support\nefficient computation of multiple alternative schedules such that the registrar can choose from a set of schedules fixing of partial schedules and a mechanism to control the trade-off between runtime and solution quality. The scheduling applications come with an interface designed to seamlessly integrate them into USMA’s IT Infrastructure.\nHow do the Scheduling applications support quick reactions to unforeseen events? An important aspect during the development of the USMA scheduling applications was to provide an optimal solution for today’s scheduling problems that is well prepared for the problems of tomorrow.\nUnforeseen events like the Covid-19 pandemic pose new challenges for the room scheduling where suddenly social distancing has to be considered. Rooms that had sufficient capacity for courses with a certain enrollment in the past are no longer suitable because they are too small to allow the cadets to keep the required distance. 
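To picture why this is mainly a data change rather than a model change, consider a generic seat-capacity restriction (illustrative notation only, not the actual USMA formulation), where $Y_{c,r,t}$ indicates that course section $c$ meets in room $r$ at time $t$, $enroll_c$ is its enrollment, and $cap_r$ the usable capacity of the room: $$ enroll_c \\cdot Y_{c,r,t} \\le cap_r \\qquad \\forall c, r, t $$ Social distancing only changes the data: each $cap_r$ is replaced by its distancing-adjusted value, and the constraint itself stays exactly as it is.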
Thanks to the flexibility of the room scheduling engine, such a new requirement was easy to add to the underlying models and made it possible to quickly adjust the room schedule to the new situation.\nSocial distancing in class. Source: https://www.facebook.com/WestPointUSMA/photos/a.287346011231/10157312805956232/?type=3 A more enjoyable example of the flexibility of the scheduling applications was the Armed Forces Bowl 2017. USMA\u0026rsquo;s football team, the Army Black Knights, qualified for the Bowl game, which happened to take place during the already completely scheduled term-end exam week. On short notice, 566 individual exams of 141 affected cadets had to be rescheduled without changing any of the exams of unaffected cadets. Thanks to the flexibility of the term-end exam scheduler, this task was accomplished with a minimum number of extra exam dates.\nSummary The GAMS USMA Scheduling applications…\nprovide tailor-made software solutions to a set of challenging scheduling problems consider multiple objectives and allow the operator to prioritize them as needed are seamlessly integrated into USMA’s IT infrastructure provide the flexibility to react to unforeseen events. About West Point The U.S. Military Academy at West Point\u0026rsquo;s mission is \u0026ldquo;to educate, train, and inspire the Corps of Cadets so that each graduate is a commissioned leader of character committed to the values of Duty, Honor, Country and prepared for a career of professional excellence and service to the Nation as an officer in the United States Army.\u0026rdquo;\nhttps://www.westpoint.edu/ ","excerpt":"At the United States Military Academy in West Point, all students must graduate in four years. Each student\u0026rsquo;s daily activities are a carefully regimented balance of academic, military, and physical requirements. The ~4,500 enrolled cadets compile their individual academic programs, which results in challenging scheduling problems. USMA relies on a highly customized decision support system, based on GAMS.","ref":"/stories/usma/","title":"USMA"},{"body":"Each year, GAMS donates EUR 500 to each of the three winners of the prize for excellent Diploma and Master Theses, organized by the German Society for Operations Research. This year\u0026rsquo;s awardees are:\nPia Ammann: An Adaptive Large Neighbourhood Search for a Real-World Multi-Attribute Vehicle Routing and Scheduling Problem\n(TU München, Supervisor: Prof. Dr. Rainer Kolisch) Erik Diessel: Risk Aware Flow Optimization and Application to Logistics Networks\n(TU Kaiserslautern, Supervisor: Prof. Dr. Sven Krumke) Sarah Roth: SAT Heuristics for the Periodic Event Scheduling Problem\n(FU Berlin, Supervisor: Prof. Dr. Ralf Borndörfer) Congratulations to all three of you!\nFor those interested in participating in next year\u0026rsquo;s round, all information on how to sign up is listed on the GOR homepage (in German).\n","excerpt":"Each year, GAMS donates EUR 500 to each of the three winners of the prize for excellent Diploma and Master Theses. Congratulations to this year\u0026rsquo;s winners!","ref":"/blog/2020/09/the-2020-gor-award-for-diploma-and-master-theses/","title":"The 2020 GOR Award for Diploma and Master Theses"},{"body":"For quite a while we have been providing the option to play around with GAMS in Jupyter notebooks in a hosted environment at https://jupyterhub.gams.com/ . That place is still a good resource for getting started and provides a few examples of how to use GAMS in this environment. 
[Note: jupyterhub.gams.com has been shut down in March 2023]\nWe have received a lot of requests to make this available for local installation, and we listened. Without further ado, here is how to get started:\nInstallation Windows Install a Python environment, if you do not have one already. We will use miniconda, which you can download at https://docs.conda.io/en/latest/miniconda.html . Open the Anaconda Prompt and run conda create -n gmsjupyter python=3.8. This will create a new environment for you and install needed dependencies. Activate the new environment with conda activate gmsjupyter. Install jupyterlab with conda install jupyterlab Install pandas: conda install pandas Install tabulate: conda install tabulate Now it is time to integrate GAMS into your new Python environment:\ncd c:\\GAMS\\32\\apifiles\\Python\\api_38 python setup.py build -b %TEMP%\\build install Create a directory for your Jupyter notebooks (I will use C:\\Users\\manns\\Documents\\GAMS\\Jupyter here), and then cd into the directory: cd C:\\Users\\manns\\Documents\\GAMS\\Jupyter.\nNow you can start Jupyter with jupyter notebook. Once the notebook is loaded in your web browser, create a new Python notebook with New \u0026gt; Python3.\nLinux We assume you have an up-to-date Python installation.\nInstall pip if you have not done so before: sudo apt-get install -y python3-pip Install the venv module, which will enable us to create an isolated GAMS environment: sudo apt-get install -y python3-venv Create a directory to store your Python environments in: mkdir environments cd environments Create a dedicated GAMS environment: python3 -m venv gmsjupyter Activate the environment: source gmsjupyter/bin/activate Install Jupyterlab: pip3 install jupyterlab Install pandas: pip3 install pandas Install tabulate: pip3 install tabulate Now it is time to integrate GAMS into your new Python environment:\ncd ~/gams/gams32.1_linux_x64_64_sfx/apifiles/Python/api_38 python setup.py build -b /tmp/build install Create a directory for your Jupyter notebooks (I will use ~/gams/jupyter here), and then cd into the directory.\nNow you can start Jupyter with jupyter notebook. Once the notebook is loaded in your web browser, create a new Python notebook with New \u0026gt; Python3. The examples below have been done under Windows, but should run the same under Linux.\nSetting up your Jupyter notebook for GAMS The GAMS Jupyter Notebook builds on top of the Python 3 kernel. So by default the notebook cells are Python cells. Cells can be turned into GAMS cells, i.e. cells with GAMS syntax, using the Jupyter magic facility (first line in a cell is %%gams). GAMS magic commands enable GAMS support in Python Jupyter notebooks. Besides running GAMS code, it is possible to transfer data between GAMS and Python. In order to enable the GAMS magic commands, it is required to load the extension gams_magic:\n%load_ext gams_magic There are a few other useful commands in connection with running GAMS in a Jupyter notebook. Some transformation functions for pandas dataframes useful for exchange with GAMS have been collected in the notebook DataTransform.ipynb. To use these, download DataTransform.ipynb and copy the file into your working directory.\nThe next cell will execute that notebook and make such data transformation functions, e.g. gt_from2dim (see below), available in this notebook. 
%%capture captures the output from the execution of the notebook and does not clutter your output.\n%%capture %run DataTransform.ipynb One output from a cell is sometimes not enough, e.g. if you want to display a couple of tables. The display function allows you to do this but needs to be imported. As an example, we display a Python list:\nfrom IPython.display import display display([1,2,3]) [1, 2, 3] Running GAMS code Running GAMS code can be done by using either %gams (line magic) or %%gams (cell magic). While %gams can be used for running a single line of GAMS code, %%gams makes the whole cell a GAMS cell.\n%gams set i; %%gams set j; parameter p(i,j); parameter p2(i,j); The GAMS compiler and execution system has been adjusted so one can run a GAMS cell multiple times, even if it contains a declaration or an equation definition, which is normally not possible in the GAMS system. The next two cells therefore execute without a problem, which mimics the execution, modification, and reexecution of a cell.\n%%gams set i / peter,paul,mary /, j / A,B,C /; parameter p2(i,j) / set.i.set.j 1 /; %%gams set i / i1*i5 /, j /j1*j5 /; parameter p2(i,j) / set.i.set.j 1 /; You won\u0026rsquo;t see any output from a GAMS cell (unless there is a solve executed in the cell, see below). All output goes to the log and lst file. If you really need to see this, you can use the magic commands %gams_log and %gams_lst to display the content of the log and listing file of the most recent GAMS execution. The next cell displays the content of the listing file of the last run GAMS cell or line magic. The -e option displays only the section of the listing file associated with the execution:\n%gams display p2; %gams_lst -e E x e c u t i o n ---- 20 PARAMETER p2 j1 j2 j3 j4 j5 i1 1.000 1.000 1.000 1.000 1.000 i2 1.000 1.000 1.000 1.000 1.000 i3 1.000 1.000 1.000 1.000 1.000 i4 1.000 1.000 1.000 1.000 1.000 i5 1.000 1.000 1.000 1.000 1.000 When things go wrong There is a limit to the execution, modification, and reexecution of GAMS cells. If the type or the dimensionality of a symbol changes, you will need to execute the notebook from scratch or do a controlled reset of the entire GAMS database via %gams_reset. For example, since we declared parameter p2 already over (i,j) we cannot change our mind and redeclare p2 as parameter p2(i,i,j):\nThis will give you a compilation error and an exception in the cell execution:\n%gams parameter p2(i,i,j); --------------------------------------------------------------------------- GamsExceptionExecution Traceback (most recent call last) \u0026lt;ipython-input-9-49112f96e0c3\u0026gt; in \u0026lt;module\u0026gt; ----\u0026gt; 1 get_ipython().run_line_magic('gams', 'parameter p2(i,i,j);') ~\\miniconda3\\envs\\gmsjupyter\\lib\\site-packages\\IPython\\core\\interactiveshell.py in run_line_magic(self, magic_name, line, _stack_depth) 2324 kwargs['local_ns'] = sys._getframe(stack_depth).f_locals 2325 with self.builtin_trap: -\u0026gt; 2326 result = fn(*args, **kwargs) 2327 return result 2328 \u0026lt;decorator-gen-133\u0026gt; in gams(self, line, cell) ~\\miniconda3\\envs\\gmsjupyter\\lib\\site-packages\\IPython\\core\\magic.py in \u0026lt;lambda\u0026gt;(f, *a, **k) 185 # but it's overkill for just that one bit of state. 
186 def magic_deco(arg): --\u0026gt; 187 call = lambda f, *a, **k: f(*a, **k) 188 189 if callable(arg): ~\\miniconda3\\envs\\gmsjupyter\\lib\\site-packages\\gams_magic\\gams_magic.py in gams(self, line, cell) 451 opt.traceopt = 3 452 with open(jobName + \u0026quot;.log\u0026quot;, \u0026quot;w\u0026quot;) as logFile: --\u0026gt; 453 self.job.run(opt, checkpoint=self.cp, output=logFile) 454 solveSummary = self.parseTraceFile(trcFilePath) 455 ~\\miniconda3\\envs\\gmsjupyter\\lib\\site-packages\\gams\\execution.py in run(self, gams_options, checkpoint, output, create_out_db, databases) 905 raise gams.workspace.GamsExceptionExecution(\u0026quot;GAMS return code not 0 (\u0026quot; + str(exitcode) + \u0026quot;), set the debug flag of the GamsWorkspace constructor to DebugLevel.KeepFiles or higher or define a working_directory to receive a listing file with more details\u0026quot;, exitcode) 906 else: --\u0026gt; 907 raise gams.workspace.GamsExceptionExecution(\u0026quot;GAMS return code not 0 (\u0026quot; + str(exitcode) + \u0026quot;), check \u0026quot; + self._workspace._working_directory + os.path.sep + tmp_opt.output + \u0026quot; for more details\u0026quot;, exitcode) 908 self._p = None 909 GamsExceptionExecution: GAMS return code not 0 (2), check C:\\Users\\manns\\Documents\\GAMS\\Jupyter\\gamsJupyter7.lst for more details To find out what went wrong, we can use %gams_lst, which tells us that we tried to redefine the domain list:\n%gams_lst GAMS 32.1.0 r75a5b5d Released Jul 31, 2020 WEX-WEI x86 64bit/MS Windows - 08/21/20 12:39:07 Page 9 G e n e r a l A l g e b r a i c M o d e l i n g S y s t e m C o m p i l a t i o n 23 parameter p2(i,i,j); **** $184,184 **** LINE 3 INPUT C:\\Users\\manns\\Documents\\GAMS\\Jupyter\\gamsJupyter7.gms **** 184 Domain list redefined **** 2 ERROR(S) 0 WARNING(S) COMPILATION TIME = 0.000 SECONDS 3 MB 32.1.0 r75a5b5d WEX-WEI USER: GAMS Evaluation License S200819/0001CO-GEN GAMS Software GmbH, Frechen Office DCE839 **** FILE SUMMARY Restart C:\\Users\\manns\\Documents\\GAMS\\Jupyter\\_gams_py_gcp0.g00 Input C:\\Users\\manns\\Documents\\GAMS\\Jupyter\\gamsJupyter7.gms Output C:\\Users\\manns\\Documents\\GAMS\\Jupyter\\gamsJupyter7.lst Save C:\\Users\\manns\\Documents\\GAMS\\Jupyter\\_gams_py_gcp6.g0? **** USER ERROR(S) ENCOUNTERED With a %gams_reset we can reset the GAMS database and can declare symbols with a different type and domain/dimension. All other things in the GAMS database are gone, too. So we need to redeclare the sets i and j, too.\n%gams_reset %gams set i,j; parameter p(i,j), p2(i,i,j); Pushing Data from Python to GAMS %gams_push transfers data from Python to GAMS. Supported data types for pushing data are lists, pandas.DataFrame and numpy arrays:\n# Define Python lists with data i = [\u0026#39;i1\u0026#39;, \u0026#39;i2\u0026#39;, \u0026#39;i3\u0026#39;] j = [\u0026#39;j1\u0026#39;, \u0026#39;j2\u0026#39;] p = [(\u0026#39;i1\u0026#39;, \u0026#39;j1\u0026#39;, 1.1), (\u0026#39;i1\u0026#39;, \u0026#39;j2\u0026#39;, 2.2), (\u0026#39;i2\u0026#39;, \u0026#39;j1\u0026#39;, 3.3), (\u0026#39;i2\u0026#39;,\u0026#39;j2\u0026#39;, 4.4), (\u0026#39;i3\u0026#39;,\u0026#39;j1\u0026#39;, 5.5), (\u0026#39;i3\u0026#39;, \u0026#39;j2\u0026#39;, 6.6)] %gams_push i j p As mentioned above the execution of a %%gams cell or %gams and %gams_push line magic does not produce output. 
If one wants to verify that the data ended up in GAMS, we can display the symbols in GAMS and output the corresponding part of the listing file:\n%gams display i,j,p; %gams_lst -e E x e c u t i o n ---- 15 SET i i1, i2, i3 ---- 15 SET j j1, j2 ---- 15 PARAMETER p j1 j2 i1 1.100 2.200 i2 3.300 4.400 i3 5.500 6.600 The next cell turns a Python list into a pandas.DataFrame, multiplies the value by 2 and displays the dataframe with IPython\u0026rsquo;s display. We actually display the transformed pp (via function gt_pivot2d found in the DataTransform notebook run at the top of the notebook), so the table looks nicer. Next, we send the pandas.DataFrame down to GAMS via the %gams_push command. Via the GAMS display and the output of the relevant part of the listing file we see that the %gams_push succeeded:\nimport pandas as pd # turn the Python list p into a pandas.DataFrame pp and send this down to GAMS pp = pd.DataFrame(p) # multiply the value by 2: pp[2] = 2*pp[2] # display a nicer version of the dataframe: display(gt_pivot2d(pp)) %gams parameter pp(i,j) %gams_push pp %gams display pp; %gams_lst -e j1 j2 i1 2.2 4.4 i2 6.6 8.8 i3 11.0 13.2 E x e c u t i o n ---- 25 PARAMETER pp j1 j2 i1 2.200 4.400 i2 6.600 8.800 i3 11.000 13.200 When using numpy arrays in order to push data into GAMS, the data is assumed to be dense. The corresponding sets are defined automatically from 1..n, 1..m, etc., depending on the data that is pushed.\nimport numpy as np data = [[[1.1,-1.1], [2.2,-2.2]], [[3.3,-3.3], [4.4,-4.4]], [[5.5,-5.5], [6.6,-6.6]]] p3 = np.array(data) %gams set i, j, k; parameter p3(i,j,k); %gams_push p3 %gams display i,j,k,p3; %gams_lst -e E x e c u t i o n ---- 34 SET i 1, 2, 3 ---- 34 SET j 1, 2 ---- 34 SET k 1, 2 ---- 34 PARAMETER p3 3-dim Matrix 1 2 1.1 1.100 -1.100 1.2 2.200 -2.200 2.1 3.300 -3.300 2.2 4.400 -4.400 3.1 5.500 -5.500 3.2 6.600 -6.600 Pulling Data from GAMS to Python The line magic %gams_pull transfers data from GAMS to Python in different formats. Supported formats are lists (default), pandas.DataFrame and numpy arrays. The following example pulls the sets i, j, and parameter p3 from GAMS into lists. For multi-dimensional symbols the records become Python tuples. Currently, the renaming functionality %gams_pull gamsSym=pySymbol is not yet supported.\n%gams_pull p3 i j display(i,j,p3) ['1', '2', '3'] ['1', '2'] [('1', '1', '1', 1.1), ('1', '1', '2', -1.1), ('1', '2', '1', 2.2), ('1', '2', '2', -2.2), ('2', '1', '1', 3.3), ('2', '1', '2', -3.3), ('2', '2', '1', 4.4), ('2', '2', '2', -4.4), ('3', '1', '1', 5.5), ('3', '1', '2', -5.5), ('3', '2', '1', 6.6), ('3', '2', '2', -6.6)] The switch -d will populate pandas.DataFrames instead of lists with the GAMS data. The dataframes that are pushed into or pulled from GAMS have a very specific layout. There is a record index and the GAMS domains show up as columns in the dataframe. For parameters, there is an extra value column. For variables and equations we find extra columns level, marginal, lower, upper, and scale. 
The method head() used in the IPython display provides only the first 5 records of a pandas.DataFrame:\n%gams variable x(i) / 1.L 1, 2.M 3 /; %gams_pull -d i j p3 x display(i, j, p3.head(), x) i 0 1 1 2 2 3 j 0 1 1 2 i j k value 0 1 1 1 1.1 1 1 1 2 -1.1 2 1 2 1 2.2 3 1 2 2 -2.2 4 2 1 1 3.3 i level marginal lower upper scale 0 1 1.0 0.0 -inf inf 1.0 1 2 0.0 3.0 -inf inf 1.0 The data transformation functions available from DataTransform.ipynb help to convert between this format and formats more suitable for display or other transformations in Python. The following lines give a quick overview of the transformation functionality:\n%gams parameter r(i,j); r(i,j) = uniformInt(1,10); %gams_pull -d r display(r,gt_pivot2d(r),gt_from2dim(gt_pivot2d(r),[\u0026#39;i\u0026#39;,\u0026#39;j\u0026#39;,\u0026#39;value\u0026#39;])) i j value 0 1 1 2.0 1 1 2 9.0 2 2 1 6.0 3 2 2 4.0 4 3 1 3.0 5 3 2 3.0 1 2 1 2.0 9.0 2 6.0 4.0 3 3.0 3.0 i j value 0 1 1 2.0 1 1 2 9.0 2 2 1 6.0 3 2 2 4.0 4 3 1 3.0 5 3 2 3.0 The switch -n will populate numpy arrays instead of lists with the GAMS parameters. This format works with parameters only! The GAMS data will be dropped into a dense numpy array:\n%gams parameter p4(i,j) / 1.1 1, 2.2 2 /; %gams_pull -n p4 display(p4) array([[1., 0.], [0., 2.], [0., 0.]]) Troubleshooting and Hints Paths to notebooks must not contain whitespace. A notebook file name itself (*.ipynb) can. The temporary files created in your working directory are useful for debugging, see below. The naming of the temporary files is not very sophisticated, so file naming conflicts can occur if you run two notebooks in the same directory at the same time (in different browser tabs). Create subdirectories and move the notebook into the subdirectories if you run into this problem. As soon as an error occurs while running GAMS code (the notebook exception is a GamsExceptionExecution), it can be useful to examine the listing file (*.lst) using %gams_lst. The path of the listing file can be found in the last line of the output of a failing cell. %gams Parameter pt(i,j,l) --------------------------------------------------------------------------- GamsExceptionExecution Traceback (most recent call last) \u0026lt;ipython-input-25-4c01a25a5766\u0026gt; in \u0026lt;module\u0026gt; ----\u0026gt; 1 get_ipython().run_line_magic('gams', 'Parameter pt(i,j,l)') ~\\miniconda3\\envs\\gmsjupyter\\lib\\site-packages\\IPython\\core\\interactiveshell.py in run_line_magic(self, magic_name, line, _stack_depth) 2324 kwargs['local_ns'] = sys._getframe(stack_depth).f_locals 2325 with self.builtin_trap: -\u0026gt; 2326 result = fn(*args, **kwargs) 2327 return result 2328 \u0026lt;decorator-gen-133\u0026gt; in gams(self, line, cell) ~\\miniconda3\\envs\\gmsjupyter\\lib\\site-packages\\IPython\\core\\magic.py in \u0026lt;lambda\u0026gt;(f, *a, **k) 185 # but it's overkill for just that one bit of state. 
186 def magic_deco(arg): --\u0026gt; 187 call = lambda f, *a, **k: f(*a, **k) 188 189 if callable(arg): ~\\miniconda3\\envs\\gmsjupyter\\lib\\site-packages\\gams_magic\\gams_magic.py in gams(self, line, cell) 451 opt.traceopt = 3 452 with open(jobName + \u0026quot;.log\u0026quot;, \u0026quot;w\u0026quot;) as logFile: --\u0026gt; 453 self.job.run(opt, checkpoint=self.cp, output=logFile) 454 solveSummary = self.parseTraceFile(trcFilePath) 455 ~\\miniconda3\\envs\\gmsjupyter\\lib\\site-packages\\gams\\execution.py in run(self, gams_options, checkpoint, output, create_out_db, databases) 905 raise gams.workspace.GamsExceptionExecution(\u0026quot;GAMS return code not 0 (\u0026quot; + str(exitcode) + \u0026quot;), set the debug flag of the GamsWorkspace constructor to DebugLevel.KeepFiles or higher or define a working_directory to receive a listing file with more details\u0026quot;, exitcode) 906 else: --\u0026gt; 907 raise gams.workspace.GamsExceptionExecution(\u0026quot;GAMS return code not 0 (\u0026quot; + str(exitcode) + \u0026quot;), check \u0026quot; + self._workspace._working_directory + os.path.sep + tmp_opt.output + \u0026quot; for more details\u0026quot;, exitcode) 908 self._p = None 909 GamsExceptionExecution: GAMS return code not 0 (2), check C:\\Users\\manns\\Documents\\GAMS\\Jupyter\\gamsJupyter19.lst for more details %gams_lst GAMS 32.1.0 r75a5b5d Released Jul 31, 2020 WEX-WEI x86 64bit/MS Windows - 08/21/20 12:54:55 Page 28 G e n e r a l A l g e b r a i c M o d e l i n g S y s t e m C o m p i l a t i o n 49 Parameter pt(i,j,l) **** $120 **** LINE 3 INPUT C:\\Users\\manns\\Documents\\GAMS\\Jupyter\\gamsJupyter19.gms **** 120 Unknown identifier entered as set **** 1 ERROR(S) 0 WARNING(S) COMPILATION TIME = 0.000 SECONDS 3 MB 32.1.0 r75a5b5d WEX-WEI USER: GAMS Evaluation License S200819/0001CO-GEN GAMS Software GmbH, Frechen Office DCE839 **** FILE SUMMARY Restart C:\\Users\\manns\\Documents\\GAMS\\Jupyter\\_gams_py_gcp0.g00 Input C:\\Users\\manns\\Documents\\GAMS\\Jupyter\\gamsJupyter19.gms Output C:\\Users\\manns\\Documents\\GAMS\\Jupyter\\gamsJupyter19.lst Save C:\\Users\\manns\\Documents\\GAMS\\Jupyter\\_gams_py_gcp16.g0? **** USER ERROR(S) ENCOUNTERED ","excerpt":"We now provide integration of GAMS in Jupyter Notebook. This post summarizes how to get started.","ref":"/blog/2020/08/how-to-use-gams-in-jupyter-notebooks/","title":"How to use GAMS in Jupyter Notebooks"},{"body":"","excerpt":"","ref":"/authors/bmccarl/","title":"Bruce McCarl"},{"body":"Optimization problems occasionally yield unbounded solutions. To find the cause one can modify the model and solve it to gain information. This is done through the imposition of “artificially” large bounds. Linear programming solvers discover unboundedness when they find a variable which is attractive to make larger, but find that the variable may be increased without limit. In GAMS some solvers return such information but typically only one unbounded variable will be reported, if any and there may be numerous other variables which have not been examined and could be unbounded. Unfortunately, the LST file does not generally give enough information to diagnose and fix the cause of the unboundedness and pre-solves rarely find such problems. Commonly, the solution report contains an instance where a particular item is tagged as unbounded (with the marker UNBND), but there will also be other variables marked as non-optimal (NOPT) which may or may not be unbounded. 
Finally, note that even after use of GAMSCHK ANALYSIS and correction of all identified problems, models still can be unbounded. Thus, most modelers will occasionally contend with models that are unbounded and will need to discover what is causing that condition.\nWhat causes an Unbounded Model Causes of unbounded models are not always easily identified. Solvers may report a particular variable as unbounded when in reality a different variable, or interactions between variables, is the real cause. Consider the following example:\nMax $3X_{1} - X_{2} + X_{3}$ s.t. $X_{1} - X_{2} = 0$, $X_{3} \\le 20$, $X_{1}, X_{2}, X_{3} \\ge 0$ Here the unboundedness is caused by the interrelationship between $X_{1}$ and $X_{2}$. There may be several potential explanations as to why the unboundedness is present. First, the profit contribution from $X_{1}$ and $X_{2}$ may not be volume independent and some form of diminishing revenue or increasing cost as sales increase may be omitted. Second, there may be omitted constraints on $X_{1}$ or $X_{2}$. Third, there may be multiple errors involving the above cases. A run with CPLEX resulted in the marking of $X_{2}$ as the unbounded item. This may or may not be a proper identification of the mistake causing the problem. The mistake may be on the $X_{1}$ side and we don’t see anything about that in the output. There is a general point here: usually unboundedness occurs because of the interaction of multiple variables and constraints, not just the one variable that the solver happens to mark. In a more complex model, potentially a set of 50 variables and constraints could be involved. Thus, we need to find the involved set of variables and equations and then look for the root cause of the unboundedness. How then does one go about discovering this? Again, model modifications may be necessary.\nFinding Causes of Unboundedness \u0026ndash; Basic Theory The obvious solution to an unbounded model is to bound it. Thus, we bound variables we think are potentially unbounded so they are less than or equal to some very large number like $10^{10}$. The consequent model will be bounded, but the solution may contain some quite large-valued variables. In this case we will bound the $X_{1}$ and $X_{3}$ variables since both contribute revenue to the objective function.\nMax $3X_{1} - X_{2} + X_{3}$ s.t. $X_{1} - X_{2} = 0$, $X_{3} \\le 20$, $X_{1} \\le 10^{10}$, $X_{3} \\le 10^{10}$, $X_{1}, X_{2}, X_{3} \\ge 0$ Note we are making the problem “artificially” bounded. If it is truly unbounded, then we should expect that the solution will show $X_{1}$ and $X_{2}$ taking on large values which are far larger than any anticipated “non artificial” value. However, when unboundedness is not present, the large upper bound constraints should be redundant with no effect on the solution, although in solvers using dual simplex they can slow down convergence substantially. The resultant solution is\nObjective : 20000000040.000000 LOWER LEVEL UPPER MARGINAL ---- EQU obj . . . 1.0000 ---- EQU r1 -INF . . 1.0000 ---- EQU r2 -INF 20.0000 20.0000 2.0000 LOWER LEVEL UPPER MARGINAL ---- VAR objmax -INF 2.000000E+10 +INF . ---- VAR x1 . 1.000000E+10 1.000000E+10 2.0000 ---- VAR x2 . 1.000000E+10 +INF . ---- VAR x3 . 20.0000 1.000000E+10 . This solution tells us what is wrong through the variable levels. The levels for both $X_{1}$ and $X_{2}$ are distorted while $X_{3}$ is unaffected. Thus, the modeler would receive signals that the unboundedness involves the interaction of $X_{1}$ and $X_{2}$. In turn, one would examine these variables and any binding equations relating them to fix the unboundedness. 
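For illustration, here is a minimal GAMS sketch of this bounding approach; the symbol names (objmax, obj, r1, r2) mirror the solution listing above, but the snippet is an assumed reconstruction of the toy model, not code from the newsletter:\nPositive Variables x1, x2, x3 ;\nVariable objmax ;\nEquations obj, r1, r2 ;\nobj.. objmax =e= 3*x1 - x2 + x3 ;\nr1.. x1 - x2 =e= 0 ;\nr2.. x3 =l= 20 ;\n* artificially large bounds on the potentially unbounded variables\nx1.up = 1e10 ;\nx3.up = 1e10 ;\nModel unb / all / ;\nSolve unb using LP maximizing objmax ;\nSolving this artificially bounded version and checking which variable levels hit the large bound is exactly the diagnostic step described in what follows.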
The above material indicates a way of finding the cause of unboundedness. Namely, set up the model with large bounds present, solve it, and look for distorted (large) levels to find the causal set of variables and equations. One word of caution: this will always identify some of the unboundedness causes, but in the face of a non-unique primal solution caused by degeneracy or alternative optima it may not reveal them all. Thus, multiple applications of the procedure may be needed.\nDetails on Large Bound Approach to Resolving Unboundedness The following gives the steps for finding unboundedness causes.\nStep 1: Identify the relevant variables for which artificially large bounds need to be added.\nStep 2: Add bounds to those variables.\nStep 3: Solve the model.\nStep 4: Examine the model solution. When variable and equation solution levels are found which are excessively large, identify those as the variables and equations to be examined for the cause of unboundedness.\nStep 5: Fix the model and repeat the process if needed. There are several questions inherent in the above procedure. In particular, which items need bounds? What type of bounds should be entered? How does one find an excessively large level? Each is discussed below.\nWhere Do We Add Large Bounds? The places where bounds are required can be determined in several ways. One could look at the model solution and just add bounds on the variables marked by the solver as unbounded or non-optimal. However, while this rather readily points to proper places in the example model, it does not always do so. One approach that can be used is to add bounds to all potentially unbounded variables. Linear programming models are unbounded when the solver finds the objective function can be improved by altering the value of a variable, but finds that variable is not limited by a constraint. Thus, to identify all potentially unbounded variables, one has to find all variables that contribute to the objective function, but are not directly bounded. Such cases in a maximization context involve\na) non-negative variables with positive objective coefficients and no upper bound, b) non-positive variables with negative objective coefficients and no lower bound, c) unrestricted or free variables with positive objective coefficients and no upper bound, and d) unrestricted or free variables with negative objective coefficients and no lower bound.\nThese cases identify a larger than necessary set since the restrictions imposed by the constraint set are not considered. However, more complex tests would be needed to factor in those constraints. The ADVISORY and NONOPT procedures in GAMSCHK have been written to create a list of all occurrences of these cases. Use of ADVISORY does not require the model to be solved, while NONOPT in IDENTIFY mode only works after a model solve.\nGAMS permits an alternative technique for bounding the problem. Namely, one can provide a large upper bound on the variable to be maximized or, if the problem is a minimization problem, a large negative lower bound.\nHow Do I Find Distorted Levels? The next question involves finding the distorted levels. The simple aspect of this is that one can simply review the output and find variable levels with large exponents. The more complex aspect is that in a model with thousands of variables and equations this information can be well hidden. The GAMSCHK NONOPT procedure has been written to help in this quest. 
All items with levels in an optimal solution that are larger in absolute value than 10 raised to a filter value are output as potential causes of the unboundedness.\nComparing the Bounding Techniques As mentioned above there are two bounding approaches that can be used:\nbound multiple individual variables which contribute to the desirability of the objective function (for example, those that are profitable in a maximization problem), or bound the single variable which is being optimized in the problem. There can be substantial differences in information generated by these two techniques. The most distinguishing characteristics involve simplicity of use and completeness of information.\nSimplicity of use - When one simply bounds the variable being optimized, one adds a single bound without having to think through which variables are desirable to the objective function and then add multiple bound statements on those. Completeness of information - Simplicity has its costs, as information content in the solution is generally less under the simpler bound technique. Namely, when an unbounded model is solved and there is more than one set of variables causing the unboundedness, the use of the single bound will only reveal one unbounded case at a time. NLPs, MIPs and Unboundedness When dealing with unbounded cases in mixed integer programs or nonlinear programs the approach is essentially that above. However, in the nonlinear programming case one also has to consider two additional issues: objective function form and solver numerical properties.\nIn terms of objective function form, nonlinear programming theory requires a concave objective function for the attainment of global optimality in maximization problems and a convex objective function in the case of minimization problems. When a nonlinear programming model is judged unbounded, then one should investigate the objective function convexity/concavity characteristics. When a nonlinear programming model is unbounded, one can be running into numerical problems. In particular, issues such as scaling, starting points, tolerances and other numerical issues can be the problem. The bounding technique above has been shown in the author’s work to be useful, but on occasion has been subject to numerical problems which needed to be resolved before proceeding. from Bruce McCarl\u0026rsquo;s GAMS Newsletter No 45, July 2020\nArchive of all Newsletters ","excerpt":"\u003cp\u003eOptimization problems occasionally yield unbounded solutions. To find the cause one can modify the model and solve it to gain information. This is done through the imposition of “artificially” large bounds. Linear programming solvers discover unboundedness when they find a variable which is attractive to make larger, but find that the variable may be increased without limit. In GAMS some solvers return such information but typically only one unbounded variable will be reported, if any and there may be numerous other variables which have not been examined and could be unbounded.\nUnfortunately, the LST file does not generally give enough information to diagnose and fix the cause of the unboundedness and pre-solves rarely find such problems. Commonly, the solution report contains an instance where a particular item is tagged as unbounded (with the marker UNBND), but there will also be other variables marked as non-optimal (NOPT) which may or may not be unbounded.\nFinally, note that even after use of GAMSCHK ANALYSIS and correction of all identified problems, models still can be unbounded. 
Thus, most modelers will occasionally contend with models that are unbounded and will need to discover what is causing that condition.\u003c/p\u003e","ref":"/blog/2020/07/dealing-with-models-that-are-unbounded/","title":"Dealing with Models that are Unbounded"},{"body":"","excerpt":"","ref":"/categories/mccarl-newsletter/","title":"McCarl Newsletter"},{"body":"Due to the ongoing COVID-19 outbreak, the 77th Semi-annual ETSAP workshop was held virtually from 2nd-3rd July 2020. The ETSAP Executive Committee meeting was also held virtually on 2nd July 2020.\nOur Fred Fiand had a joint presentation at the meeting on Thursday, July 2nd, at 16:45 - 17:00:\nAn open-source TIMES/MIRO app\nMr. Frederik Fiand, GAMS, Dr. Evangelos Panos, PSI, Gary Goldstein, DWG\nYou can download the presentation slides .\nThe video of the presentation is available on YouTube: https://www.youtube.com/watch?v=RbavrEXVzvI\u0026feature=youtu.be The work presented by Fred was only possible with the recently released MIRO 1.1.\nRead more about the TIMES/MIRO app and the new MIRO features below:\nTIMES is a sophisticated energy model generator implemented in GAMS. It has been published under an open source license in 2019. Recently, GAMS joined forces with experienced TIMES users to develop an open-source TIMES/MIRO App that serves as a frontend to the TIMES model.\nA key feature of the new MIRO release is the “data cube concept” that allows browsing input and output data with a powerful new MIRO pivot table. This pivot table is designed to work with large data sets (\u0026gt; 1 million records) and provides an intuitive interface to explore your data interactively.\nYou want to filter, aggregate or pivot your data? Give the pivot renderer of GAMS MIRO 1.1 a try! When developing this new renderer, we were inspired by the GDX viewer of the old GAMS IDE that most of our customers are familiar with. We wanted to incorporate many of the features, like drag and drop support as well as filtering of domains. On top of that we added many new features that we believe will make working with your data even more efficient. You can filter domains by one or multiple elements and even aggregate domains by one of several aggregation functions (sum, mean, count etc.). Furthermore, the MIRO pivot table seamlessly integrates with charting facilities to plot line charts or bar charts. For the TIMES/MIRO app we also added experimental support to store and load pivot table configurations - so called “views”. So far, this feature is not officially supported and we do not recommend using it in production. However, it gives you an idea of what’s to come in GAMS MIRO 1.2!\nIn addition to the all new pivot renderer, the TIMES/MIRO app integrates an option to offload the actual solving to the NEOS Server which makes it particularly useful for TIMES users operating on a tight budget.\nTIMES Models can become huge and require large amounts of data. The TIMES/MIRO App has proven its efficiency in this regards and makes handling large amounts of data (e.g. for the TIMES-DK_COMETS model) a piece of cake.\nTry the TIMES/MIRO app for yourself in our GAMS MIRO gallery !\n","excerpt":"TIMES is a sophisticated energy model generator implemented in GAMS. It has been published under an open source license in 2019. Recently, GAMS joined forces with experienced TIMES users to develop an open-source TIMES/MIRO App that serves as a frontend to the TIMES model. 
This was made possible by new features in GAMS MIRO 1.1","ref":"/blog/2020/07/an-open-source-times/miro-app-introducing-miro-1.1.0/","title":"An open source TIMES/MIRO app - introducing MIRO 1.1.0"},{"body":"We have developed a new interface between JuMP and GAMS, and improved the existing interface between Pyomo and GAMS.\nBackground Using GDX is currently the most efficient way to exchange data (like variable and equations solution records) with GAMS. Many (mostly academic) GAMS users also use other modeling tools, such as JuMP (Julia) or Pyomo (Python). Both of these have gained quite some traction in recent years, and interfacing them with GAMS in an efficient way is something that has been requested by some people. Our developers have therefore spent some time utilizing GDX to interface GAMS with Pyomo and JuMP as efficiently as possible.\nJuMP For a relatively new programming language, Julia has gained a lot of attention, because of its straightforward syntax that makes algebraic expressions easy, and for its execution speed that approaches that of compiled languages. JuMP is a Julia package for mathematical modeling and supports a range of different solvers. However, many users want to use solvers that are not supported by JuMP, but come with GAMS. Currently these are: AlphaECP, Antigone, Conopt, DICOPT, GLOMIQO, LGO, LINDO, LINDOGLOBAL, Localsolver, MINOS, MSNLP, ODHCPLEX, PATHNLP, SBB, SHOT, SNOPT, SoPLEX, XA and our own development BDMLP. We have developed a new interface (https://github.com/GAMS-dev/gams.jl ) that allows using GAMS as a JuMP solver:\nusing GAMS using JuMP model = Model(GAMS.Optimizer) I = 1:2 J = 1:3 a = [350, 600] b = [325, 300, 275] d = [2.5 1.7 1.8; 2.5 1.8 1.4] @variable(model, x[I,J] \u0026gt;= 0) @objective(model, Min, 0.09 * sum(d[i,j] * x[i,j] for i in I, j in J)) @constraint(model, [i in I], sum(x[i,j] for j in J) \u0026lt;= a[i]) @constraint(model, [j in J], sum(x[i,j] for i in I) \u0026gt;= b[j]) JuMP.optimize!(model) println(\u0026#34;Optimal Solution: \u0026#34;, JuMP.objective_value(model)) The new interface supports the following JuMP features:\nlinear, quadratic and nonlinear (convex and non-convex) objective and constraints continuous, binary, integer, semi-continuous and semi-integer variables SOS1 and SOS2 sets Pyomo Pyomo allows formulating, solving, and analyzing optimization models in Python. Pyomo has already had a GAMS interface . We have recently contributed to the Pyomo project to improve the performance of the interface by using GDX.\nUse GAMS as Pyomo solver:\nfrom pyomo.environ import * opt = SolverFactory(\u0026#39;gams\u0026#39;) model = ConcreteModel() model.i = RangeSet(0,1) model.j = RangeSet(0,2) a = [350, 600] b = [325, 300, 275] d = [[2.5, 1.7, 1.8], [2.5, 1.8, 1.4]] model.x = Var(model.i, model.j, domain=NonNegativeReals) model.obj = Objective(expr=0.09 * sum(d[i][j] * model.x[i,j] for i in model.i for j in model.j)) def c1_rule(model, i): return sum(model.x[i,j] for j in model.j) \u0026lt;= a[i] model.c1 = Constraint(model.i, rule=c1_rule) def c2_rule(model, j): return sum(model.x[i,j] for i in model.i) \u0026gt;= b[j] model.c2 = Constraint(model.j, rule=c2_rule) results = opt.solve(model) print(\u0026#39;Optimal Solution: \u0026#39;, value(model.obj)) As an added bonus you can use our GAMS convert tool to convert GAMS instances (.gms) to Pyomo models (.py)\nHow does this work? 
We followed the same principles in both interfaces:\nWhen solving a model with the GAMS solver (or MathOptInterface Optimizer), a GMS text file with a scalar model (similar to /latest/docs/S_CONVERT.html#INDEX_CONVERT_22_scalar_21_model ) is created and executed by a GAMS shell command. The generated model instance is passed to a GAMS solver link and the actual solver starts to work on a solution. When the solver terminates, the solution point and status is exported to a GDX file, which is read using the gdxcc Python module or direct C calls to the GAMS GDX library libgdxdclib64.so (in case of Julia). The upshot With the newly developed / improved link we have an efficient interface. In the case of Pyomo, we can demonstrate significant performance improvements.\nPerformance improvement on MINLPlib.\nComparison of the new GDX based link (gams_shell_new) with the old put-based link (gams_shell) and the direct call of gams via the python API (gams_direct). Theoretical best and worst case scenarios are included as virt. best and virt. worst, respectively.\n","excerpt":"We\u0026rsquo;ve got some new links from Pyomo and JuMP to GAMS, which allow efficient interfacing of GAMS with your Pyomo or JuMP models.","ref":"/blog/2020/06/new-and-improved-gams-links-for-pyomo-and-jump/","title":"New and improved GAMS links for Pyomo and JuMP"},{"body":"GAMS MIRO Engine is a new, highly scalable version of GAMS that is tailored to cloud environments. MIRO Engine moves the heavy computation away from local PCs to servers and takes care of distributing your optimization jobs. It scales up or down depending on the workload. With the built-in user management system, you can limit user activities to reflect your organizational hierarchy.\nWe want to introduce the beta version of GAMS MIRO Engine and demonstrate the following topics:\nBoosting MIRO by using Engine as the back-end\nPosting jobs and getting results using the Engine User Interface\nUsing Python to post jobs\nLimiting the activities of your users through the use of namespaces\nPosting MIRO Hypercube jobs\nWatch our webinar and learn about the Features of the new GAMS MIRO Engine!\nSpeaker Hamdi Burak Usul\nGAMS Software GmbH\nH. Burak Usul is a senior year Computer Engineering student from Abdullah Gul University. In addition to his CS studies, Burak also took lessons from Industrial Engineering about mathematical modeling, optimization, and artificial intelligence.\nIn 2019, he joined GAMS as a student intern and continued as an independent contractor. In this role, he contributed significantly to the development of GAMS MIRO Engine. His core competencies are distributed systems and Kubernetes, competitive programming, and mathematical modeling.\n","excerpt":"\u003cp\u003eGAMS MIRO Engine is a new, highly scalable version of GAMS that is tailored to cloud environments. MIRO Engine moves the heavy computation away from local PCs to servers and takes care of distributing your optimization jobs. It scales up or down depending on the workload. With the built-in user management system, you can limit user activities to reflect your organizational hierarchy.\u003c/p\u003e","ref":"/webinars/gams-miro-engine_beta_t/","title":"GAMS MIRO Engine - running GAMS in the cloud!"},{"body":"GAMS MIRO is the new deployment environment for your GAMS models. Turn your models into fully-fledged applications in minutes and adapt them to your specific needs. 
No programming knowledge required!\nIn this webinar we will start with a basic GAMS model and develop it into an interactive application. We will introduce you to the most important functions of GAMS MIRO, such as the generation of input data, the visualization of results and the management and comparison of scenarios.\nWe start developing our MIRO application from within GAMS Studio and show you how to deploy an application on your local computer or on a server where several people can work together.\nFor whom is MIRO designed? The possible applications of MIRO are manifold. MIRO is intended for everyone who works with optimization models or wants to make decisions based on those.\nBusiness\nMake business decisions based on optimization software without the need for extensive Operations Research or GAMS expertise. Teaching\nGive a general insight into the topic of optimization or illustrate a specific problem in detail. Research\nBenefit from MIRO\u0026rsquo;s advanced scenario and data management system that helps you focus on your research. Watch our webinar and learn how you can get the most out of your optimization!\nSpeaker Frederik Proske\nOperations Research Analyst, GAMS Software GmbH\nFrederik Proske holds a B.Sc. and M.Sc. in Engineering and Business Administration from the University of Hannover, where he also taught students concepts of Operations Research for several years.\nIn 2016 he joined GAMS as Operations Research Analyst. In this role, he is responsible for software development and project management in the area of mathematical programming. His core competencies are projects in the field of operations research - mostly scheduling problems - that provide customers with powerful optimization software.\nSince 2018, he has been the lead engineer for GAMS MIRO, a tool that allows you to automate the use of your GAMS models. He regularly gives lectures at universities and international conferences.\n","excerpt":"We are proud to announce the release candidate for GAMS MIRO 1.0.","ref":"/blog/2020/04/gams-miro-1.0-deploy-your-gams-models-in-minutes/","title":"GAMS MIRO 1.0 - Deploy your GAMS models in minutes!"},{"body":"GAMS MIRO is the new deployment environment for your GAMS models. Turn your models into fully-fledged applications in minutes and adapt them to your specific needs. No programming knowledge required!\nIn this webinar we will start with a basic GAMS model and develop it into an interactive application. We will introduce you to the most important functions of GAMS MIRO, such as the generation of input data, the visualization of results and the management and comparison of scenarios.\nWe start developing our MIRO application from within GAMS Studio and show you how to deploy an application on your local computer or on a server where several people can work together.\nFor whom is MIRO designed? The possible applications of MIRO are manifold. MIRO is intended for everyone who works with optimization models or wants to make decisions based on those.\nBusiness\nMake business decisions based on optimization software without the need for extensive Operations Research or GAMS expertise. Teaching\nGive a general insight into the topic of optimization or illustrate a specific problem in detail. Research\nBenefit from MIRO\u0026rsquo;s advanced scenario and data management system that helps you focus on your research. 
Watch our webinar and learn how you can get the most out of your optimization!\nSpeaker Frederik Proske\nOperations Research Analyst, GAMS Software GmbH\nFrederik Proske holds a B.Sc. and M.Sc. in Engineering and Business Administration from the University of Hannover, where he also taught students concepts of Operations Research for several years.\nIn 2016 he joined GAMS as Operations Research Analyst. In this role, he is responsible for software development and project management in the area of mathematical programming. His core competencies are projects in the field of operations research - mostly scheduling problems - that provide customers with powerful optimization software.\nSince 2018, he has been the lead engineer for GAMS MIRO, a tool that allows you to automate the use of your GAMS models. He regularly gives lectures at universities and international conferences.\n","excerpt":"We are proud to announce the release candidate for GAMS MIRO 1.0. GAMS MIRO is the new deployment environment for your GAMS models. Turn your models into fully-fledged applications in minutes and adapt them to your specific needs. No programming knowledge required!","ref":"/webinars/gams-miro_release_t/","title":"GAMS MIRO 1.0 - Deploy your GAMS models in minutes!"},{"body":"Sales contact info ","excerpt":"\u003ch1 id=\"sales-contact-info\"\u003eSales contact info\u003c/h1\u003e","ref":"/sales/contact/","title":"Contact Sales"},{"body":"GAMS Extended Support GAMS offers every user with a GAMS license under maintenance excellent and responsive technical support from our skilled experts. However, the scope of this basic support does not always cover every request.\nWhere does basic support end, and where does extended support start? It is not always easy to draw the line between basic and extended support. Some questions that are easy to answer if the underlying model is simple may be impossible to answer if the underlying model is complex. There may also be cases where basic support can identify issues of a model and provide some pointers to the user how the issues could be resolved. However, the actual resolution of the issue may require a deep understanding of the underlying model, such that an extended support contract would be required to handle it in depth by GAMS support. The following list of examples is by no means complete but illustrates different types of services covered by basic or extended support.\nBasic GAMS support helps to overcome technical issues with GAMS. Requests covered by basic support include, for example:\nProblems with the installation of GAMS Problems with a GAMS license file Problems with GAMS related tools Unexpected behavior of GAMS or a solver (GAMS does not do what you think it should do) Problems solving your model (feasibility or performance) Simple modeling questions Services covered by extended GAMS support can, for example, include:\nIdentification and resolution of performance bottlenecks for complex models (GAMS, solvers, data connectors) In-depth analysis and resolution of numerical issues Implementation of custom features (e.g. add new functionality to GAMS or related tools) Deployment: Creating a web application from your model with GAMS MIRO Application development Rates Hourly Hourly rates are charged per 15 minutes and paid upfront and provide the flexibility to use support hours when needed. Hourly support can be purchased in individually sized packages. 
Please contact sales@gams.com to discuss options.\nProjects In some cases, charging by project makes much more sense than charging by the hour. If you have a well-defined task (e.g. development of an application with well-defined functionality and interfaces), setting up a project has the advantage of a clearly identified budget, and you know exactly what you are getting. Project rates are based on a daily fee and are negotiated on an individual basis. Please contact sales@gams.com to discuss options.\n","excerpt":"\u003ch2 id=\"gams-extended-support\"\u003eGAMS Extended Support\u003c/h2\u003e\n\u003cp\u003eGAMS offers every user with a GAMS license under maintenance excellent and responsive technical support from our skilled experts. However, the scope of this \u003cem\u003ebasic support\u003c/em\u003e does not always cover every request.\u003c/p\u003e\n\u003ch3 id=\"where-does-basic-support-end-and-where-does-extended-support-start\"\u003eWhere does basic support end, and where does extended support start?\u003c/h3\u003e\n\u003cp\u003eIt is not always easy to draw the line between basic and extended support. Some questions that are easy to answer if the underlying model is simple may be impossible to answer if the underlying model is complex. There may also be cases where basic support can identify issues of a model and provide some pointers to the user how the issues could be resolved. However, the actual resolution of the issue may require a deep understanding of the underlying model, such that an extended support contract would be required to handle it in depth by GAMS support.\nThe following list of examples is by no means complete but illustrates different types of services covered by basic or extended support.\u003c/p\u003e","ref":"/sales/extended-support/","title":"Extended Support"},{"body":"Get maintenance Our standard license fee covers a perpetual license to use the software. However, you will not be able to upgrade to future versions of GAMS if your installation is not under maintenance and support.\nMaintenance and support (M\u0026amp;S) is free during the first year after the purchase of the software.\nAfter the first year, the optional annual fee for M\u0026amp;S is 20% of the list price for all licensed modules. Maintenance and support includes:\nfree updates adding components platform switching without additional charge multi-copy discounts on the same platform. Up to four changes per year of the name on a user-based license are included.\nIf the user does not purchase M\u0026amp;S for some period and chooses to purchase it at a later date, we will charge the prevailing annual maintenance and support fees for the periods that were not covered.\n","excerpt":"\u003ch2 id=\"get-maintenance\"\u003eGet maintenance\u003c/h2\u003e\n\u003cp\u003eOur standard license fee covers a perpetual license to use the software. 
However, you will not be able to upgrade to future versions of GAMS if your installation is not under maintenance and support.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eMaintenance and support (M\u0026amp;S) is free during the first year\u003c/strong\u003e after the purchase of the software.\u003c/p\u003e","ref":"/sales/maintenance/","title":"Get Maintenance"},{"body":"With schools and universities shutting down globally due to the ongoing pandemic (https://en.unesco.org/themes/education-emergencies/coronavirus-school-closures) , some university students using GAMS in their projects have been locked out of computer labs, and have therefore lost access to licensed GAMS installations.\nIn order to quickly help those affected, we will provide free, time-limited GAMS licenses, as requested. If you are a student and need an emergency academic license, please contact sales@gams.com and we will help you out.\nStay safe!\nYour GAMS team\n","excerpt":"With schools and universities shutting down globally due to the ongoing pandemic, some university students using GAMS in their projects have been locked out of computer labs.","ref":"/blog/2020/03/free-time-limited-licenses-for-students-or-academic-staff-during-the-current-covid-19-crisis/","title":"Free: Time-limited licenses for students or academic staff during the current Covid-19 crisis"},{"body":"Energy system optimization modeling has become a key ingredient in transitioning to decarbonized energy supply systems based mostly on renewables. Yet, these systems reveal a growing complexity, e.g., due to the decentralization of infrastructures or an increasing variety of potential technologies capable of balancing energy demand and supply. This renders a reliable application of traditional optimization modeling techniques impossible.\nIn the project UNSEEN, several partners engage in developing model-oriented and algorithmic approaches tailored explicitly for the use of High-Performance Computing (HPC) resources. The prior project BEAM-ME has confirmed the potential of this approach and pointed to further necessities. A core objective in UNSEEN is to profit from methods in AI to speed-up further and facilitate the treatment of large numbers of scenarios in order to cover a larger option space. It will also address the crucial issue of reducing uncertainties when searching for adequate setups of a future energy system in Europe.\nPartners are the Zuse Institute Berlin (ZIB) , Juelich Supercomputing Center (JSC) , GAMS Software GmbH , DLR Institute of Engineering Thermodynamics , DLR Institute of Networked Energy Systems, Institute of Mathematics at TU Berlin.\n","excerpt":"\u003cp\u003eEnergy system optimization modeling has become a key ingredient in transitioning to decarbonized energy supply systems based mostly on renewables. Yet, these systems reveal a growing complexity, e.g., due to the decentralization of infrastructures or an increasing variety of potential technologies capable of balancing energy demand and supply. This renders a reliable application of traditional optimization modeling techniques impossible.\u003c/p\u003e","ref":"/blog/2020/03/unseen-the-successor-of-beam-me/","title":"UNSEEN - The successor of BEAM-ME"},{"body":"With the continuing spread of SARS-CoV-2, the virus causing so much disruption throughout the world, we allow all of our employees to work from home as a precaution. 
We want to protect our employees, but equally important also make our small contribution to slowing the spread of the virus through our communities.\nAs a software company, we are in a privileged position, because our modern development process is largely cloud-based, and our employees can access their development tools from home.\nTherefore GAMS is fully operational, and we will continue to provide commercial and technical support to our valued customers.\nWorking from home requires the right set of tools on the one hand, but equally important is finding out for yourself how to structure your workday and set boundaries.\nFor those of you who suddenly find themselves in a position that requires organizing a distributed workforce, here is a helpful article we would like to point out:\nhttps://arstechnica.com/staff/2020/03/suddenly-working-at-home-weve-done-it-for-22-years-and-have-advice/ **\nStay safe!\n","excerpt":"GAMS staff is working from home. We continue serving our customers.","ref":"/blog/2020/03/gams-covid19-update/","title":"GAMS COVID19 update"},{"body":"Getting Started with GAMS Follow these four steps and start working with GAMS:\nDownload and install GAMS. Request your GAMS license. Install the license following the installation instructions . Check our GAMS Documentation for detailed information about GAMS and our related solvers. Note that GAMS will not work without a valid license and our free versions have some limitations in model size. We offer various ways to get a valid license with more details on restrictions and validity.\nFree Demo System Get your free demo license now! Follow our instructions on www.gams.com/download .\nCommunity We are very supportive for academic usage of our software.\nEVALUATION LICENSE In order to try GAMS, you may request a 30-day evaluation license for free.\nTo request an evaluation license please contact us under sales@gams.com with the following information:\nName of user: Email address of user: Company/University: Department: Postal address: Solvers: To find our more about the available solvers, please check the following webpage: /latest/docs/S_MAIN.html ","excerpt":"\u003ch1 id=\"getting-started-with-gams\"\u003eGetting Started with GAMS\u003c/h1\u003e\n\u003cp\u003eFollow these four steps and start working with GAMS:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003e\u003ca href=\"/download\" target=\"_blank\"\u003eDownload\u003c/a\u003e\n and install GAMS.\u003c/li\u003e\n\u003cli\u003eRequest your GAMS license.\u003c/li\u003e\n\u003cli\u003eInstall the license following the \u003ca href=\"/latest/docs/UG_MAIN.html#UG_INSTALL\" target=\"_blank\"\u003einstallation instructions\u003c/a\u003e\n.\u003c/li\u003e\n\u003cli\u003eCheck our GAMS Documentation for detailed information about GAMS and our related solvers.\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003eNote that GAMS will not work without a valid license and our free versions have some limitations in model size. We offer various ways to get a valid license with more details on restrictions and validity.\u003c/p\u003e","ref":"/products/gams/try_gams/","title":""},{"body":"","excerpt":"","ref":"/products/","title":"Products"},{"body":"A GAMS example This toy problem is presented only to illustrate how GAMS lets you model in a natural way. GAMS can handle much larger and highly complex problems. 
Only a few of the basic features of GAMS can be highlighted here.\nAlgebraic Description Here is a standard algebraic description of the problem, which is to minimize the cost of shipping goods from 2 plants to 3 markets, subject to supply and demand constraints.\nIndices: $i$ = plants\n$j$ = markets\nGiven data: $a_{i}$ = supply of commodity of plant $i$ (cases)\n$b_{j}$ = demand for commodity at market $j$ (cases)\n$d_{ij}$ = distance between plant $i$ and market $j$ (thousand miles)\n$c_{ij} = F \\times d_{ij}$ = shipping cost per unit shipment between plant $i$ and market $j$ (dollars per case per thousand miles)\nDistances (thousand miles), supply and demand (cases):\nPlants New York Chicago Topeka Supply\nSeattle 2.5 1.7 1.8 350\nSan Diego 2.5 1.8 1.4 600\nDemand 325 300 275\n$F = 90$ dollars per case per thousand miles\nDecision variables $x_{ij}$ = amount of commodity to ship from plant $i$ to market $j$ (cases), where $x_{ij} \\ge 0$ for all $i,j$.\nConstraints Observe supply limit at plant $i$: $\\sum_{j}{x_{ij}} \\le a_{i}$ for all $i$ (cases) Satisfy demand at market $j$: $\\sum_{i}{x_{ij}} \\ge b_{j}$ for all $j$ (cases) The GAMS Model The same model modeled in GAMS. The use of concise algebraic descriptions makes the model highly compact, with a logical structure. Internal documentation, such as explanation of parameters and units of measurement, makes the model easy to read.\nSets i canning plants / Seattle, San-Diego / j markets / New-York, Chicago, Topeka / ; Parameters a(i) capacity of plant i in cases / Seattle 350 San-Diego 600 / b(j) demand at market j in cases / New-York 325 Chicago 300 Topeka 275 / ; Table d(i,j) distance in thousands of miles New-York Chicago Topeka Seattle 2.5 1.7 1.8 San-Diego 2.5 1.8 1.4 ; Scalar f freight in dollars per case per thousand miles /90/ ; Parameter c(i,j) transport cost in thousands of dollars per case ; c(i,j) = f * d(i,j) / 1000 ; Variables x(i,j) shipment quantities in cases z total transportation costs in thousands of dollars ; Positive variables x ; Equations cost define objective function supply(i) observe supply limit at plant i demand(j) satisfy demand at market j ; cost .. z =e= sum((i,j), c(i,j)*x(i,j)) ; supply(i) .. sum(j, x(i,j)) =l= a(i) ; demand(j) .. sum(i, x(i,j)) =g= b(j) ; Model transport /all/ ; Solve transport using LP minimizing z ; ","excerpt":"\u003ch1 id=\"a-gams-example\"\u003eA GAMS example\u003c/h1\u003e\n\u003cp\u003eThis toy problem is presented only to illustrate how GAMS lets you model in a natural way. GAMS can handle much larger and highly complex problems. Only a few of the basic features of GAMS can be highlighted here.\u003c/p\u003e\n\u003ch2 id=\"algebraic-description\"\u003eAlgebraic Description\u003c/h2\u003e\n\u003cp\u003eHere is a standard algebraic description of the problem, which is to minimize the cost of shipping goods from 2 plants to 3 markets, subject to supply and demand constraints.\u003c/p\u003e","ref":"/products/gams/simple_example/","title":"GAMS Simple Example"},{"body":"","excerpt":"","ref":"/","title":"GAMS - Cutting Edge Modeling"},{"body":" The new MIRO logo\nWe are proud to announce the release candidate for GAMS MIRO 1.0, the very last stage before the official release. Since the last Beta (version 0.6) we have given MIRO a complete overhaul and improved it in terms of functionality, usability and stability. From this last test phase we hope to get valuable feedback for final improvements. 
Our thanks go to all the beta testers for contributing - we are excited to announce the official release soon!\nPlease find a list of updates in the GAMS MIRO release notes at /miro/release.html . If you are interested in getting access to the GAMS MIRO release candidate please contact miro@gams.com .\nYour GAMS MIRO Team\nThe traveling salesman problem solved in MIRO\n","excerpt":"We are proud to announce the release candidate for GAMS MIRO 1.0.","ref":"/blog/2020/02/announcing-miro-1.0rc/","title":"Announcing MIRO 1.0rc"},{"body":"Given the crucial role power distribution plays for the economy, and the challenges posed by renewable energy and increasing demand for electricity, ARPA-E is running the Grid Optimization Competition, which challenges participating teams to develop and test power system optimization and control algorithms on a range of different synthetic and real network models. This is another great example of how modeling and optimization impacts our lives and society! Watch the U.S. DOE competition announcement below:\n(The video will open on youtube.com) We would like to congratulate the teams winning the first challenge. As a sponsor of the event we are proud to see some of our long-term partners such as Nick Sahinidis (The Optimization Firm / Minlp) participating. Shout out to our solver partner Richard Waltz (Artelys / Knitro) and his team \u0026ldquo;NU_Columbia_artelys\u0026rdquo; for making it into the top 10, and to Cosmin Petra and his team for taking the top spot (Cosmin is also the author of PIPS-IPM, used in the BEAMME and UNSEEN projects in which we are involved).\n","excerpt":"\u003cp\u003eGiven the crucial role power distribution plays for the economy, and the challenges posed by renewable energy and increasing demand for electricity, ARPA-E is running the Grid Optimization Competition, which challenges participating teams to develop and test power system optimization and control algorithms on a range of different synthetic and real network models. This is another great example of how modeling and optimization impacts our lives and society! Watch the U.S. DOE competition announcement below:\u003c/p\u003e","ref":"/blog/2020/02/grid-optimization-competition-challenge-i-winners-announced-congratulations/","title":"Grid Optimization Competition Challenge I winners announced: congratulations!"},{"body":"","excerpt":"","ref":"/categories/gams-update/","title":"GAMS Update"},{"body":"Some of you might have noticed that coincident with releasing GAMS 30.1, we have introduced some changes to our licensing model . Below we summarize what has changed.\nDemo We have changed the way we allow users to test GAMS (\u0026ldquo;demo mode\u0026rdquo;). We now require registration on our website and generating a demo license before using the software. Some might find this process annoying, but before introducing this new scheme, we simply did have no idea how many people were using GAMS in demo mode. For a company trying to understand and serve its customer base, this was utterly unsatisfactory. With the new scheme, we collect and process some personal data, which we use to generate a named license file - after all, demo licenses are not meant to be shared between users. Four weeks later, all personal data is automatically deleted (names, email-address, IP-address, even the license file itself). 
We keep just enough data to compile statistics about how many demo licenses have been generated over time, and which countries and organisations these licenses are popular in.\nWith the introduction of the new demo licensing scheme we have partially lifted the demo limits: GAMS will now generate linear (LP, RMIP, MIP) models up to 2000 constraints and 2000 variables, and up to 1000 constraints and 1000 variables for all other model types. No other limits apply (e.g. non-zeros or discrete variables). Some solvers (e.g., the global solvers ANTIGONE, BARON, and LindoGlobal) might enforce additional restrictions.\nCommunity We have also introduced community licenses for non-commercial users. For most solvers this new license type allows even bigger models to be solved (5000 constraints and variables for linear, and 2500 constraints and variables for other model types). The idea behind community licenses is to enable users who are not (yet) ready to get a professional license to continue developing their models which have outgrown the demo limits. If you sign up for a demo license on our website with your university email address, we will send you a reminder about community licenses.\nWe are confident that the revamped licensing scheme will be useful for a lot of people. Please send any comments you have to info@gams.com .\n","excerpt":"\u003cp\u003eSome of you might have noticed that coincident with releasing GAMS 30.1, we have introduced some \u003ca href=\"/latest/docs/UG_License.html\" target=\"_blank\"\u003echanges to our licensing model\u003c/a\u003e\n. Below we summarize what has changed.\u003c/p\u003e\n\u003ch4 id=\"demo\"\u003eDemo\u003c/h4\u003e\n\u003cp\u003eWe have changed the way we allow users to test GAMS (\u0026ldquo;demo mode\u0026rdquo;). We now require registration on our website and generating a demo license before using the software. Some might find this process annoying, but before introducing this new scheme, we simply did have no idea how many people were using GAMS in demo mode. For a company trying to understand and serve its customer base, this was utterly unsatisfactory. With the new scheme, we collect and process some personal data, which we use to generate a named license file - after all, demo licenses are not meant to be shared between users. Four weeks later, all personal data is automatically deleted (names, email-address, IP-address, even the license file itself). We keep just enough data to compile statistics about how many demo licenses have been generated over time, and which countries and organisations these licenses are popular in.\u003c/p\u003e","ref":"/blog/2020/02/good-news-relaxed-demo-limits-and-a-new-community-license-scheme/","title":"Good news: relaxed demo limits, and a new community license scheme"},{"body":"","excerpt":"","ref":"/categories/sales/","title":"Sales"},{"body":"22 years ago, in 1998, we released the first native 32 bit windows version of GAMS. This allowed modelers to implement much bigger models than possible before. A year later, we introduced our 32-bit IDE (for those of you young enough to remember, Windows NT 4.0 and Windows 98 were state of the art back then). A lot of time has passed, and 64 bit is everywhere, even on your mobile phone.\nGAMS IDE, running on Windows XP (around 2003)\nIn 2018 we showed the first beta release of GAMS Studio. Studio is a more modern approach, compared to the known IDE. It is based on C++ and Qt and therefore runs on Windows, Mac OS, and Linux. 
With Studio we introduced block editing, smart typing, a project explorer and much more . The architecture of Studio will allow us to implement features more easily and makes the code more maintainable compared to the IDE. Studio is a 64-bit application.\nGAMS Studio, showing the Project Explorer and multiple tabs\nIn light of these developments, and after carefully reviewing the facts, we have decided to phase out 32-bit support on Windows with the release of GAMS 30 for the following reasons:\nFirstly, on Windows, GAMS Studio with all its new features can only be compiled with current versions of Microsoft Visual Studio. This presents no problem on 64-bit Windows, but it conflicts with the requirement imposed by the diminishing 32-bit support by solver vendors that we use older Visual Studio versions for the 32-bit build. Secondly, most solvers we bundle with GAMS ship as pre-compiled DLLs. The latest versions of these solvers are often only available as 64-bit binaries, (examples are CPLEX, BARON, or GUROBI). To include 32-bit libraries of those solvers, we would have to use older versions of the solver DLLs. This leads to a 32-bit system that is progressively less competitive than its 64-bit sister. Thirdly, maintaining a 32-bit system requires us to continue using some outdated APIs and coding practices instead of more modern improvements. Finally, we believe that there is little to no demand for a 32-bit system, so that any effort put into maintaining this platform could be better applied elsewhere.\nIn short, we have gradually been pushed towards a 32-bit version that lacks GAMS Studio and some of our most important solvers. We find that a \u0026ldquo;solver-poor\u0026rdquo; GAMS distribution would not be very useful for anyone, and we decided to drop 32-bit GAMS entirely.\nWe carefully considered the implications for our user base, and found that they are minor: RAM is cheap these days, and the extra memory requirements of 64-bit applications do not matter much in practice.\nSo, after 22 years of 32-bit GAMS, it is time to say goodbye. With GAMS release 30, Win32 will be a peripheral platform, available on demand only. With the following release 31, Win32 GAMS will be discontinued, with the exception of some of the utilities, e.g. gdxxrw and GDX DLL, which will still be shipped as 32-bit executables.\n","excerpt":"\u003cp\u003e22 years ago, in 1998, we released the first native 32 bit windows version of GAMS. This allowed modelers to implement much bigger models than possible before. A year later, we introduced our 32-bit IDE (for those of you young enough to remember, Windows NT 4.0 and Windows 98 were state of the art back then). A lot of time has passed, and 64 bit is everywhere, even on your mobile phone.\u003c/p\u003e","ref":"/blog/2020/01/phasing-out-of-32-bit-support-with-gams-30/","title":"Phasing out of 32-bit support with GAMS 30"},{"body":"GAMS offices are geographically distributed between different cities in the US (GAMS Development Corp.) and Germany (GAMS Software GmbH). The teams try to meet regularly - this time we enjoyed our Christmas party at Henk Mulder\u0026rsquo;s cooking school, where we could try our hands at preparing our own Christmas dinner. Not everyone was there, but fortunately it still worked a treat - have a look at the pictures.\nHappy Christmas from the GAMS teams!\n\u0026times; Previous Next Close ","excerpt":"\u003cp\u003eGAMS offices are geographically distributed between different cities in the US (GAMS Development Corp.) 
and Germany (GAMS Software GmbH). The teams try to meet regularly - this time we enjoyed our Christmas party at Henk Mulder\u0026rsquo;s cooking school, where we could try our hands at preparing our own Christmas dinner. Not everyone was there, but fortunately it still worked a treat - have a look at the pictures.\u003c/p\u003e","ref":"/blog/2019/12/the-2019-gams-software-christmas-feast/","title":"The 2019 GAMS Software Christmas feast"},{"body":"This year, Lleny, Franz, Lutz, Michael, and Steve traveled to Seattle for the INFORMS Annual Meeting 2019. Lutz and Steve held a Pre-Conference Workshop on Saturday, talking about GAMS and our interactive web application framework GAMS MIRO, which was well received. On Tuesday, Michael gave a Conference Talk about Solving Energy System Models with GAMS on HPC platforms.\nFor the days of the conference, there was a steady stream of attendees showing up at the GAMS booth in the exhibit hall to talk about their needs and how we can partner with them in moving these forward. The Conference was well-organized with plenty of interesting talks and workshops to attend. We were happy to reconnect with a lot of familiar faces, and also made many new interesting acquaintances. Thanks for the fun and interesting conversations we had with you at our booth or in between talks!\nThe slides from our talks are online, in case you would like to learn more. just click on the links below.\nName: Size / byte: GAMS_HPC_INFORMS2019.pdf 2655554 MIRO_talk-compressed.pdf 1857468 MIRO_workshop-compressed.pdf 1514779 ","excerpt":"\u003cp\u003eThis year, Lleny, Franz, Lutz, Michael, and Steve traveled to Seattle for the INFORMS Annual Meeting 2019. Lutz and Steve held a Pre-Conference Workshop on Saturday, talking about GAMS and our interactive web application framework GAMS MIRO, which was well received. On Tuesday, Michael gave a Conference Talk about Solving Energy System Models with GAMS on HPC platforms.\u003c/p\u003e","ref":"/blog/2019/10/informs-annual-meeting-in-seattle/","title":"INFORMS Annual Meeting in Seattle"},{"body":" GAMS was attending the OR2019 in Dresden. But I don\u0026rsquo;t want to report about the conference itself at this point. We gave a workshop, held talks, and had a booth - the usual. However, in Dresden, I was asked not for the first time what GAMS actually does at such a conference.\nSo, what do we actually want there?\nWell\u0026hellip; in the very spontaneous situation of the question I felt a bit surprised and just stammered together some answers like “GAMS users are 50% academics”, “other software companies are here as well” or “the good food (haha).” That\u0026rsquo;s correct, but it wasn\u0026rsquo;t compelling… After that, I thought more about it. And I realized that there are several good reasons for a company like GAMS to go to such conferences.\nThe most obvious first: GAMS is a sponsor of the event. This is of course done primarily for marketing reasons. As a sponsor, you can have a booth at the conference and show your presence. But in fact, we have the booth not only for marketing reasons. A booth is also an excellent meeting place. Not only people who are looking for more information about GAMS come here. People often come to us and tell us about their experiences with GAMS, positive as well as negative. Usually, requests for possible new features are expressed, which sometimes leads to ideas for new functionalities in our tools. Or questions are asked, which we can then answer with, so far for them unknown or upcoming features. 
Of course, we also inform about the latest developments. Next to GAMS, several other software companies report about what is going on in their R\u0026amp;D.\nIn any case, we gain valuable insights how our tool is used, where difficulties in comprehension may exist, and what else we could work on. A quote from Bill Gates fits here: “We all need people who will give us feedback. That’s how we improve.” Conferences are also a perfect place for networking. Make new contacts, maintain existing ones. Sometimes there are opportunities for potential internships (and more).\nMany of our employees are very much involved in the field of operations research. Of course, we are interested in seeing what is currently being worked on in the academic field and always want to be up to date in terms of current research and challenges. Conferences like the OR are an excellent choice for this. And last but not least: The OR community is very nice and familiar, you know and appreciate each other. Meeting again at conferences is fun and strengthens the connections.\nI hope I can remember this text the next time I\u0026rsquo;m asked what GAMS is doing at the conference.\nName: Size / byte: GAMS_HPC_OR2019.pdf 1856348 MIRO_talk_dresden.pdf 2652251 ","excerpt":"\u003cfigure\u003e\u003cimg src=\"/blog/2019/09/why-was-gams-at-the-or2019-in-dresden/pic1.jpg\" height=\"350\"\u003e\n\u003c/figure\u003e\n\n\u003cp\u003eGAMS was attending the OR2019 in Dresden. But I don\u0026rsquo;t want to report about the conference itself at this point. We gave a workshop, held talks, and had a booth - the usual.\nHowever, in Dresden, I was asked not for the first time what GAMS actually does at such a conference.\u003c/p\u003e","ref":"/blog/2019/09/why-was-gams-at-the-or2019-in-dresden/","title":"Why was GAMS at the OR2019 in Dresden?"},{"body":" On May 31- June 2, 2019, our reseller Beijing Uone Info \u0026amp; Tech Co. , Ltd hosted a three-day course on CGE Modeling with GAMS at the South China Agricultural University in Guangzhou, China.\nMore than 20 participants attended this successful training course. “We enjoyed the opportunity to connect with some GAMS users from different universities there to hear about their projects and to answer some of their questions along the way,” said Crystal from Beijing Uone Info \u0026amp; Tech Co., Ltd.\nThe course was led by Professor Lou Feng who is a Senior Research Fellow and Director at the Department of Economic Systems, Institute of Quantitative \u0026amp; Technical Economics at the Chinese Academy of Social Sciences. He is specialized in CGE modeling with a focus on policy analysis as well as the theory and application of DSGE models.\nProfessor Lou Feng explained the basics of GAMS through simple examples. Then he moved on to more advanced, frequently used functions and features of GAMS. He also addressed difficulties and problems that the participants are facing in their work, analyzing real-world examples and showing the participants how to deal with complicated issues.\nThe course was aimed at improving the technical competence of relevant scientific and technological experts. The participants actively raised questions, exchanged ideas and shared their experiences on a broad range of topics.\nAs a reseller of GAMS in China, Beijing Uone Info\u0026amp;Tech Co., Ltd is planning to offer Chinese GAMS users with more training and learning opportunities in the future. 
They will host a series of offline training courses and meetings on GAMS soon.\n","excerpt":"\u003cfigure\u003e\u003cimg src=\"/blog/2019/05/beijing-uone-gams-workshop/pic1.jpg\" height=\"400\"\u003e\n\u003c/figure\u003e\n\n\u003cp\u003eOn May 31- June 2, 2019, our reseller \u003ca href=\"http://www.uone-tech.cn\" target=\"_blank\"\u003eBeijing Uone Info \u0026amp; Tech Co.\u003c/a\u003e\n, Ltd hosted a three-day course on CGE Modeling with GAMS at the South China Agricultural University in Guangzhou, China.\u003c/p\u003e\n\u003cp\u003eMore than 20 participants attended this successful training course. “We enjoyed the opportunity to connect with some GAMS users from different universities there to hear about their projects and to answer some of their questions along the way,” said Crystal from Beijing Uone Info \u0026amp; Tech Co., Ltd.\u003c/p\u003e","ref":"/blog/2019/05/beijing-uone-gams-workshop/","title":"Beijing Uone - GAMS Workshop"},{"body":"","excerpt":"","ref":"/categories/courses/","title":"Courses"},{"body":" Three very exciting and well-organized workshops on advanced GAMS features as well as CGE modeling took place this May. For this adventure, Michael, Steve and Freddy from GAMS teamed up with Agapi Somwaru who recently retired from USDA and Xinshen Diao from IFPRI, two experts in CGE modeling. Yumei Zhang from the Chinese Academy of Agricultural Sciences (CAAS) was our main local contact for the entire trip. After arriving in Beijing and having the opportunity to visit some incredible places like the Summer Palace and the Great Wall, we started our first two-day workshop in Beijing. CGE modeling in GAMS turned out to be a hot topic! Yumei had to reschedule our room twice, since more than a hundred people had registered for the course.\nDuring the two days we covered a wide range of topics, from dynamic sets to the PEATSim model, a dynamic partial-equilibrium model developed by the U.S. Department of Agriculture and presented by Agapi. In addition, some of the latest GAMS features like the Embedded Code facility, Jupyter notebooks and GAMS MIRO were introduced. A day of traveling with the lightning-fast Chinese bullet train to Hangzhou followed, where we held a similar workshop at Zhejiang University. The audience in both workshops was highly motivated and we had some very interesting discussions. After visiting the gorgeous West Lake in Hangzhou as well as the Forbidden City back in Beijing, we were invited to the Beijing Institute of Technology. There we met a group of very productive students and professors from the Center for Energy and Environmental Policy Research. After presenting some custom material on CGE modeling and nonlinear stochastic programming, we worked together to solve some problems they faced in their current work. Additionally, we had the opportunity to visit their impressive research center, where they bundle the expertise of researchers and practitioners to answer strategic questions on energy consumption and climate change. All in all, it was an unforgettable trip with many interesting encounters that benefited both sides. We hope this wasn’t our last journey to China.\n","excerpt":"\u003cfigure\u003e\u003cimg src=\"/blog/2019/05/gams-cge-modeling-and-its-applications-in-china/pic1.jpg\" height=\"400\"\u003e\n\u003c/figure\u003e\n\n\u003cp\u003eThree very exciting and well-organized workshops on advanced GAMS features as well as CGE modeling took place this May. 
For this adventure, Michael, Steve and Freddy from GAMS teamed up with Agapi Somwaru who recently retired from USDA and Xinshen Diao from IFPRI, two experts in CGE modeling. Yumei Zhang from the Chinese Academy of Agricultural Sciences (CAAS) was our main local contact for the entire trip. After arriving in Beijing and having the opportunity to visit some incredible places like the Summer Palace and the Great Wall, we started our first two-day workshop in Beijing. CGE modeling in GAMS turned out to be a hot topic! Yumei had to reschedule our room twice, since more than a hundred people had registered for the course.\u003c/p\u003e","ref":"/blog/2019/05/gams-cge-modeling-and-its-applications-in-china/","title":"GAMS, CGE modeling and its applications in China"},{"body":" In April this year, Franz, Lleny, Freddy, and Marius traveled to Austin for the INFORMS Business Analytics Conference. Franz and Freddy held a Pre-Conference Workshop on Sunday morning, talking about GAMS and our interactive web application framework GAMS MIRO, which was well received. Then in the evening, the conference started.\nFor the next two days, there was a steady stream of attendees showing up at the GAMS booth in the exhibit hall to talk about their business needs and how we can partner with them in moving these forward. On Monday evening Franz and Freddy attended the Edelman-Gala-Dinner and enjoyed the Presentation of the Franz Edelman Awards. All in all was it a very well-organized conference with plenty of interesting talks and workshops to attend.\nThe slides from our talk are online, in case you would like to learn more. You can find the presentation to the talk below.\nName: Size / byte: Workshop_Austin.pdf 2386710 ","excerpt":"\u003cfigure\u003e\u003cimg src=\"/blog/2019/04/informs-business-analytics-conference-in-austin/pic1.png\" height=\"400\"\u003e\n\u003c/figure\u003e\n\n\u003cp\u003eIn April this year, Franz, Lleny, Freddy, and Marius traveled to Austin for the INFORMS Business Analytics Conference. Franz and Freddy held a Pre-Conference Workshop on Sunday morning, talking about GAMS and our interactive web application framework GAMS MIRO, which was well received. Then in the evening, the conference started.\u003c/p\u003e","ref":"/blog/2019/04/informs-business-analytics-conference-in-austin/","title":"INFORMS Business Analytics Conference in Austin"},{"body":"GAMS staff Michael Bussieck and Robin Schuchmann attended the meeting Mathematical Optimization for the Hidden Champions of the GOR working group Real World Optimization at the Chamber of Industry and Commerce (IHK) in Pforzheim, Germany. The focus of the 2-day event was on optimization-based analysis and decision making, especially from the perspective of small businesses in the manufacturing industry.\nA series of presentations from participants including Michael\u0026rsquo;s and Robin\u0026rsquo;s presentation Algebraic Modeling Tools for Small Businesses - Challenges and Outlook were complemented by a panel discussion. Two very different paths to face the challenge of promoting mathematical optimization in small businesses were discussed: generic software solutions for typical OR problems in the manufacturing business versus customer- and problem-specific solutions. On the one hand, dealing with specialized software is time-consuming and therefore incompatible with rapid business growth. Only by offering standard software, which only has to be adapted slightly for customers, larger steps are possible. 
On the other hand, small businesses success depends on a high degree of specialization. This uniqueness is the key to the company\u0026rsquo;s success and must therefore be protected. The use of optimization software should not lead to the company adapting to the software, but the other way around. There was general agreement that the methodology mathematical optimization is already difficult to promote in the market. Efforts must be made to reach those who have not previously dealt with this topic.\nAnother aspect discussed extensively at the conference was the increasingly important field of data visualization. On part of the GAMS presentation was about our interactive web application framework GAMS MIRO, which was well received.\nThe social program of the well-organized conference included a conference dinner and tour of the G.Rau company which specializes in the manufacturing of strips, tubes and wires made of precious metals, special alloys and composite materials. As the conference took place in Pforzheim, also known as the Golden City due to its jewelry and watch-making industry, a fitting mode of transportation was provided: the Goldliner, a bus completely plated with gold.\nAll in all, a very productive and informative event. We are looking forward to the next GOR Real World Optimization working group meeting!\nName: Size / byte: gor_hidden_champions.pdf 3100789 ","excerpt":"\u003cp\u003eGAMS staff Michael Bussieck and Robin Schuchmann attended the meeting Mathematical Optimization for the Hidden Champions of the GOR working group Real World Optimization at the Chamber of Industry and Commerce (IHK) in Pforzheim, Germany. The focus of the 2-day event was on optimization-based analysis and decision making, especially from the perspective of small businesses in the manufacturing industry.\u003c/p\u003e","ref":"/blog/2019/04/gams-at-the-mathematical-optimization-for-hidden-champions-102.-meeting-2019-in-pforzheim/","title":"GAMS at the Mathematical optimization for hidden champions (102. Meeting) 2019 in Pforzheim"},{"body":" As usual, CAPD was a well-organized 2-day conference. It is small enough that there are no parallel sessions. So the group was together from breakfast through dinner, which allows for and fosters lots of interaction on professional and social levels. The venue - CMU\u0026rsquo;s campus and the conference dinner at Monterey Bay - was very pleasant and conducive to good discussion. We enjoyed the opportunity to connect with some GAMS users from industry there to hear about their projects and to answer some of their questions along the way.\nThere were a variety of talks from professors, students as well as industry representatives including some very interesting presentations from the field of system process engineering and artificial intelligence. Our own talk was near the end of the conference, and everybody is usually rushing to the airport when the last session ends. But in spite of being at the end of a fully-packed schedule, Freddy managed to capture the audience’s attention and we could connect our Jupyter stuff and MIRO with some interest in better deployment tools already expressed during the conference.\nThe slides from our talk are online, in case you would like to learn more. 
You can find the presentation to the talk below.\nSteven Dirkse and Frederik Proske\nName: Size / byte: 2019_CAPD_GAMS-MIRO_FP.pdf 1756259 ","excerpt":"\u003cp\u003e\u003cfigure\u003e\u003cimg src=\"/blog/2019/03/capd-annual-review-meeting-carnegie-mellon/pictitel.png\" height=\"400\"\u003e\n\u003c/figure\u003e\n\n\u003cfigure\u003e\u003cimg src=\"/blog/2019/03/capd-annual-review-meeting-carnegie-mellon/pic1.jpg\" height=\"300\"\u003e\n\u003c/figure\u003e\n\u003c/p\u003e\n\u003cp\u003eAs usual, CAPD was a well-organized 2-day conference. It is small enough that there are no parallel sessions. So the group was together from breakfast through dinner, which allows for and fosters lots of interaction on professional and social levels. The venue - CMU\u0026rsquo;s campus and the conference dinner at Monterey Bay - was very pleasant and conducive to good discussion. We enjoyed the opportunity to connect with some GAMS users from industry there to hear about their projects and to answer some of their questions along the way.\u003c/p\u003e","ref":"/blog/2019/03/capd-annual-review-meeting-carnegie-mellon/","title":"CAPD Annual Review Meeting (Carnegie Mellon)"},{"body":" This February, I spent three days at IEWT (Internationale Energiewirtschaftstagung) in beautiful Vienna, Austria. Even though the \u0026ldquo;I\u0026rdquo; stands, supposedly, for \u0026ldquo;International\u0026rdquo;, virtually all of the attendants were either Austrian or German.The event was very successfully hosted by TU Vienna - which appears to be the alma mater of most Austrian participants.\nUnsurprisingly, most of the talks tackled problems specific to the energy sector. Only a rather small 2-hour segment was dedicated to modeling. My own contribution was a 15-minute talk about BEAM-ME that was attended by 20-30 people.\nBEAM-ME is a project funded by the German Federal Ministry for Economic Affairs and Energy (BMWi) that addresses the need for efficient solution strategies for Energy System Models (ESMs). The investigated solution approaches involve changes to the formulation of the ESMs, parameter tuning to improve the performance of general-purpose LP solvers but also the development of new solution algorithms that focus on parallelization to utilize the power of High-Performance Computers (HPC). Many ESMs have a block structure that is not well exploited by general purpose LP solvers but that makes the model in general suitable for structure exploiting parallel algorithms. BEAM-ME is an ongoing research project but first results already show significant speedup potentials for particular solution approaches.\nYou can find the presentation to the talk here.\nName: Size / byte: 2019_IEWT.pdf 881292 ","excerpt":"\u003cfigure\u003e\u003cimg src=\"/blog/2019/02/iewt-conference-in-vienna/pic1.jpg\" height=\"400\"\u003e\n\u003c/figure\u003e\n\n\u003cp\u003eThis February, I spent three days at \u003ca href=\"https://iewt2019.eeg.tuwien.ac.at/\" target=\"_blank\"\u003eIEWT (Internationale Energiewirtschaftstagung)\u003c/a\u003e\n in beautiful Vienna, Austria. 
Even though the \u0026ldquo;I\u0026rdquo; stands, supposedly, for \u0026ldquo;International\u0026rdquo;, virtually all of the attendants were either Austrian or German.The event was very successfully hosted by TU Vienna - which appears to be the alma mater of most Austrian participants.\u003c/p\u003e","ref":"/blog/2019/02/iewt-conference-in-vienna/","title":"IEWT Conference in Vienna"},{"body":" Area: energy\nProblem class: MIP\nOptimizing Power Trading Auctions A GAMS Application in the Energy Sector The German “Energiewende” (transition of the energy system) aims for a power system on the basis of renewable energy sources (RES). In 2015, about 30% of the power generation was provided by RES. The RES target for 2025 is a 40-45% share and 55 to 60% are expected in 2035. Large parts of these shares are based on fluctuating wind and solar power. For a successful “Energiewende”, a highly efficient matching of generation and demand will be required.\nThe Auctioning Markets One important element to achieve that is the organization of ancillary services. Those are fast reacting generators used by the transmission system operators (TSOs) to smooth short-term imbalances. These balancing reserves are procured by the TSOs via auctioning markets. Potential suppliers place related offers and the TSOs select the most cost-efficient combination of bids to cover the requirements.\nSupported by several TSOs, the 50Hertz Transmission GmbH, TSO of Eastern Germany and part of the Elia group, manages an internet platform to collect bids and execute described auctions of balancing reserves. In the case of primary reserve, which is the fastest reacting reserve, suppliers for the reserve demand of Austria, Belgium, Germany, France, the Netherlands and Switzerland (status of January 2017) are selected by the platform. The outcome of the auction has to respect all criteria of an objective and legally binding selection process. In particular, the cost effectiveness of the selection has to be guaranteed. Additionally, the auctions are integrated into a dense process of power trading (exchange or OTC, auctions or continuous trading) and grid operation addressing many different players and data communication.\nAt the Transmission Control Centre, employees monitor and control the 50Hertz transmission grid around the clock The Optimization Solution As a technical solution for a timely and robust calculation of the auction results, the optimization software GAMS was chosen. The auctions translate to a mixed-integer optimization problem in which each country has a certain reserve demand. Exchanges of reserves are possible with respect to export limits and core shares (maximal total import for a country). Depending on the specific product, different kinds of bids are allowed in the auctions: divisible and indivisible bids and conditional or unconditional bids. Divisible bids can be selected with less than their bid volume while conditional bids are bundles of bids out of which only one single bid can be selected. A combination of bids is selected so that the demand in all countries is covered with minimal costs. A particularity is the event of equally optimal solutions. According to the auction conditions, the bids that were chronologically placed first have priority – when not altering the total costs. This is solved by two sequential optimizations. A cost minimization is executed, followed by a minimization of the bid time stamps with an additional condition restricting any increase in costs. 
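The two-stage selection just described can be sketched in GAMS roughly as follows. This is only an illustrative fragment: the names and the tiny data set (bids i, prices cost(i), time-stamp ranks tstamp(i), total demand dem) are made up, and it is simplified to an LP without the indivisible and conditional bid types discussed above, so it should not be read as the actual 50Hertz implementation.\nSet i 'bids' /b1*b3/;\nParameter cost(i) 'bid price' /b1 10, b2 12, b3 12/;\nParameter tstamp(i) 'time stamp rank, smaller means placed earlier' /b1 3, b2 1, b3 2/;\nScalar dem 'total demand' /5/;\nPositive Variable x(i) 'accepted volume'; x.up(i) = 3;\nVariable totcost, tottime;\nEquation defcost, deftime, cover;\ndefcost.. totcost =e= sum(i, cost(i)*x(i));\ndeftime.. tottime =e= sum(i, tstamp(i)*x(i));\ncover.. sum(i, x(i)) =g= dem;\nModel stage1 /defcost, cover/;\nModel stage2 /defcost, deftime, cover/;\n* stage 1: select the cheapest combination of bids\nSolve stage1 using lp minimizing totcost;\n* stage 2: among equally cheap selections, prefer bids placed earlier\ntotcost.up = totcost.l;\nSolve stage2 using lp minimizing tottime;\nFreezing the stage-one cost via totcost.up before the second solve is what keeps the time-stamp tie-breaking from increasing total costs.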
CPLEX is the selected solver applying an extended solution pool so that no possible combination is neglected.\nThe reserve market is currently characterized by many ongoing changes of market design and regional scope. Thanks to the implemented solution, the incorporation of these changes was easy and straightforward. In total, the organization of the reserve market becomes more and more efficient reducing the electricity costs for all consumers despite the challenges of the “Energiewende”.\nThe optimization solution is implemented as a GAMS model About 50Hertz 50Hertz is responsible for the operation, maintenance, planning and expansion of the 380/220 kilovolt transmission network in the north and east of Germany. The network covers an area of ​​109,360 km² and has a length of about 10,000 km, which corresponds approximately to the route from Berlin to Rio de Janeiro. It secures the grid integration of about 40% of the total installed wind power in Germany. 50Hertz provides a safe power supply to 18 million people - 24 hours a day, 7 days a week, 365 days a year.\nhttps://www.50hertz.com ","excerpt":"Supported by several TSOs, the 50Hertz Transmission GmbH, TSO of Eastern Germany and part of the Elia group, manages an internet platform to collect bids and execute described auctions of balancing reserves. A GAMS model helps them match supply with demand in an efficient way.","ref":"/stories/50hertz/","title":"Optimizing Power Trading Auctions at 50 Hertz"},{"body":" Area: pharmaceutical\nProblem class: MIP\nCyBio scheduler - scheduling software for high throughput screening A GAMS application in the pharmaceutical industry High Throughput Screening is a scientific experimentation method widely used in pharmaceutical research especially in the field of drug discovery. Because the large number of promising compounds for new drugs cannot be analyzed by manual labor, the screening process is automated using robotics. Robotic screening systems are used to handle microplates containing chemical compounds. These robotic screening systems perform a sequence of tasks and experiments on a given set of microplates – called the assay protocol – and generate experimentation data.\nCyBio, merged into Analytik Jena AG in 2009, and the Max Planck Institute Magdeburg developed optimization methods involving GAMS to increase the throughput of robotic screening systems.\n\u0026ldquo;The GAMS driven assay optimization has significantly boosted the production rate of high throughput screening systems and improved the quality of the experimentation data.\u0026rdquo;\nA compound microplate used in high throughput screening The problem Before assay optimization involving GAMS was an integral part of CyBio Scheduler only experts were able to modify the timing to improve the throughput. This was feasible only for relatively small assays and was a task that involved hours of focused work. With growing complexity of assay protocols, this task is nowadays far beyond what human labor can handle.\nAnother common issue before an algebraic model described the screening systems with a focus on reducing idle time, were inefficiencies in the utilization of critical resources. Idle time of the compounds also leads to systematic errors in the experimentation data due to sedimentation, decay, or temperature drift.\nRobotic arm for microplate handling The setup A central part of the CyBio Scheduler is an algebraic model written in GAMS. 
It describes the screening systems in a way that allows the minimization of idle time for any component ensuring the most efficient utilization ratio for critical resources. Several resources may be used for different tasks, so it is possible for the screening system to simultaneously process a number of microplates using else idle devices. Short and direct microplate transfers facilitate an efficient resource usage and thereby a high production rate. The model avoids conflicts when coordinating resource access and ensures that the resulting schedule is deadlock free.\nA number of constraints are inherent to the system, such as limited temporary storage or resources which cannot be used simultaneously and to which access must be coordinated. Some constraints are assay specific. Typically the user defines the target time for incubation periods including an upper and lower bound, or the maximum time between specific events. So, for example, the time between compound addition and measurement may be limited. Assay definition and these constraints build a system of disjunctive inequalities.\nDue to the strict timing, micro plates follow an identical itinerary for each cycle. Fast and uniform microplate processing with the CyBio Scheduler reduces systematic errors introduced by sedimentation, decay, or temperature drift, which are difficult to quantify. An increased throughput therefore not only reduces the investment per experiment but also improves the data quality.\nA screenshot of the CyBio system User friendliness With the GAMS model running in the background, the CyBio Scheduler focuses on providing a simple and convenient user experience. It manages to hide the complexity of mapping an assay protocol to the current system design and finding the global optimal solution for the objective to minimize the cycle time.\nThe user is relieved from system layout decisions and can focus on the experiment. The CyBio Scheduler automatically inserts microplate transports where they are required, resolves conflicts in resource allocation and allows for incubations to be effortlessly specified. Depending on the number of independent tasks, involved components and constraints the resulting model may become considerably complex. However, the optimal solution is typically calculated fast enough to allow the user to verify if relaxing some constraints may lead to a better result.\nAbout Analytik Jena Analytik Jena is a provider of instruments and products in the areas of analytical measuring technology and life science. Its portfolio includes the most modern analytical technology and complete systems for bioanalytical applications in the life science area. Comprehensive laboratory software management and information systems (LIMS), service offerings, as well as device-specific consumables and disposables, such as reagents or plastic articles, complete the Group’s extensive range of products.\nhttps://www.analytik-jena.de ","excerpt":"High Throughput Screening is a scientific experimentation method widely used in pharmaceutical research especially in the field of drug discovery. Because the large number of promising compounds for new drugs cannot be analyzed by manual labor, the screening process is automated using robotics. 
A GAMS model helps increase experimental throughput.","ref":"/stories/analytikjena/","title":"Analytik Jena"},{"body":" The announcement that the US Military Academy’s football team, the Army West Point Black Knights, had received a bid to the Armed Forces Bowl on 23 Dec was good news for the team and its fans and supporters everywhere, but it posed a problem for those tasked with scheduling the Term End Exams at the Academy: It meant the rescheduling of 566 exams of 141 affected cadets.\nThe published exam schedule conflicted \u0026ndash; substantially \u0026ndash; with the team’s travel to Texas for the bowl game. Fortunately, the Academy is using a system that integrates the registrar’s databases, a scheduling engine based on several GAMS models, and other tools to create a complete exam schedule for both the courses and the students. This schedule assigns each exam to an exam period and each student to all of her exams. The system balances the competing goals and constraints involved in preparing a schedule that is fair and balanced for the students and faculty, limits the number of make-up exams required, and fits within the constraints of the available rooms and the 11 periods allocated for the exams.\nFaced with the unexpected requirement that some students would need to complete exams early (specifically, by the end of the third exam period) to allow for travel to the bowl game, the exam scheduling group was able to quickly reschedule affected students into the available exam slots of the first 3 periods of the published exam schedule and some additional exams placed in extra periods added prior to the published first period. The automated nature of the system allowed them to quickly experiment with different scenarios (Do just the players leave early? What about the band and the Rabble Rousers? How many additional exam periods need to be introduced?) and see what an exam schedule would look like for each alternative being considered. Ultimately, 3 early exam periods were added and at most one additional make-up exam per course was introduced to reschedule 566 individual exams of 141 affected cadets to allow the football team to complete their exams prior to leaving for the bowl game in Fort Worth, TX.\nGO ARMY! BEAT NAVY!\n","excerpt":"The announcement that the US Military Academy’s football team, the Army West Point Black Knights, had received a bid to the Armed Forces Bowl on 23 Dec was good news for the team and its fans and supporters everywhere, but it posed a problem for those tasked with scheduling the Term End Exams at the Academy.","ref":"/blog/2017/11/army-goes-bowling/","title":"Army Goes Bowling"},{"body":"Sometimes models do odd things like reporting problems as infeasible, stuck or falsely optimal when scaling is the real issue. To avoid this or correct such issues it is often desirable to check scaling and in turn rescale the model or ask the solvers to employ more aggressive scaling.\nIn terms of solver scaling most LP/MIP solvers do automatic scaling and a number have the option to apply a more aggressive scaling to numerically difficult models, e.g. Cplex option scaind, Gurobi option scaleflag, or Xpress option scaling. However, modelers can typically do better because they know how certain variables and equations are interrelated and can scale with common factors while the solvers do not have that knowledge. Generally, for difficult problems it is desirable to do the scaling as discussed below and then let the solver scale as well. Scaling is typically not a concern for small problems.
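As a small illustration of the solver-side route mentioned above, a more aggressive scaling setting can be passed to CPLEX through a solver option file. This is a hedged fragment rather than a complete program: it assumes an already declared LP model, here called mymodel, with objective variable z.\n$onecho > cplex.opt\n* scaind 1 requests more aggressive scaling than the default equilibration scaling\nscaind 1\n$offecho\nmymodel.optfile = 1;\nsolve mymodel using lp maximizing z;\nModel scaling as discussed next remains the stronger lever; the option only asks the solver to work harder on its own rescaling.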
Ideally, in doing a scaling exercise, one should modify the model so the absolute values of the constraint matrix coefficients are centered around one, including the derivatives of any nonlinear terms (at the starting point). This involves manipulating variable and equation scaling. The target of this manipulation is that, after scaling, dividing the largest matrix coefficient by the smallest gives a ratio of no more than 1000 to 10000, i.e. the coefficients would span from roughly 0.01 to 100. Arne Drud, the author of CONOPT, indicates in his CONOPT3 solver manual:\nBasic and superbasic solution values are expected to be around 1, e.g. from 0.01 to 100.\nNonbasic variables will be at a bound, and the bound values should not be larger than say 100.\nDual variables (or marginals) on active constraints are expected to be around 1, e.g. from 0.01 to 100. Dual variables on non-binding constraints will of course be zero.\nDerivatives (or Jacobian elements) are expected to be around 1, e.g. from 0.01 to 100.\nYou should select the unit of measurement for the variables so their expected value is around unity. After you have selected units for the variables you should select the unit of measurement for the equations so the expected values of the individual terms are around one. One can do this externally to GAMS or can employ scaling within GAMS. The basic concept is discussed in chapter 17 of McCarl and Spreen here and involves altering all coefficients for a variable by the same amount and doing the same for all coefficients in an equation. Consider the following example\nMax X1 - 500X2 - 400X3 - 500X4\ns.t. X1 - 10000X2 - 8000X3 \u0026le; 0\nX2 + 4X3 - 50X4 \u0026le; 0\n1500X2 + 2000X3 \u0026le; 1200\n50X2 + 45X3 \u0026le; 60\nX1, X2, X3, X4 \u0026ge; 0\nThis can be scaled to become\nMax 20X1 - X2 - X3 - 0.1X4\ns.t. X1 - X2 - X3 \u0026le; 0\n0.2X2 + X3 - X4 \u0026le; 0\nX2 + 1.667X3 \u0026le; 0.8\nX2 + 1.125X3 \u0026le; 1.2\nX1', X2', X3', X4' \u0026ge; 0\nThis would be done by\nDividing the coefficients in the first constraint by 10000.\nDividing the coefficients in the second constraint by 5.\nDividing the coefficients in the third constraint by 1500.\nDividing the coefficients in the fourth constraint by 50.\nDividing the coefficients in the objective function by 500.\nMultiplying the coefficients for X1 by 10000.\nLeaving the coefficients for X2 alone.\nMultiplying the coefficients for X3 by 1.25.\nMultiplying the coefficients for X4 by 1/10.\nBoth the original model and the manually scaled one are in the example scale2.gms .\nIn turn, after solution of this scaled model, one can recover the solution to the original model by appropriately multiplying and dividing by the scaling factors as explained in McCarl and Spreen. Specifically one would recover the solution to the original model by multiplying the scaled model solution values for variables by the variable scaling factor (so X1 would be the scaled solution level for X1 times 10000). Similarly the variable marginals would be multiplied by the objective function scaling factor divided by the variable scaling factor (so for X1 the marginal would be the scaled marginal times 500/10000). At the same time, the equation slacks or levels would be multiplied by the row scaling factor (for the first constraint multiplied by 10000) and the equation marginals by the objective function scaler divided by the constraint scaler (for the first constraint multiply by 500/10000). 
Finally, the objective function value would be multiplied by the objective function scalar.\nTwo questions arise from this. First, do you really need to do the manual scaling and descaling? Second, how do I form the scaling factors?\nOn the first question, you are in luck. GAMS has an easy way of specifying the scaling factors and automatically descales the solution so no manual procedures are required. This is covered in the McCarl Expanded User Guide in the scaling section. For the example above the example scale2.gms contains an implementation of the GAMS scaling procedure and the relevant part is.\nPARAMETER SCALEPROC(PROCESS) VARIABLE SCALING / x1 10000, x2 1, x3 1.25,x4 0.1/ PARAMETER SCALERESOR(RESOURCE) EQUATION SCALING /c1 10000, c2 5, c3 1500,c4 50/; scalar rhsscale /1/ scalar objscale /500/; PRODUCTION.scale(PROCESS)=SCALEPROC(PROCESS); objt.scale=objscale; AVAILABLE.scale(RESOURCE)=SCALERESOR(RESOURCE); RESALLOC.scaleopt=1; Where the code specifies the scaling factors, copies them into the GAMS scaling mechanism in turn specifies a model attribute that tells GAMS to solve the scaled model.\nOn the second question, one can get support on identifying coefficient magnitudes via the following approaches:\nOne can following Drud\u0026rsquo;s CONOPT3 document, use LIMROW, LIMCOL and search for large or small magnitude numbers within columns and rows then devise scaling factors. One can use GAMSCHK (see the solver manual on this here) with first the BLOCKLIST or BLOCKPIC commands to find blocks of variables or equations with large or small numbers or consistent departures from values of one and then later MATCHIT to find individual cases. These GAMSCHK procedures give largest and smallest magnitudes of coefficients in equations and variables so you don’t have to search. This can also be supported by use of DISPLAYCR. Additionally GAMS personnel mentioned use of the CONVERT solver with the option \u0026lsquo;jacobian\u0026rsquo;. This gives a gdx file with that contains the matrix coefficients which for nonlinear terms the gradients of the nonlinear terms (evaluated at the starting point- as limrow/col and GAMSCHK does). Then one can export that file to Excel and manipulate. Note when doing this Convert replaces all the internal variable and equation names. Subsequently the variables are named x1,x2,x3,\u0026hellip; and the equations e1,e2,e3,\u0026hellip;\u0026gt; To get back to the original names one needs to manually back translate using the CONVERT generated dictionary file. I have no experience with this but it just does not seem practical in large models as the GDX and Excel might be unwieldy and the back translation awkward. GAMS personnel also told me about a new option in Cplex 12.7 has a new option (datacheck=2) that \u0026ldquo;looks and reports\u0026rdquo; suspicious item in a model. In turn the log file (not the LST file) contains information about large and small coefficients, rhs values and anything else Cplex thinks can cause numerical instability. I tried this on what I considered an intermediate sized model and got messages like CPLEX Warning 1043: Detected righthand side \u003c= CPX_FEAS_TOL at constraint 'AGTILLSTART(US.TxTranspec.Cropland.Zero)'. CPLEX Warning 1045: Detected nonzero \u003c= CPX_CANCEL_TOL at constraint 'WELFAR', variable 'AGDEMANDS(2015.US.dom_demand.GrpfrtFrsh_White_Fla.2)'. CPLEX Warning 1045: Detected nonzero \u003c= CPX_CANCEL_TOL at constraint 'WELFAR', variable 'AGDEMANDS(2015.US.dom_demand.GrpfrtFrsh_White_Fla.18)'. 
CPLEX Warning 1045: Detected nonzero \u003c= CPX_CANCEL_TOL at constraint 'WELFAR', variable 'AGDEMANDS(2015.US.dom_demand.GrpfrtFrsh_White_Fla.26)'. CPLEX Warning 1045: Detected nonzero \u003c= CPX_CANCEL_TOL at constraint 'WELFAR', variable 'AGDEMANDS(2015.US.dom_demand.GrpfrtFrsh_White_Fla.41)'. CPLEX Warning 1045: Detected nonzero \u003c= CPX_CANCEL_TOL at constraint 'WELFAR', variable 'AGDEMANDS(2015.US.dom_demand.GrpfrtFrsh_White_Fla.45)'. CPLEX Warning 1045: Detected nonzero \u003c= CPX_CANCEL_TOL at constraint 'WELFAR', variable 'AGDEMANDS(2015.US.dom_demand.GrpfrtFrsh_White_Fla.49)'. CPLEX Warning 1045: Detected nonzero \u003c= CPX_CANCEL_TOL at constraint 'WELFAR', variable 'AGDEMANDS(2015.US.dom_demand.GrpfrtFrsh_White_Fla.60)'. CPLEX Warning 1045: Detected nonzero \u003c= CPX_CANCEL_TOL at constraint 'WELFAR', variable 'AGDEMANDS(2015.US.dom_demand.GrpfrtFrsh_White_Fla.71)'. CPLEX Warning 1045: Detected nonzero \u003c= CPX_CANCEL_TOL at constraint 'WELFAR', variable 'AGDEMANDS(2015.US.dom_demand.GrpfrtFrsh_Red_Tex.2)'. CPLEX Warning 1045: Detected nonzero \u003c= CPX_CANCEL_TOL at constraint 'WELFAR', variable 'AGDEMANDS(2015.US.dom_demand.GrpfrtFrsh_Red_Tex.18)'. CPLEX Warning 1045: Too many warnings of this type have been detected. All further warnings of this type will be ignored. CPLEX Warning 1047: Decimal part of coefficients in constraint 'WELFAR' are fractions and can be scaled with 17/1. CPLEX Warning 1047: Decimal part of coefficients in constraint 'AGPRODBAL(2015.US.CB.RefSugar.base)' are fractions and can be scaled with 47/1. CPLEX Warning 1047: Decimal part of coefficients in constraint 'AGPRODBAL(2015.US.CB.CornforDairyCattle.base)' are fractions and can be scaled with 70/1. CPLEX Warning 1047: Decimal part of coefficients in constraint 'AGPRODBAL(2015.US.GP.Oats.base)' are fractions and can be scaled with 71/1. CPLEX Warning 1047: Decimal part of coefficients in constraint 'AGPRODBAL(2015.US.GP.Canola.base)' are fractions and can be scaled with 11/1. CPLEX Warning 1047: Decimal part of coefficients in constraint 'AGPRODBAL(2015.US.GP.RefSugar.base)' are fractions and can be scaled with 47/1. CPLEX Warning 1047: Decimal part of coefficients in constraint 'AGPRODBAL(2015.US.GP.CornforDairyCattle.base)' are fractions and can be scaled with 70/1. CPLEX Warning 1047: Decimal part of coefficients in constraint 'AGPRODBAL(2015.US.LS.Oats.base)' are fractions and can be scaled with 71/1. CPLEX Warning 1047: Decimal part of coefficients in constraint 'AGPRODBAL(2015.US.LS.RefSugar.base)' are fractions and can be scaled with 47/1. CPLEX Warning 1047: Decimal part of coefficients in constraint 'AGPRODBAL(2015.US.LS.CornforDairyCattle.base)' are fractions and can be scaled with 70/1. CPLEX Warning 1047: Too many warnings of this type have been detected. All further warnings of this type will be ignored. CPLEX Warning 1048: Detected constraint with wide range of coefficients. In constraint 'WELFAR' the ratio of largest and smallest (in absolute value) coefficients is 4.48036e+020. CPLEX Warning 1048: Detected constraint with wide range of coefficients. In constraint 'AGPRODBAL(2015.US.CB.Hay.base)' the ratio of largest and smallest (in absolute value) coefficients is 100000. CPLEX Warning 1048: Detected constraint with wide range of coefficients. In constraint 'AGPRODBAL(2015.US.CB.CottonseedMeal.base)' the ratio of largest and smallest (in absolute value) coefficients is 100000. CPLEX Warning 1048: Detected constraint with wide range of coefficients. 
In constraint 'AGPRODBAL(2015.US.CB.CottonseedOil.base)' the ratio of largest and smallest (in absolute value) coefficients is 776300. CPLEX Warning 1048: Detected constraint with wide range of coefficients. In constraint 'AGPRODBAL(2015.US.GP.Hay.base)' the ratio of largest and smallest (in absolute value) coefficients is 100000. CPLEX Warning 1048: Detected constraint with wide range of coefficients. In constraint 'AGPRODBAL(2015.US.GP.CottonseedMeal.base)' the ratio of largest and smallest (in absolute value) coefficients is 100000. CPLEX Warning 1048: Detected constraint with wide range of coefficients. In constraint 'AGPRODBAL(2015.US.GP.CottonseedOil.base)' the ratio of largest and smallest (in absolute value) coefficients is 776300. CPLEX Warning 1048: Detected constraint with wide range of coefficients. In constraint 'AGPRODBAL(2015.US.LS.Silage.base)' the ratio of largest and smallest (in absolute value) coefficients is 101350. CPLEX Warning 1048: Detected constraint with wide range of coefficients. In constraint 'AGPRODBAL(2015.US.LS.CottonseedOil.base)' the ratio of largest and smallest (in absolute value) coefficients is 776300. CPLEX Warning 1048: Detected constraint with wide range of coefficients. In constraint 'AGPRODBAL(2015.US.NE.CottonseedOil.base)' the ratio of largest and smallest (in absolute value) coefficients is 776300. CPLEX Warning 1048: Too many warnings of this type have been detected. All further warnings of this type will be ignored.\nAcross these, not surprisingly since I wrote it, I prefer GAMSCHK, where BLOCKLIST and MATCHIT tell you where the big and small numbers are and then I use targeted displays (through DISPLAYCR) to look at things. The CPLEX information, backed by GAMSCHK DISPLAYCR, also looks good but is not available for other solvers. Note that when using the GAMS internal scaling feature, all of the above mentioned procedures report the coefficients after the GAMS internal scaling has been applied. The above GAMSCHK features are implemented at the bottom of the scale2.gms example and LIMROW is set large enough in there to display the model.\nfrom Bruce McCarl\u0026rsquo;s GAMS Newsletter No 41, July 2017\nArchive of all Newsletters ","excerpt":"\u003cp\u003e\u003cstrong\u003eSometimes models do odd things like reporting problems as infeasible, stuck or falsely optimal when scaling is the real issue. To avoid this or correct such issues it is often desirable to check scaling and in turn rescale the model or ask the solvers to employ more aggressive scaling.\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003eIn terms of solver scaling most LP/MIP solvers do automatic scaling and a number have the option to apply a more aggressive scaling to numerically difficult models, e.g. Cplex option scaind, Gurobi option scaleflag, or Xpress option scaling. However, modelers can typically do better because they know how certain variables and equations are interrelated and can scale with common factors while the solvers do not have that knowledge. Generally, for difficult problems it is desirable to do the scaling as discussed below and then let the solver scale as well. Scaling is typically not a concern for small problems.\u003c/p\u003e","ref":"/blog/2017/08/scaling/","title":"Scaling"},{"body":"I was talking to the GAMS staff and they informed me as to what their most common support calls involve. One of them involves fixing bad results. In this and the next couple of newsletters, I will cover how to diagnose problems within models that don’t work right. 
Also in the next section I will cover an integer programming issue that commonly comes up.\nYears ago I started writing a book length item called - So Your GAMS Model Didn’t WorkRight: A Guide to Model Repair but life got busy and I never finished it. (If someone wanted tohelp finish it please contact me). Here I will adapt material in it on infeasible models forpresentation here. So here we go.\nInfeasibility is always a possible outcome when solving models. Simplex based linearprogramming solvers handle infeasibility through a two or three step solution approach. First,there may be some presolution calculations which may determine a model cannot be madefeasible (as done for example by the CPLEX and CONOPT PRESOLVEs). Second, there isusually a Phase I operation wherein the sum of a set of implicitly added artificial variables isminimized. During this phase, the problem is artificially rendered feasible. Third, if theartificial variable values are all driven to zero, then the problem is declared feasible and thesolver turns to the real objective function and proceeds toward optimally.\nHowever, the problem may be declared infeasible by the presolve or the Phase I. In such cases, the information content of the output differs between solvers and may not be very helpful.\nCauses of Infeasible Models Causes of infeasibility are not always easily identified. Solvers may report a particular equation as infeasible in cases where an entirely different equation is the cause. Consider the following example,\nMax 50 x1 +50 x2 x1 + x2 \u0026le; 50 50 x1 + x2 \u0026le; 65 x1 \u0026ge; 20 x1, x2 \u0026ge; 0 In this example, the interaction between the constraint x1 ≥ 20, the constraint immediately above it, and the nonnegativity condition on x2 render the model infeasible while the first constraint has nothing to do with the infeasibility. There may be several potential explanations as to why the infeasibility is present. The 65 on the right hand side of the second constraint may be a data entry error, perhaps a number in excess of 1000 was intended. Similarly, the 50 coefficient multiplying x1 in the second constraint may be an error with a number more like 0.50 or a negative entry intended. Third, the limit requiring x1 ≥ 20 may be misspecified with the RHS really intended to be 0.20. Fourth, perhaps the X2 variable should have been allowed to be negative. Fifth, there could be multiple errors involving several of the above cases. Runs with CPLEX, BDMLP and MINOS5 resulted in the marking of either the x1 ≥ 20 or the second constraint as the infeasible item. This may or may not be a proper identification of the problem causing mistake.\nThis illustrates a general point that infeasibilities occur because of the interaction of multiple balance on variables and the restrictions imposed by equations. In more complex models a larger set of constraints and bounds could be involved and there may be thousands of other variable bounds and constraints that have nothing to do with the infeasibility. Thus, we need procedures to identify the infeasibility causing set of variable and equation restrictions. In turn then we can look for the root cause of the infeasibility in that narrowed down set.\nFinding Causes of Infeasibility \u0026ndash; Basic Theory There are two approaches I recommend for finding the set of infeasibility causing equations. The first approach relies on “artificial” variables and the second involves use of infeasibility finders that appear in a few solvers (i.e. 
the CPLEX conflict refiner, the IIS as in BARON, GUROBI and XPRESS; and the feasibility relaxation in CPLEX and GUROBI). I will only cover artificial variables here as their usage works with all solvers.\nArtificial variables are covered in virtually every introductory linear programming course or book. An artificial is an added variable that did not appear in the original model which is structured to make sure that the where it is inserted can all always be satisfied. Additionally, the model objective function is modified to provide a strong incentive to drive the artificial variables to zero. There are two ways such incentives are entered: through the “Big M” penalty method or through the “Phase I/ Phase II” optimization approach. Here we only cover the Big M method. In that case we augment the above model with an artificial variable as follows\nMax 50 x1 + 50 x2 -1000000 A x1 + x2 \u0026le; 50 50 x1 + x2 \u0026le; 65 x1 + A \u0026ge; 20 x1, x2, A \u0026ge; 0 with A being the artificial variable and the -1000000 being the objective function penalty. Here we only add one such variable but more generally artificials would be entered for each model equation which is not be satisfied when all decision variables equal zero (i.e., when x1 = 0 the x1 ≥ 20 constraint is not satisfied). In general, the added artificials would have a large, undesirable objective function coefficient (the so called “Big M value”) and an entry in an associated potentially infeasible equation.\nIdentifying Infeasible Causes Now let us go introduce the procedure for identifying the set of infeasibility causing constraints and variable bounds. This is done by solving the problem with artificials added and then using the solution information to identify the infeasibility causing equation and variable bound set. Now we illustrate the procedure using the example.\nA GAMS formulation of the above problem after including the artificial is\nvariable objmax;\npositive variables x1, x2, A\nequations obj,r1,r2,r3;\nobj.. objmax =e= 50*x1 +50*x2-1000000*A ;\nr1.. x1 + x2 =L= 50;\nr2.. 50*x1 + x2 =L= 65;\nr3.. x1 +A =G= 20;\nmodel infe /all/;\nsolve infe using lp maximizing objmax; Note here it is possible that a solver with a presolve can eliminate some equations and in turn possibly not report the proper shadow prices. (when using a solver with a presolve one might suppress it, for example using the CPLEX option preind 0 or the XPRESS option presolve 0).\nThe resultant relevant part of the Big M solution is\n**** SOLVER STATUS 1 Normal Completion **** MODEL STATUS 1 Optimal **** OBJECTIVE VALUE -18699935.0000 LOWER LEVEL UPPER MARGINAL ---- EQU obj . . . 1.000 ---- EQU r1 -INF 1.300 50.000 . ---- EQU r2 -INF 65.000 65.000 20001.000 ---- EQU r3 20.000 20.000 +INF -1.000E+6 LOWER LEVEL UPPER MARGINAL ---- VAR objmax -INF -1.870E+7 +INF . ---- VAR x1 . 1.300 +INF . ---- VAR x2 . . +INF -1.995E+4 ---- VAR A . 18.700 +INF . Where note the marginals on r2, r3 and x2 are large.\nThe question then is: So what? When a linear program is solved, the optimum solution contains items which are influenced by the objective function parameters for the non-zero variables. In particular the GAMS marginals or more classically, the shadow prices, reduced costs and objective function value.\nSo in this case with the artificial is in the basis, then relaxation of some of the right hand sides will cause that artificial to become smaller and they will have artificially large shadow prices. 
Similarly the reduced costs on some variables will be influenced by the presence of the artificial indicating that the artificial would get smaller if that variable could go below its lower bound (going negative in this case) or above its upper. Note this is the case for the marginals on r2, r3 and x2 which jointly indicates equations r1, and r2 plus x2\u0026gt;0 is the infeasibility causing set.\nGeneral procedure for finding the infeasibility causing set The following gives the steps for finding infeasibility causes using the Big M, artificial variable approach.\nStep 1 Identify the relevant equations and/or variable bounds for which artificials are needed to be added (details about this in next section)\nStep 2 Add artificial variables to those equations and bounds. These artificials each have a Big M penalty in the objective function and an entry in a single constraint.\nStep 3 Solve the model\nStep 4 Examine the model solution. Where the marginals (the reduced costs for the variables and the shadow prices for the equations) are distorted by the presence of the artificials, identify those as the variables and equations to be examined for the cause of infeasibility (note once identified then the modeler needs to examine those in the context of the problem hopefully finding the issue).\nStep 5 Fix the model and repeat the process if needed\nThere are several questions inherent in the above procedure. In particular: Where should artificial variables be added? How should the artificial variables be structured? and How does one find a “distorted” marginal? Each is discussed below.\nWhere Should one add Artificial Variables? The places where artificial variables should be added can be determined in several ways. One could look at the model solution and enter artificials in the equations and/or variable bounds marked by the solver as infeasible. However, while this sometimes points to proper places, it does not always do such. The approach advocated here is to add artificials in all possible infeasible locations although a different approach is in order for newly modified models as discussed below.\nProgramming models will only be infeasible when setting all the decision variables equal to zero is not feasible. This occurs when: a) the interval between variable upper and lower bounds does not include zero; or b) equations appear which are not satisfied when all variables are set to zero.\nThe equation cases which are not satisfied when the variables are set equal to zero are:\nLess than or equal to constraints with a negative right side i.e. x ≤ -1 Greater than or equal to constraints with a positive right side. i.e. x ≥1 Equality constraints with a nonzero right side i.e. x = 1 or x = -1 Additionally, when the interval between a variable’s lower and upper bounds does not include zero then those bounds need to be converted to constraints with artificials. This will occur when:\nThe lower bound is positive, or The upper bound is negative The ADVISORY and NONOPT \u0026ndash; IDENTIFY procedures in GAMSCHK have been written to create a list of all occurrences of these five cases.\nAdding artificials in newly modified models When a model that was feasible before has been newly modified and goes infeasible this raises a different artificial adding procedure. Namely, one can just add the artificials into the newly added constraints and/or bounds. Therein one may need to add artificials to newly added constraints or bounds that in fact are satisfied when all the variables are set to zero. 
This arises because the new constraints are possible members of the infeasibility causing set through their interaction with other constraints that previously could be satisfied. Here artificials would be added to the new =L= constraints with positive right hand sides or new =G= constraints with a negative right hand side. The exact structure these artificials follows the rules given in the next section.\nEntering Artificial Variables in GAMS Once one has found where to add the artificial variables one still has to address: How they should be added? and What should they look like? I recommend using the following general rules address this question.\nIn general a new GAMS variable that is specified to be positive should be defined for each identified potential infeasibility causing equation. This variable should have the same dimension as does the equation. Thus, if artificials are added to RESOUREQ(PLANT,RESOURCE), then the artificial should look like ARTRESOURQ(PLANT,RESOURCE). Artificials should be entered on the left-hand side of the equation with a coefficient of +1 in =L= equations and -1 in =G= equations. When the equation is an =E= the artificial should be a +1 if the equation right hand side is positive and -1 if it is negative. Note, one cannot add an artificial into variable bounds which are defined using LO, .UP or .FX syntax. One needs to convert these to =G=, to =L= and =E= equations and then add the artificials following the above rules on signs.\nOne will also need to enter a large objective function penalty for the artificials. The coefficient will be negative when the objective function is maximized and when it is minimized. The magnitude of this penalty is entirely problem dependent and can cause numerical problems in the solver. All that can be said in general is that the penalty should dwarf the other objective function coefficients and should be large enough so that the artificial is driven to zero in any feasible model.\nAlternatively, if numerical problems are plaguing the solution with the artificials entered, one can modify the objective function to one which minimizes some of the artificials. In the above example the simplest way to do this is to multiply the original objective function by zero and convert the penalty on the artificial to something like hundred so that the shadow prices are not extremely small. This means the model becomes\nvariable objmax; positive variables x1, x2, A equations obj,r1,r2,r3; obj.. objmax =e= 0*(50*x1 +50*x2)-100*A ; r1.. x1 + x2 =L= 50; r2.. 50*x1 + x2 =L= 65; r3.. x1 +A =G= 20; model infe /all/; OPTION LP=BDMLP; solve infe using lp maximizing objmax; And yields a solution\n**** OBJECTIVE VALUE -1870.0000 LOWER LEVEL UPPER MARGINAL ---- EQU obj . . . 1.000 ---- EQU r1 -INF 1.300 50.000 . ---- EQU r2 -INF 65.000 65.000 2.000 ---- EQU r3 20.000 20.000 +INF -100.000 LOWER LEVEL UPPER MARGINAL ---- VAR objmax -INF -1870.000 +INF . ---- VAR x1 . 1.300 +INF . ---- VAR x2 . . +INF -2.000 ---- VAR A . 18.700 +INF . Which again identifies the same infeasibility causing set.\nHow Are Distorted Marginals Identified? The next question involves finding the distorted marginals. Under the BIG M method one reviews the output in the GAMS LST file reproduced above looking for marginals that have large absolute values or on our nonzero when working with the case just above where the original objective function was multiplied by zero. 
However, in models with thousands of variables and equations this information can be widely dispersed and difficult to find. The GAMSCHK procedure NONOPT can do this for you as when run it will automatically list out all items with marginals larger in absolute value than 10 to a filter value that is set through the Gams check option file (MARGFILT) for example adding the following to your code.\n$onEOLcom Modelname.optfile=1; //replace red part with the name of your mode) *write solver option file for gamschk File opt /gamschk.opt/; Put opt $onput Margfilt 1 $offput Putclose; *write gck file telling gamschk what to do File gck /%system.fn%.gck/; Put gck $onput nonopt $offput Putclose; *invoke gamschk as lp solver Option LP=GAMSCHK; Can you find what I did to the model? If you want to play around with this a little bit here is a challenge. I made a version of the gams model library model Egypt (where I lengthened the abominably short choice of parameter and set names so the model was more easily comprehended). In that model I made a few subtle modifications that caused to be infeasible. I have also added in the needed code to add needed artificials (to get them you need to activate the set global statement in first line in the code) plus I added in stuff to start up gamschk (again needing activation by removing the * from column one in the line just before the word gamschk appears in the code). As a hint my modifications involve nutrition and land availability.\nfrom Bruce McCarl\u0026rsquo;s GAMS Newsletter No 40 , May 2017\nArchive of all Newsletters ","excerpt":"\u003cp\u003eI was talking to the GAMS staff and they informed me as to what their most common supportcalls involve. One of them involves fixing bad results. In this and the next couple of newsletters. I will cover how to diagnose problems within models that don’t work right. Also in the next section I will cover an integer programming issue that commonly comes up.\u003c/p\u003e","ref":"/blog/2017/07/misbehaving-model-infeasible/","title":"Misbehaving Model – Infeasible"},{"body":"","excerpt":"","ref":"/index.json","title":""},{"body":"","excerpt":"","ref":"/blog/","title":"The GAMS blog"},{"body":" Area: energy\nProblem class: LP, MIP, MINLP\nOptimizing to combat climate change with carbon capture and storage GAMS at the U.S. Department of Energy The electricity generation sector in the U.S. is a major contributor of CO2 emissions. Thus emissions reductions from this sector will play a central role in any coordinated CO2 emission reduction effort aimed at combating climate change. One technology option that may help the electricity generation sector meet this challenge is carbon capture and storage (CCS). Carbon capture technologies can significantly reduce atmospheric emissions of CO2 from fossil fuel power plants. The captured CO2 is then transported through a network of pipelines and stored safely. A widespread deployment of these technologies is necessary to significantly reduce greenhouse gas emissions and contribute to a clean energy portfolio. But the deployment is both expensive and time-consuming: bringing such technologies online can take industries between 20 and 30 years.\nThe U.S. Department of Energy is using GAMS in two projects aimed at advancing carbon capture technologies. The NETL CO2 Capture, Transport, Utilization and Storage (CTUS) model optimizes potential networks of CO2 pipelines and storage infrastructure. The Carbon Capture Simulation Initiative (CCSI), founded by the U.S. 
Department of Energy, aims at making carbon capture technologies more easily available for industries. Their Optimization Toolset enables industry to rapidly assess and utilize these new technologies. GAMS is proud to be a part of these projects designed to make carbon capture a success.\nScreenshot of CTUS model user interface Analyzing Co2 transport and storage networks The U.S. Department of Energy uses GAMS to analyze potential CO2 emission reduction scenarios in which CCS may play a role in meeting emission goals. The NETL CO2 Capture, Transport, Utilization and Storage (CTUS) model developed by the DOE National Energy Technology Laboratory is written in GAMS. It optimizes by minimizing the cost of the transport and storage network, via a mixed integer program (MIP), evaluating potential networks of CO2 pipelines and storage infrastructure amenable to handling the transport and storage of captured CO2 from the CCS enabled electricity sector. This type of problem was particularly well-suited for GAMS due to the volume of data processed, the solution methodology, the ability to integrate with other modeling platforms, and the stringent requirements for solve time required of this capability in order to eventually integrate into more holistic energy-economy models.\nThus far, the CTUS model has been integrated into the National Energy Modeling System (NEMS) and is also being integrated into the MARKAL energy model. When integrated into NEMS as the CTUS sub-module, a detailed portrayal of carbon capture and storage in energy economy projections is rendered. Through this capability, cost variability and capacity constraints are introduced into the energy-economy forecast as it considers CCS systems as an option in climate mitigation scenarios. This capability makes possible identification of location and time specific volumes of CO2 transported and stored throughout the projection period. A version of CTUS has been modified and incorporated into the U.S. Energy Information Administration\u0026rsquo;s (EIA\u0026rsquo;s) version of NEMS and is in turn used to produce the Annual Energy Outlook.\nThe CTUS model The CCSI optimization toolset It is the express goal of the Carbon Capture Simulation Initiative to speed up the deployment process of carbon capture technologies. Founded by the U.S. Department of Energy in 2011, CCSI is a partnership among national laboratories, industry and academic institutions. The CCSI optimization toolset helps industry to develop and deploy advanced carbon capture and energy related technologies.\nThe technical and economic performance of a new technology is strongly dependent on its equipment configuration and operating conditions. Thus, to rigorously screen and evaluate new technologies, it is important to ensure that an optimal process is used. The optimization tools identify optimal equipment configurations and operating conditions for potential CO2 capture processes, thereby significantly reducing cost, time and risk involved in the implementation.\nThe CCSI research group has developed two advanced optimization capabilities as part of its Framework for Optimization and Quantification of Uncertainty and Surrogates (FOQUS) tool. Both utilize GAMS as an essential element. The first tool performs simultaneous process optimization and heat integration based on rigorous models. The heat integration subproblem is modeled in GAMS as LPs and MIPs and solved by CPLEX. The other tool optimizes the design and operation of a CO2 capture system. 
The carbon capture system is represented as a MINLP model, which is implemented in GAMS and solved by DICOPT or BARON. By identifying the optimal configurations and conditions for CO2 capture processes, these CCSI optimization tools allow more effective screening of materials and concepts for future technologies.\nThe CCSI model The CCSI toolset includes Rigorous process models. Framework for Optimization, Quantification of Uncertainty and Surrogates (FOQUS), which enables simulation-based process optimization, heat integration (GAMS), uncertainty quantification, generating algebraic surrogate models and building dynamic reduced models. Advanced superstructure optimization models (GAMS). Advanced process control framework. Data management framework. ","excerpt":"The electricity generation sector in the U.S. is a major contributor of CO2 emissions. Thus emissions reductions from this sector will play a central role in any coordinated CO2 emission reduction effort aimed at combating climate change. One technology option that may help the electricity generation sector meet this challenge is carbon capture and storage (CCS). The U.S. Department of Energy is using GAMS in two projects aimed at advancing carbon capture technologies.","ref":"/stories/doe/","title":"DOE"},{"body":"","excerpt":"","ref":"/about/","title":""},{"body":"CONTACT US GAMS Software GmbH Germany Sales sales@gams.com Technical Support support@gams.com Academic Program academic@gams.com Phone (+49) 221 949-9170 Mail PO Box 4059, 50216 Frechen, Germany VAT-ID DE811975677 GAMS Development Corp USA Sales sales@gams.com Technical Support support@gams.com Academic Program academic@gams.com Phone (+1) 202 342-0180 Mail 2750 Prosperity Ave, Suite 500, Fairfax VA 22031 ","excerpt":"\u003ch2 id=\"contact-us\"\u003eCONTACT US\u003c/h2\u003e\n\u003chr\u003e\n\n\n\n\n\n\n\n\n\u003ctable class=\"table table-striped table-bordered table-hover col-8\"\u003e\n \u003cthead\u003e\n \u003ctr\u003e\n \u003cth\u003eGAMS Software GmbH\u003c/th\u003e\n \u003cth\u003eGermany\u003c/th\u003e\n \u003c/tr\u003e\n \u003c/thead\u003e\n \u003ctbody\u003e\n \u003ctr\u003e\n \u003ctd\u003eSales\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"mailto:sales@gams.com\"\u003esales@gams.com\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003eTechnical Support\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"mailto:support@gams.com\"\u003esupport@gams.com\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003eAcademic Program\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"mailto:academic@gams.com\"\u003eacademic@gams.com\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003ePhone\u003c/td\u003e\n \u003ctd\u003e(+49) 221 949-9170\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003eMail\u003c/td\u003e\n \u003ctd\u003ePO Box 4059, 50216 Frechen, Germany\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003eVAT-ID\u003c/td\u003e\n \u003ctd\u003eDE811975677\u003c/td\u003e\n \u003c/tr\u003e\n \u003c/tbody\u003e\n\u003c/table\u003e\n\n\n\u003chr\u003e\n\n\n\n\n\n\n\n\n\u003ctable class=\"table table-striped table-bordered table-hover col-8\"\u003e\n \u003cthead\u003e\n \u003ctr\u003e\n \u003cth\u003eGAMS Development Corp\u003c/th\u003e\n \u003cth\u003eUSA\u003c/th\u003e\n \u003c/tr\u003e\n \u003c/thead\u003e\n \u003ctbody\u003e\n \u003ctr\u003e\n \u003ctd\u003eSales\u003c/td\u003e\n \u003ctd\u003e\u003ca 
href=\"mailto:sales@gams.com\"\u003esales@gams.com\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003eTechnical Support\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"mailto:support@gams.com\"\u003esupport@gams.com\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003eAcademic Program\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"mailto:academic@gams.com\"\u003eacademic@gams.com\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003ePhone\u003c/td\u003e\n \u003ctd\u003e(+1) 202 342-0180\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003eMail\u003c/td\u003e\n \u003ctd\u003e2750 Prosperity Ave, Suite 500, Fairfax VA 22031\u003c/td\u003e\n \u003c/tr\u003e\n \u003c/tbody\u003e\n\u003c/table\u003e","ref":"/contact/","title":""},{"body":"","excerpt":"","ref":"/ssi_exports/footer/","title":""},{"body":"","excerpt":"","ref":"/ssi_exports/footer_miro/","title":""},{"body":"","excerpt":"","ref":"/ssi_exports/navbar_compact/","title":""},{"body":"","excerpt":"","ref":"/ssi_exports/navbar_full/","title":""},{"body":"","excerpt":"","ref":"/ssi_exports/wbody/","title":""},{"body":"","excerpt":"","ref":"/ssi_exports/wheader/","title":""},{"body":" About GAMS GAMS is one of the leading tool providers for the optimization industry, with offices in the US and Germany. With customers in more than 120 countries, GAMS is used by multinational companies, universities, research institutions and governments in many different areas, including the energy and chemical industries, for economic modeling, agricultural planning, or manufacturing. A Pioneer of Optimization Started as a project at the World Bank by an economic modeling group in the 1970s, GAMS was the first software system to combine the language of mathematical algebra with traditional concepts of computer programming in order to efficiently describe and solve optimization problems. Nowadays, algebraic modeling is considered to be the most productive way of implementing optimization models and decomposition methods for optimization problems. Our Mission At GAMS we care about algebraic modeling and optimization. We want to provide the best possible optimization software to make life easy for our customers. We innovate responsibly, and guarantee long term compatibility of our products with customer model code. A Long History with a Young Management Team GAMS became a commercial product in November 1987, when the GAMS Development Corporation was founded in Washington, D.C. by Alexander Meeraus, Richard C. Price, and Gary Kutcher. In November 1995 Alexander Meeraus and Franz Nelissen founded GAMS Software GmbH. Our current management board consists of Dr. Michael Bussieck, Dr. Franz Nelissen, and Dr. Steven Dirkse. Michael has a PhD in Mathematics from Technical University of Braunschweig, Germany. After working at GAMS Development in Washington as a senior optimization analyst, he became a managing partner of GAMS Software GmbH in 2004. In addition to his development responsibilities, he frequently engages in customer optimization projects that deliver cutting-edge optimization technology to clients from all industries. Franz holds a PhD in Agriculture from the University of Giessen, Germany. Franz started working for GAMS in 1995 and was appointed to GAMS' Development Board of Directors in 2010. 
Over the course of his career, Franz has worked with customers in government, academia and commercial markets, has consulted on optimization projects worldwide, and is responsible for business development and international project management. Steven received his PhD in Computer Science from UW-Madison in 1994. After a year of teaching math and CS at Calvin College, he joined the staff at GAMS Development in 1995, becoming Director of Optimization in 2003 and President in 2016. His primary focus has been in software development, notably solvers and solver links, data utilities, multi-threading, and quality control and performance testing. ","excerpt":"\u003csection\u003e\n \n \u003cdiv class=\"full-width \"\u003e\n \u003cdiv class=\"jumbotron jumbotron-fluid\"\u003e\n \u003cdiv class=\"container text-center\"\u003e\n \n \u003ch1 class=\"display-1\"\u003eAbout GAMS\u003c/h1\u003e\n \u003cp class=\"lead\"\u003eGAMS is one of the leading tool providers for the optimization industry, \n with offices in the US and Germany. With customers in more than 120 countries, GAMS is \n used by multinational companies, universities, research institutions and governments \n in many different areas, including the energy and chemical industries, for economic \n modeling, agricultural planning, or manufacturing.\n \u003c/p\u003e","ref":"/about/company/","title":"About Pages"},{"body":"General information Community licenses are for non-commercial, non-production use in an academic setting. You can find more information about licensing here .\nThere are two types of licenses, which can be generated in the academic user portal:\nLocal licenses are meant for installation on a PC or laptop. Up to two different computers can be used, and the GAMS installation will be locked to those computers. No internet connection is required when using GAMS. Network licenses are meant for use in docker, virtual machines, or similar settings. You can use up to two concurrent nodes with a network license. The machines always have to be connected to the internet. How to generate and install your license This video shows how to register for a community license and install it on Windows. The process is basically the same on a Mac. If you prefer written instructions, keep reading below the video.\nIf you have not done so yet, signup for a free account at https://academic.gams.com using your institutional email address.\nDownload a GAMS installation file at https://gams.com/download/ for your operating system and install GAMS.\nMake sure to update your name in the user portal, so it can be included in your license.\nGo to the portal dashboard , click on the correct \u0026ldquo;get your free community license\u0026rdquo; button (either node or network), and then hit the orange \u0026ldquo;Generate License\u0026rdquo; button\nYou will then see your personal license in the Licenses section\nCopy your Access Code into your clipboard\nStart the GAMS Studio application on your laptop or PC, and inside the application go to HELP \u0026gt; GAMS Licensing.\nClick on the white box that says \u0026ldquo;Access code\u0026rdquo; and paste your access code.\nClick on \u0026ldquo;Install License\u0026rdquo;. A message will appear asking if you want to \u0026ldquo;create a license file based on the selected license\u0026rdquo;. Click \u0026ldquo;Yes\u0026rdquo;.\nYour new Community License should be installed and visible. Click \u0026ldquo;Ok\u0026rdquo;. 
You are now ready to starting using GAMS Studio.\n","excerpt":"\u003ch2 id=\"general-information\"\u003eGeneral information\u003c/h2\u003e\n\u003cp\u003eCommunity licenses are for \u003cstrong\u003enon-commercial, non-production use in an academic setting\u003c/strong\u003e.\nYou can find more information about licensing \u003ca href=\"/latest/docs/UG_License.html#UG_License_Additional_Solver_Limits\" target=\"_blank\"\u003ehere\u003c/a\u003e\n.\u003c/p\u003e\n\u003cp\u003eThere are two types of licenses, which can be generated in the academic user portal:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003e\u003cstrong\u003eLocal licenses\u003c/strong\u003e are meant for installation on a PC or laptop. Up to two different computers can be used, and the GAMS installation will be locked to those computers. No internet connection is required when using GAMS.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eNetwork licenses\u003c/strong\u003e are meant for use in docker, virtual machines, or similar settings. You can use up to two concurrent nodes with a network license. The machines always have to be connected to the internet.\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch2 id=\"how-to-generate-and-install-your-license\"\u003eHow to generate and install your license\u003c/h2\u003e\n\u003cp\u003eThis video shows how to register for a community license and install it on Windows.\nThe process is basically the same on a Mac.\nIf you prefer written instructions, keep reading below the video.\u003c/p\u003e","ref":"/trygams/portal_instructions/","title":"Academic License Portal Instructions"},{"body":"","excerpt":"","ref":"/team/achristensen/","title":"Adam"},{"body":"GAMS Advisory Board The board advises GAMS Development and GAMS Software, focussing on scientific, technical, product, and business development matters. The members of the board help GAMS allocate their resources to develop products, features, and delivery channels that best serve our current and future customers, and ensure the continued growth of GAMS.\nBoard members:\nMichael Ferris (chair), University of Wisconsin-Madison, USA Laurent Drouet, CMCC, Italy Thorsten Koch, Zuse Institute Berlin, Germany Ricardo Lima, King Abdullah University Thuwal, Saudi Arabia Todd Munson, Argonne National Laboratory, USA Thomas F Rutherford, University of Wisconsin-Madison, USA Nikolaos Sahinidis, Georgia Tech, Atlanta, USA Dimitri Tomanos, ENGIE Impact Brussels, Belgium Golbon Zakeri, University of Massachusetts Amherst, USA We thank our former advisory board members:\nAbhijit Bora, PROS, USA Josef Kallrath Bruce McCarl, Texas A\u0026amp;M University, USA Alexander Meeraus Ruth Misener, Imperial College London, UK ","excerpt":"\u003ch1 id=\"gams-advisory-board\"\u003eGAMS Advisory Board\u003c/h1\u003e\n\u003cp\u003eThe board advises GAMS Development and GAMS Software, focussing on scientific, technical, product, and business development matters. 
The members of the board help GAMS allocate their resources to develop products, features, and delivery channels that best serve our current and future customers, and ensure the continued growth of GAMS.\u003c/p\u003e","ref":"/about/advisory/","title":"Advisory Board"},{"body":"","excerpt":"","ref":"/team/aalqershi/","title":"Ahmed"},{"body":"","excerpt":"","ref":"/team/aileen/","title":"Aileen"},{"body":"Title Publications Hobbies ","excerpt":"\u003ch1 id=\"title\"\u003eTitle\u003c/h1\u003e\n\u003ch4 id=\"publications\"\u003ePublications\u003c/h4\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/afust/","title":"Alex"},{"body":"André Schnabel Summary of Qualifications Dr. André Schnabel has a PhD in economics from the Leibniz University Hanover, Germany. Besides his studies in computer science and research on project scheduling he developed and sold one of the first non-official clones of Minecraft for mobile devices back in 2011 and did contract-based work as an external software developer. His current interests include applying machine learning methods for building algorithmic performance models and functional programming languages.\nProfessional Profile 2021 – today Operations Research Analyst, GAMS Software GmbH, Braunschweig, Germany 2013 – 2021 Scientific Assistant, Institute for Production Management, Leibniz University Hanover 2016 – 2019 External software development 2011 – 2013 Development \u0026amp; sale of a mobile 3D gaming app “Steinkraft” for iOS and Android 2010 – 2010 Student Assistant, Software Quality, Leibniz University Hanover 2009 – 2010 Student Assistant, Data Structures and Algorithms, Leibniz University Hanover Academic Degrees PhD, Economics, Leibniz University Hanover, Germany, 2020. MSc., Computer Science, Leibniz University Hanover, Germany, 2013. BSc., Computer Science, Leibniz University Hanover, Germany, 2011. Publications Journal publications Carolin Kellenbrink, Nicolas Nübel, André Schnabel, Philipp Gilge, Joerg R. Seume, Berend Denkena, Stefan Helber, 2022. \u0026ldquo;A regeneration process chain with an integrated decision support system for individual regeneration processes based on a virtual twin,\u0026rdquo; International Journal of Production Research, Taylor \u0026amp; Francis Journals, vol. 60(13), pages 4137-4158, July. André Schnabel, Carolin Kellenbrink, Stefan Helber, 2018. \u0026ldquo;Profit-oriented scheduling of resource-constrained projects with flexible capacity constraints,\u0026rdquo; Business Research, Springer, German Academic Association for Business Research, vol. 11(2), pages 329-356, September. Proceedings André Schnabel, Carolin Kellenbrink, 2018. Scheduling resource-constrained projects with makespan-dependent revenues and costly overcapacity, in: Proceedings of the 16th International Conference on Project Management and Scheduling , Rome (Italy), pp. 205-208. Working papers Insa Südbeck, Julia Mindlina, André Schnabel, Stefan Helber, 2022. \u0026ldquo;Using Recurrent Neural Networks for the Performance Analysis and Optimization of Stochastic Milkrun-Supplied Flow Lines,\u0026rdquo; Hannover Economic Papers (HEP) dp-703, Leibniz Universität Hannover, Wirtschaftswissenschaftliche Fakultät. André Schnabel, Carolin Kellenbrink, Stefan Helber, 2017.\nProfit-oriented scheduling of resource-constrained projects with flexible capacity constraints, Diskussionspapier Nr. 593 der Wirtschaftswissenschaftlichen Fakultät der Leibniz Universität Hannover. 
Thesis Dissertation/PhD Thesis\nHeuristiken für die gewinnorientierte Planung ressourcenbeschränkter Projekte mit erweiterbaren Kapazitäten Master’s Thesis\nEntwicklung einer Heuristik für den Testbedarf von Open Source Softwareprojekten auf einer Social Coding Site Bachelor’s Thesis\nVisualisierung der Ausbreitung von Informationen in einem Digitalen Sozialen Netzwerk ","excerpt":"\u003ch1 id=\"andré-schnabel\"\u003eAndré Schnabel\u003c/h1\u003e\n\u003cdiv class=\"container\"\u003e\n\u003cdiv class=\"row\"\u003e\n\u003cdiv class=\"col-md-3\"\u003e\n\u003cimg class=\"mb-3\" src=\"aschnabel-profile-picture.jpg\" width=\"100%\"\u003e\n\u003c/div\u003e\n\u003cdiv class=\"col-md-8\"\u003e\n\u003ch2 id=\"summary-of-qualifications\"\u003eSummary of Qualifications\u003c/h2\u003e\n\u003cp\u003eDr. André Schnabel has a PhD in economics from the Leibniz University Hanover, Germany. Besides his studies in computer science and research on project scheduling he developed and sold one of the first non-official clones of Minecraft for mobile devices back in 2011 and did contract-based work as an external software developer. His current interests include applying machine learning methods for building algorithmic performance models and functional programming languages.\u003c/p\u003e","ref":"/team/aschnabel/","title":"André"},{"body":"","excerpt":"","ref":"/team/akrewel/","title":"Anne"},{"body":"Title Publications Hobbies ","excerpt":"\u003ch1 id=\"title\"\u003eTitle\u003c/h1\u003e\n\u003ch4 id=\"publications\"\u003ePublications\u003c/h4\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/adar/","title":"Arfa"},{"body":"Assumptions CONOPT is based on the usual NLP model in which all variables are continuous and all constraints are smooth with smooth first derivatives. In addition, the Jacobian (the matrix of first derivatives) is assumed to be sparse. CONOPT attempts to find a local optimum satisfying the usual Karish-Kuhn-Tucker optimality conditions.\nThe nonlinear functions defining the model and their analytic derivatives are assumed to be computable with high accuracy.\n2nd derivatives are needed in some components of CONOPT and models with many degrees of freedom can only be solved efficiently if 2nd derivatives are available.\nModels are assumed to be well scaled. CONOPT has an automatic scaling option, but nonlinear models are hard to scale automatically and a good user scaling is often crucial for large models.\nWarnings Models with discrete variables cannot be solved by CONOPT. Some modeling systems such as AIMMS, AMPL, and GAMS and the LINDO API provide a system around CONOPT (a Branch \u0026amp; Bound or an Outer Approximation algorithm) that can handle discrete variables.\nModels with non-differentiable functions may be submitted to CONOPT, but CONOPT will become less reliable and it may terminate in a point that is not a local optimum.\nDense models can also be solved with CONOPT, but computing time may be slightly higher than for algorithms using dense linear algebra.\nCONOPT will usually not work well with noisy functions. In particular, nonlinear functions based on iterative solution of sub-models or numerical integration of differential equations will usually create problems for CONOPT. Derivatives computed with numerical differences are usually not sufficiently accurate.\nCONOPT cannot guarantee that the solution is the global optimum. The user must be familiar with the theory of local vs. global solutions and judge for himself. 
When models have multiple local optima or local minima for the sum of infeasibility objective them CONOPT may terminate in any of these points.\n","excerpt":"\u003ch1 id=\"assumptions\"\u003eAssumptions\u003c/h1\u003e\n\u003cp\u003eCONOPT is based on the usual NLP model in which all variables are continuous and all constraints are smooth with smooth first derivatives. In addition, the Jacobian (the matrix of first derivatives) is assumed to be sparse. CONOPT attempts to find a local optimum satisfying the usual Karish-Kuhn-Tucker optimality conditions.\u003c/p\u003e","ref":"/products/conopt/assumptions/","title":"Assumptions"},{"body":"","excerpt":"","ref":"/team/abhosekar/","title":"Atharv"},{"body":"Available forms CONOPT is available as an integrated or optional solver with many of the modern modeling systems and it is available as a Fortran Subroutine library.\nWe recommend that you always consider using CONOPT with a Modeling System. It is much easier and more reliable, especially during model development. Due to higher volume and lower support requirements the modeling system versions of CONOPT are also cheaper. Additional information on CONOPT for the various modeling systems is available directly from the vendors of these systems at the following addresses (given in alphabetical order):\nThe AIMMS Modeling System is available from AIMMS B.V.: http://www.aimms.com/ , email: info@aimms.com . The AMPL Modeling system is available from AMPL Optimization LLC: http://www.ampl.com , email: info@ampl.com . The GAMS modeling system is available from GAMS Development Corp. and from GAMS Software GmbH: http://www.gams.com/ , email: sales@gams.com . The LINDO API interface are available from LINDO Systems, Inc.: http://www.lindo.com/ , email: sales@lindo.com . The versions of CONOPT available with the modeling systems are all designed to follow the conventions of the modeling system and they are optimized for the particular model format of each modeling system. Most modeling systems and their CONOPT solver are available on a number of hardware platforms.\n","excerpt":"\u003ch1 id=\"available-forms\"\u003eAvailable forms\u003c/h1\u003e\n\u003cp\u003eCONOPT is available as an integrated or optional solver with many of the modern modeling systems and it is available as a Fortran Subroutine library.\u003c/p\u003e\n\u003cp\u003eWe recommend that you always consider using CONOPT with a Modeling System. It is much easier and more reliable, especially during model development. Due to higher volume and lower support requirements the modeling system versions of CONOPT are also cheaper. Additional information on CONOPT for the various modeling systems is available directly from the vendors of these systems at the following addresses (given in alphabetical order):\u003c/p\u003e","ref":"/products/conopt/forms/","title":"Available Forms"},{"body":"","excerpt":"","ref":"/team/bbrolet/","title":"Baudouin"},{"body":"Title Publications Hobbies ","excerpt":"\u003ch1 id=\"title\"\u003eTitle\u003c/h1\u003e\n\u003ch4 id=\"publications\"\u003ePublications\u003c/h4\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/busul/","title":"Burak"},{"body":"Clemens\u0026rsquo; Python Page hdf2gdx/gdx2hdf Use the python modules hdf2gdx and gdx2hdf to convert a hdf file to gdx file and the other way around. At present only hdf tables are supported and can be converted from hdf to gdx. 
Requirements: Python 2.6, numpy, PyTables, gdxcc module for Python that comes with GAMS.\nSee the manual that comes with the download for further informations.\nDownload: python_hdf_gdx.zip jacobian.py Jacobian.py is a little script that maps the equations and variables of a jacobian matrix \u0026ldquo;A\u0026rdquo; to informations found in a dictionary file. Requirements: Python 2.6, gdxcc module for Python that comes with GAMS.\nSee the readme.txt that comes with the download for further informations.\nDownload: python_jacobian.zip ","excerpt":"\u003ch1 id=\"clemens-python-page\"\u003eClemens\u0026rsquo; Python Page\u003c/h1\u003e\n\u003ch3 id=\"hdf2gdxgdx2hdf\"\u003ehdf2gdx/gdx2hdf\u003c/h3\u003e\n\u003cp\u003eUse the python modules hdf2gdx and gdx2hdf to convert a hdf file to gdx file and the other way around. At present only hdf tables are supported and can be converted from hdf to gdx. Requirements: Python 2.6, numpy, PyTables, gdxcc module for Python that comes with GAMS.\u003c/p\u003e","ref":"/team/cwestphal/","title":"Clemens"},{"body":"","excerpt":"","ref":"/community/","title":"Communities"},{"body":"GAMS Software GmbH P.O. Box 4059, 50216 Frechen, Germany Tel: +49 221 949-9170 E-mail sales@gams.com HRB 32878 Amtsgericht Koeln Geschaeftsfuehrer: Dr. Michael Bussieck \u0026amp; Dr. Franz Nelissen\n","excerpt":"\u003cp\u003eGAMS Software GmbH\nP.O. Box 4059, 50216 Frechen, Germany\nTel: +49 221 949-9170\nE-mail \u003ca href=\"mailto:sales@gams.com\"\u003esales@gams.com\u003c/a\u003e\n\nHRB 32878 Amtsgericht Koeln\nGeschaeftsfuehrer: Dr. Michael Bussieck \u0026amp; Dr. Franz Nelissen\u003c/p\u003e","ref":"/products/conopt/contact/","title":"CONOPT Contact Information"},{"body":"CONOPT is a solver for large-scale nonlinear optimization (NLP) originally developed by ARKI Consulting \u0026amp; Development A/S in Bagsvaerd, Denmark. In 2024 GAMS Software GmbH acquired CONOPT.\nThe CONOPT Algorithm Assumptions Available forms Versions of CONOPT Documentation ","excerpt":"\u003cp\u003eCONOPT is a solver for large-scale nonlinear optimization (NLP) originally developed by ARKI Consulting \u0026amp; Development A/S in Bagsvaerd, Denmark. 
In 2024 GAMS Software GmbH acquired CONOPT.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"./algorithm\"\u003eThe CONOPT Algorithm\u003c/a\u003e\n\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"./assumptions\"\u003eAssumptions\u003c/a\u003e\n\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"./forms\"\u003eAvailable forms\u003c/a\u003e\n\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"./versions\"\u003eVersions of CONOPT\u003c/a\u003e\n\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"./documentation\"\u003eDocumentation \u003c/a\u003e\n\u003c/p\u003e","ref":"/products/conopt/","title":"CONOPT Home Page"},{"body":"Please note: All contributed documentation and software has been moved to the GAMS-WORLD Forum: https://forum.gamsworld.org/viewforum.php?f=16\u0026sid=60f52f2a7983d94c0202a0834f780778 ","excerpt":"\u003ch2 id=\"please-note-all-contributed-documentation-and-software-has-been-moved-to-the-gams-world-forum\"\u003ePlease note: All contributed documentation and software has been moved to the GAMS-WORLD Forum:\u003c/h2\u003e\n\u003cp\u003e\u003ca href=\"https://forum.gamsworld.org/viewforum.php?f=16\u0026amp;sid=60f52f2a7983d94c0202a0834f780778\" target=\"_blank\"\u003ehttps://forum.gamsworld.org/viewforum.php?f=16\u0026sid=60f52f2a7983d94c0202a0834f780778\u003c/a\u003e\n\u003c/p\u003e","ref":"/community/contributed-documentation/","title":"Contributed documentation and software"},{"body":"Background Many real world applications involve complex models containing multiple data sources (include files or gdx files). In order to reproduce the problem, the GAMS support may need the complete model. There may be an issue with confidentiality. A model may contain:\nconfidential data confidential data structures (equation, parameter, variable names) confidential comments which the developer may wish to hide. GAMS has several utilities, which aid in conversion into scalar format and the removal of confidential information. In particular, GAMS/CONVERT is a utility which transforms a GAMS model instance into formats used by other modeling and solution systems.\nGAMS/CONVERT is designed to achieve the following goals:\nPermit users to convert a confidential model into GAMS scalar format so that any identifiable structure is removed. It can then be passed on to others for investigation without confidentiality being lost. A way of sharing GAMS test problems for use with other modeling systems or solvers. GAMS/CONVERT comes free of charge with any licensed GAMS system. Simple Conversion The GAMS/CONVERT utility can transform a single model instance (from a solve statement) into scalar format. The scalar format removes all confidential information (data and identifiable model data structures). To translate a model into a scalar format removing any identifiable modeling constructs, enter:\ngams mymodel.gms [modeltype]=convert where [modeltype] is the model type used (LP, NLP, MIP, etc.). If you have hard coded the solver inside your GAMS model via an option statement like option LP=bdmlp; or need to solve other models before the model instance you want to convert you should not set the [modeltype]=convert command line option, but add the run time option option [modeltype]=convert; in front of the solve statement of the model instance you want to convert. This will generate a scalar model called gams.gms which can be submitted to GAMS support or others without divulging any confidential information. 
Below is an example output generated from the trnsport model of the GAMS model library using the default settings:\n* LP written by GAMS Convert at 02/08/08 11:45:21 * * Equation counts * Total E G L N X C * 6 1 3 2 0 0 0 * * Variable counts * x b i s1s s2s sc si * Total cont binary integer sos1 sos2 scont sint * 7 7 0 0 0 0 0 0 * FX 0 0 0 0 0 0 0 0 * * Nonzero counts * Total const NL DLL * 19 19 0 0 * * Solve m using LP minimizing x7; * Variables x1,x2,x3,x4,x5,x6,x7; Positive Variables x1,x2,x3,x4,x5,x6; Equations e1,e2,e3,e4,e5,e6; e1.. - 0.225*x1 - 0.153*x2 - 0.162*x3 - 0.225*x4 - 0.162*x5 - 0.126*x6 + x7 =E= 0; e2.. x1 + x2 + x3 =L= 350; e3.. x4 + x5 + x6 =L= 600; e4.. x1 + x4 =G= 325; e5.. x2 + x5 =G= 300; e6.. x3 + x6 =G= 275; * set non default bounds * set non default levels * set non default marginals Model m / all /; m.limrow=0; m.limcol=0; Solve m using LP minimizing x7; Customized Scalar Model Output CONVERT allows customized scalar model output by making use of options. Please consult the CONVERT options in the CONVERT user guide for details.\nTo do so, the user must make a solver option file called convert.opt (or similarly using the GAMS option file naming conventions). As usual, the user must tell GAMS to use this option file, either by specifying\ngams mymodel.gms [modeltype]=convert optfile=1 from the command line, or\nmymodel.optfile = 1; option [modeltype]=convert; before the solve statement in the model.\nCustom scalar model name The default scalar model created is called gams.gms. In order to generate a scalar model with a different name, specify gams mymodel.gms in the convert.opt option file. The resulting output file will be called mymodel.gms.\nTerminate GAMS after solve Models may consist of several solves, whereas the user may wish to obtain the scalar model only from a single particular solve. The CONVERT option terminate causes GAMS to abort once the solve is complete. To do so, specify terminate in the convert.opt option file. GAMS will then terminate after the solve (and resulting scalar model conversion) is completed. Note that it may be necessary to specify different solvers before previous solves and then specify CONVERT as the solver only for the particular solve for which one wishes to obtain a scalar model.\nRenaming the objective variable in the scalar model By default, the scalar variable is called x1, x2, x3, or similar. If the user wishes to rename the objective variable to something more identifiable, the user can do so by specifying ObjVar myobj in the convert.opt option file. The resulting objective variable will then be called myobj. If the user only specifies ObjVar, the default is objvar.\nMapping scalar data structure names to original model names The scalar model uses data structures called x1, x2, x3,... for variables and eq1, eq2, eq3,... for equations. The user may wish to know how scalar data structures are mapped to the original variable and equation names. For example, GAMS support may identify infeasibilities in a particular scalar equation, which the user needs to identify in the original (non-scalar) model. To do so, CONVERT has an option called Dict. Specifying Dict mydict.txt in the convert.opt option file will generate a dictionary file called mydict.txt containing the mapping information. If only Dict is specified, the default file name is dict.txt. 
A sample dictionary file, obtained from running CONVERT on the transportation model ([[http://www.gams.com/modlib/libhtml/trnsport.htm|trnsport.gms]]) is given below:\nLP written by GAMS Convert at 02/08/08 11:56:03 Equation counts Total E G L N X C 6 1 3 2 0 0 0 Variable counts x b i s1s s2s sc si Total cont binary integer sos1 sos2 scont sint 7 7 0 0 0 0 0 0 FX 0 0 0 0 0 0 0 0 Nonzero counts Total const NL DLL 19 19 0 0 Equations 1 to 6 e1 cost e2 supply(seattle) e3 supply(san-diego) e4 demand(new-york) e5 demand(chicago) e6 demand(topeka) Variables 1 to 7 x1 x(seattle,new-york) x2 x(seattle,chicago) x3 x(seattle,topeka) x4 x(san-diego,new-york) x5 x(san-diego,chicago) x6 x(san-diego,topeka) x7 z Non-disclosure agreement We are happy to sign a non-disclosure agreement (NDA), also known as a confidentiality agreement (CA), in case this is required.\n","excerpt":"\u003ch2 id=\"background\"\u003eBackground\u003c/h2\u003e\n\u003cp\u003eMany real world applications involve complex models containing multiple data sources (include files or gdx files). In order to reproduce the problem, the GAMS support may need the complete model. There may be an issue with \u003cstrong\u003econfidentiality\u003c/strong\u003e. A model may contain:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003econfidential data\u003c/li\u003e\n\u003cli\u003econfidential data structures (equation, parameter, variable names)\u003c/li\u003e\n\u003cli\u003econfidential comments\nwhich the developer may wish to hide.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eGAMS has several utilities, which aid in conversion into scalar format and the removal of confidential information. In particular, \u003ca href=\"/latest/docs/S_CONVERT.html\" target=\"_blank\"\u003eGAMS/CONVERT\u003c/a\u003e\n is a utility which transforms a GAMS model instance into formats used by other modeling and solution systems.\u003c/p\u003e","ref":"/documents/convert/","title":"Conversion of (Confidential) Models For Submission to GAMS Support"},{"body":"Documentation The CONOPT versions available with modeling systems are covered by documentation from the modeling systems vendors.\n","excerpt":"\u003ch1 id=\"documentation\"\u003eDocumentation\u003c/h1\u003e\n\u003cp\u003eThe CONOPT versions available with modeling systems are covered by documentation from the modeling systems vendors.\u003c/p\u003e","ref":"/products/conopt/documentation/","title":"Documentation"},{"body":"","excerpt":"","ref":"/documents/","title":"Documents"},{"body":"Title Publications Hobbies ","excerpt":"\u003ch1 id=\"title\"\u003eTitle\u003c/h1\u003e\n\u003ch4 id=\"publications\"\u003ePublications\u003c/h4\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/dadami/","title":"Doriana"},{"body":"","excerpt":"","ref":"/downloadpage/","title":"Downloadpages"},{"body":"","excerpt":"","ref":"/exports/","title":"Exports"},{"body":"Franz Nelissen, PhD Franz Nelissen has a PhD in Agriculture from the University of Giessen, Germany, and is a Certified Project Management Associate (GPM). Franz started working for GAMS in 1995 and was appointed to GAMS\u0026rsquo; Development Board of Directors in 2010. 
Over the course of his career, Franz has worked with customers in government, academia and commercial markets, has consulted on optimization projects worldwide, and is responsible for business development and international project management.\nFranz is co-founder and managing partner of GAMS Software GmbH, which received the company award of the German Society of Operations Research (GOR) in 2010.\n","excerpt":"\u003ch1 id=\"franz-nelissen-phd\"\u003eFranz Nelissen, PhD\u003c/h1\u003e\n\u003cp\u003eFranz Nelissen has a PhD in Agriculture from the University of Giessen, Germany, and is a Certified Project Management Associate (GPM). Franz started working for GAMS in 1995 and was appointed to GAMS\u0026rsquo; Development Board of Directors in 2010. Over the course of his career, Franz has worked with customers in government, academia and commercial markets, has consulted on optimization projects worldwide, and is responsible for business development and international project management.\u003c/p\u003e","ref":"/team/fnelissen/","title":"Franz"},{"body":" Summary of Qualifications Frederik Fiand has a diploma degree in Financial Mathematics and Mathematical Economics from the Technical University of Braunschweig, Germany, where he also held a position as research associate at the Institute for Mathematical Optimization from 2013 to 2015. Since 2016 he has worked at GAMS Software GmbH, where he engages with users in technical support, delivers cutting-edge optimization technology to clients in customer optimization projects, and coordinates GAMS\u0026rsquo; activities in several interdisciplinary research projects.\nFred frequently gives lectures at universities and renowned international conferences.\nProfessional Profile 2016 – today Operations Research Analyst, GAMS Software GmbH, Braunschweig, Germany 2013 – 2015 Research Associate, Mathematical Optimization, Technical University Braunschweig, Germany Academic Degrees Dipl. math. oec. Technical University Braunschweig, Germany, 2013. Publications Articles in refereed Journals and Books BEAM-ME: Accelerating Linear Energy Systems Models by a Massively Parallel Interior Point Method, joint work with Thomas Breuer, Michael Bussieck, Karl-Kien Cao, Hans Christian Gils, Ambros Gleixner, Dmitry Khabi, Nils Kempke, Thorsten Koch, Daniel Rehfeldt, and Manuel Wetzel. NIC Symposium 2020 - Proceedings, M. Müller, K. Binder, and A. Trautmann (editors), pp.345-352 Paper Optimizing Large-Scale Linear Energy System Problems with Block Diagonal Structure by Using Parallel Interior-Point Methods, joint work with Thomas Breuer, Michael Bussieck, Karl-Kien Cao, Felix Cebulla, Hans Christian Gils, Ambros Gleixner, Dmitry Khabi, Thorsten Koch, Daniel Rehfeldt, and Manuel Wetzel. Operations Research Proceedings 2017, N. Kliewer, J. F. Ehmke, and R. 
Borndörfer (editors), pp.641-647 Paper Other publications BEAM-ME - Ein interdisziplinärer Beitrag zur Erreichung der Klimaziele, joint work with Thomas Breuer, Michael Bussieck, Karl-Kien Cao, Hans Christian Gils, Ambros Gleixner, Dmitry Khabi, Thorsten Koch, Daniel Rehfeldt, and Manuel Wetzel, OR News Nr.66, 2019 ","excerpt":"\u003cdiv class=\"container\"\u003e\n\u003cdiv class=\"row\"\u003e\n\u003cdiv class=\"col-md-3\"\u003e\n\u003cimg class=\"mb-3\" src=\"ffiand-profile-picture.jpg\" width=\"100%\"\u003e\n\u003c/div\u003e\n\u003cdiv class=\"col-md-8\"\u003e\n\u003ch2 id=\"summary-of-qualifications\"\u003eSummary of Qualifications\u003c/h2\u003e\n\u003cp\u003eFrederik Fiand has a diploma degree in Financial Mathematics and Mathematical Economics from the Technical University of Braunschweig, Germany, where he also held a position as research associate at the Institute for Mathematical Optimization from 2013 to 2015.\nSince 2016 he works at GAMS Software GmbH where he engages with users in technical support, delivers cutting-edge optimization technology to clients in customer optimization projects, and coordinates GAMS\u0026rsquo; activities in several interdisciplinary research projects.\u003c/p\u003e","ref":"/team/ffiand/","title":"Fred"},{"body":"Title Publications Hobbies ","excerpt":"\u003ch1 id=\"title\"\u003eTitle\u003c/h1\u003e\n\u003ch4 id=\"publications\"\u003ePublications\u003c/h4\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/fproske/","title":"Freddy"},{"body":"System Overview GAMS is a high level modeling system for mathematical programming and optimization. It consists of a language compiler and a range of associated solvers.\nThe GAMS modeling language allows modelers to quickly translate real world optimization problems into computer code. The gams language compiler then translates this code into a format the solvers can understand and solve. This architecture provides great flexibility, by allowing changing the solvers used without changing the model formulation.\nGAMS Free Solvers and Links included in the GAMS Base distribution\nOpen Source (COIN-OR): CBC , Ipopt , SHOT CONVERT , JAMS and LOGMIP , NLPEC MILES EXAMINER , GAMSCHK Academic licenses only: ODHeuristic (requires a GAMS/CPLEX or GAMS/CPLEX-Link license), SCIP , Soplex GAMS/KESTREL for using the NEOS Server with a local GAMS system Additional solvers can be purchased through us.\nThe GAMS Language at a Glance The GAMS language provides a natural way to describe your model. 
This is best highlighted with a commonly used simple example by Dantzig (1963):\nThe goal is minimization of the cost of shipping goods from two plants to three markets, subject to supply and demand constraints.\nIndices\n$i = $plants\n$j = $markets\nGiven data\n$a_{i} = $supply of commodity of plant $i$ (cases)\n$b_{j} = $demand for commodity at market $j$ (cases)\n$d_{ij} = $distance between plant $i$ and market $j$ (thousand miles)\n$c_{ij} = F \\times d_{ij}$ shipping cost per unit shipment between plant $i$ and market $j$ (dollars per case per thousand miles)\nPlants ↓ \ Markets →   New-York   Chicago   Topeka   Supply\nSeattle   2.5   1.7   1.8   350\nSan-Diego   2.5   1.8   1.4   600\nDemand   325   300   275\n$F = 90$ dollars per case per thousand miles\nDecision variables\n$x_{ij} = $amount of commodity to ship from plant $i$ to market $j$ (cases), where $x_{ij} \\ge 0$, for all $i,j$\nConstraints\nObserve supply limit at plant $i: \\sum_{j}{x_{ij}} \\le a_{i}$ for all $i$ (cases)\nSatisfy demand at market $j: \\sum_{i}{x_{ij}} \\ge b_{j}$ for all $j$ (cases)\nObjective Function\nMinimize $\\sum_{i}\\sum_{j}c_{ij}x_{ij}$ ($K)\nThe GAMS Model The above can easily be formulated using the GAMS language. The use of concise algebraic descriptions makes the model highly compact, with a logical structure. Internal documentation, such as explanation of parameters and units of measurement, makes the model easy to read.\nSets\n   i   canning plants   / Seattle, San-Diego /\n   j   markets          / New-York, Chicago, Topeka / ;\nParameters\n   a(i)  capacity of plant i in cases\n         / Seattle    350\n           San-Diego  600 /\n   b(j)  demand at market j in cases\n         / New-York   325\n           Chicago    300\n           Topeka     275 / ;\nTable d(i,j)  distance in thousands of miles\n              New-York  Chicago  Topeka\n   Seattle       2.5      1.7      1.8\n   San-Diego     2.5      1.8      1.4 ;\nScalar f  freight in dollars per case per thousand miles /90/ ;\nParameter c(i,j)  transport cost in thousands of dollars per case ;\nc(i,j) = f * d(i,j) / 1000 ;\nVariables\n   x(i,j)  shipment quantities in cases\n   z       total transportation costs in thousands of dollars ;\nPositive variables x ;\nEquations\n   cost       define objective function\n   supply(i)  observe supply limit at plant i\n   demand(j)  satisfy demand at market j ;\ncost .. z =e= sum((i,j), c(i,j)*x(i,j)) ;\nsupply(i) .. sum(j, x(i,j)) =l= a(i) ;\ndemand(j) .. sum(i, x(i,j)) =g= b(j) ;\nModel transport /all/ ;\nSolve transport using LP minimizing z ;\nThis short code-listing demonstrates the most important syntax features of the GAMS language. Below we will go through the individual statements one by one:\nSets Sets are the basic building blocks of a GAMS model, corresponding exactly to the indices in the algebraic representations of models. The Transportation example above contains just one Set statement:\nSets\n   i   canning plants   / Seattle, San-Diego /\n   j   markets          / New-York, Chicago, Topeka / ;\nThe effect of this statement is probably self-evident. We declared two sets and gave them the names $i$ and $j$. We also assigned members to the sets as follows:\n$i =$ {Seattle, San-Diego}\n$j =$ {New-York, Chicago, Topeka}.\nNote the use of forward slashes (\u0026quot;/\u0026quot;) for surrounding the list of set members. In mathematical notation this would be done with curly braces instead.\nParameters Parameters are one way of entering data in GAMS. 
In this case, the parameters $a$ and $b$ are defined over the sets $i$ and $j$.\nParameters\n   a(i)  capacity of plant i in cases\n         / Seattle    350\n           San-Diego  600 /\n   b(j)  demand at market j in cases\n         / New-York   325\n           Chicago    300\n           Topeka     275 / ;\nGAMS lets you place explanatory text (shown in lower case) throughout your model, as you develop it. Your comments are automatically incorporated into the output report, at the appropriate places.\nTable Data can also be entered in convenient table form. GAMS lets you input data in their basic form - transformations are specified algebraically.\nTable d(i,j)  distance in thousands of miles\n              New-York  Chicago  Topeka\n   Seattle       2.5      1.7      1.8\n   San-Diego     2.5      1.8      1.4 ;\nScalar Constants can simply be declared as scalars:\nScalar f  freight in dollars per case per thousand miles /90/ ;\nData Manipulation When data values are to be calculated, you first declare the parameter (i.e. give it a symbol and, optionally, index it), then give its algebraic formulation. GAMS will automatically make the calculations.\nParameter c(i,j)  transport cost in thousands of dollars per case ;\nc(i,j) = f * d(i,j) / 1000 ;\nVariables Decision variables are expressed algebraically, with their indices specified. From this general form, GAMS generates each instance of the variable in the domain. Variables are specified as to type: FREE, POSITIVE, NEGATIVE, BINARY, or INTEGER. The default is FREE. The objective variable (z, here) is simply declared without an index.\nVariables\n   x(i,j)  shipment quantities in cases\n   z       total transportation costs in thousands of dollars ;\nPositive variables x ;\nEquations Objective function and constraint equations are first declared by giving them names. Then their general algebraic formulae are described. GAMS now has enough information (from data entered above and from the algebraic relationships specified in the equations) to automatically generate each individual constraint statement - as you can see in the output report below. An extensive set of tools enables you to model any expression that can be stated algebraically: arithmetic, indexing, functions and exception-handling logic (e.g. if-then-else and such-that constructs).\n=E= indicates \u0026lsquo;equal to\u0026rsquo;\n=L= indicates \u0026lsquo;less than or equal to\u0026rsquo;\n=G= indicates \u0026lsquo;greater than or equal to\u0026rsquo;\nEquations\n   cost       define objective function\n   supply(i)  observe supply limit at plant i\n   demand(j)  satisfy demand at market j ;\ncost .. z =e= sum((i,j), c(i,j)*x(i,j)) ;\nsupply(i) .. sum(j, x(i,j)) =l= a(i) ;\ndemand(j) .. sum(i, x(i,j)) =g= b(j) ;\nModel statement The model is given a unique name (here, TRANSPORT), and the modeler specifies which equations should be included in this particular formulation. In this case we specified ALL which indicates that all equations are part of the model. This would be equivalent to MODEL TRANSPORT /COST, SUPPLY, DEMAND/ . This equation selection enables you to formulate different models within a single GAMS input file, based on the same or different given data.\nModel transport /all/ ;\nSolve statement The solve statement (1) tells GAMS which model to solve, (2) selects the solver to use (in this case an LP solver), (3) indicates the direction of the optimization, either MINIMIZING or MAXIMIZING , and (4) specifies the objective variable.\nSolve transport using LP minimizing z ; ","excerpt":"\u003ch1 id=\"system-overview\"\u003eSystem Overview\u003c/h1\u003e\n\u003cp\u003eGAMS is a high level modeling system for mathematical programming and optimization. 
It consists of a language compiler and a range of associated solvers.\u003c/p\u003e\n\u003cp\u003eThe GAMS modeling language allows modelers to quickly translate real world optimization problems into computer code. The gams language compiler then translates this code into a format the solvers can understand and solve. This architecture provides great flexibility, by allowing changing the solvers used without changing the model formulation.\u003c/p\u003e","ref":"/products/gams/gams-language/","title":"GAMS"},{"body":" Download GAMS Release 48.7.0 Released September 16, 2025 Please consult the release notes before downloading a system. We also have detailed platform descriptions and installation notes. The GAMS distribution includes the documentation in electronic form. A free demo license, valid for 5 months and with size restrictions, is included with every GAMS distribution. MS Windows Desktop and Server Operating Systems1 x86_64 architecture MD5 hash2 12335d2146a03d02e1cf470d6fd16637 Download GNU/Linux Systems on AMD/Intel CPUs x86_64 architecture MD5 hash2 a7e106c8b487eba49a48291a4d0bb150 Download Package Installer for macOS on Intel CPUs3 x86_64 architecture MD5 hash2 b0f84c17c6fcf3c6233494857f8a8d35 Download Package Installer for macOS on Apple M series CPUs3 arm64 architecture MD5 hash2 e2c1c6ab9f465cda072aa59f5fe6218c Download License Agreement \u0026times; By downloading our software you agree to our license agreement. Close Download (1)The SmartScreen Filter on Microsoft Windows might give a warning during the installation. For more information please check our Documentation. (2)Use a program like md5sum to verify. This should come preinstalled on most Linux systems. On Windows systems, open a powershell and enter Get-FileHash .\\windows_x64_64.exe -Algorithm MD5. On macOS systems, you can use md5 in the terminal. (3)For macOS, also a simple self-extracting archive is available, which you can download here (x86_64) and here (arm64). Note, that this archive does not contain GAMS Studio. The md5 hash for this download is f42e0fa2d3c4e1399582b36f5b64774e (x86_64) and 5a2f85bc1eda2b082810fac106d10d63 (arm64), respectively. To deliver GAMS with the best performance we are using the Amazon CloudFront web service, a global network of edge locations for content delivery. Stay in Touch Sign up for Our Newsletters To stay informed about new GAMS releases and features, and read highlights from our blog, sign up for our newsletter! 
Sign Up ","excerpt":"\u003cstyle type=\"text/css\"\u003e#formRequest .form-control { color: #000000; }\u003c/style\u003e\n\n\n\u003cdiv class=\"wrapper\"\u003e\n \u003cdiv id=\"GAMS_site_content\" class=\"gams-site\"\u003e\n\n \u003csection\u003e\n \u003ca name=\"c722\" id=\"c722\"\u003e\u003c/a\u003e\n \n \u003cdiv class=\"full-width\"\u003e\n \u003cdiv class=\"jumbotron jumbotron-fluid\"\u003e\n \u003cdiv class=\"container\"\u003e\n \u003ch1 id=\"GAMS_release_header\"\u003eDownload GAMS Release 48.7.0 \u003c/h2\u003e\n \u003ch3\u003eReleased September 16, 2025\u003c/h3\u003e\n \n \u003cdiv class=\"bg-warning\"\u003e\n \n \u003c/div\u003e\n \n \u003cp class=\"lead\"\u003e\n Please consult the \u003ca href=\"/48/docs/RN_48.html\" title=\"Release Notes\" target=\"_blank\"\n class=\"external-link-new-window\"\u003erelease notes\u003c/a\u003e before downloading a system.\n We also have \u003ca href=\"/48/docs/UG_PLATFORMS.html\" title=\"Detailed platform descriptions\" target=\"_blank\"\n class=\"external-link-new-window\"\u003edetailed platform descriptions\u003c/a\u003e\n and \u003ca href=\"/48/docs/UG_MAIN.html#UG_INSTALL\" title=\"Installation notes\" target=\"_blank\"\n class=\"external-link-new-window\"\u003einstallation notes\u003c/a\u003e.\n The GAMS distribution includes the \u003ca href=\"/48/docs/\" title=\"Documentation\"\n target=\"_blank\"\u003edocumentation\u003c/a\u003e in electronic form.\n A free demo license, valid for 5 months and with \u003ca href=\"/latest/docs/UG_License.html#UG_License_Additional_Solver_Limits\"\u003esize restrictions\u003c/a\u003e, is included with every GAMS distribution. \n \u003c/p\u003e","ref":"/downloadpage/48/","title":"GAMS - Download"},{"body":" Download GAMS Release 49.7.0 Released September 17, 2025 Please consult the release notes before downloading a system. We also have detailed platform descriptions and installation notes. The GAMS distribution includes the documentation in electronic form. A free demo license, valid for 5 months and with size restrictions, is included with every GAMS distribution. MS Windows Desktop and Server Operating Systems1 x86_64 architecture MD5 hash2 1f85d5692e6c16ea4a4a019e0f47b30b Download GNU/Linux Systems on AMD/Intel CPUs x86_64 architecture MD5 hash2 5a1d593d13d439eb8ceb715f19455b72 Download GNU/Linux Systems on ARM64 CPUs aarch64 architecture MD5 hash2 7d14f28c6139dcee02c61abb3b9ed947 Download Package Installer for macOS on Intel CPUs3 x86_64 architecture MD5 hash2 3947792ba25716c6183ecc52e6ea5c36 Download Package Installer for macOS on Apple M series CPUs3 arm64 architecture MD5 hash2 b008cbafdd0a38da23664ddcaa0a40b1 Download License Agreement \u0026times; By downloading our software you agree to our license agreement. Close Download (1)The SmartScreen Filter on Microsoft Windows might give a warning during the installation. For more information please check our Documentation. (2)Use a program like md5sum to verify. This should come preinstalled on most Linux systems. On Windows systems, open a powershell and enter Get-FileHash .\\windows_x64_64.exe -Algorithm MD5. On macOS systems, you can use md5 in the terminal. (3)For macOS, also a simple self-extracting archive is available, which you can download here (x86_64) and here (arm64). Note, that this archive does not contain GAMS Studio. The md5 hash for this download is fa5f1f4fcd119613b7784f6956d8adaf (x86_64) and 83e57193cb7eef215849290a8fc2a45f (arm64), respectively. 
To deliver GAMS with the best performance we are using the Amazon CloudFront web service, a global network of edge locations for content delivery. Previous Distributions Download older GAMS versions below GAMS 48, released October 14, 2024 GAMS 47, released June 13, 2024 Stay in Touch Sign up for Our Newsletters To stay informed about new GAMS releases and features, and read highlights from our blog, sign up for our newsletter! Sign Up ","excerpt":"\u003cstyle type=\"text/css\"\u003e#formRequest .form-control { color: #000000; }\u003c/style\u003e\n\n\n\u003cdiv class=\"wrapper\"\u003e\n \u003cdiv id=\"GAMS_site_content\" class=\"gams-site\"\u003e\n\n \u003csection\u003e\n \u003ca name=\"c722\" id=\"c722\"\u003e\u003c/a\u003e\n \n \u003cdiv class=\"full-width\"\u003e\n \u003cdiv class=\"jumbotron jumbotron-fluid\"\u003e\n \u003cdiv class=\"container\"\u003e\n \u003ch1 id=\"GAMS_release_header\"\u003eDownload GAMS Release 49.7.0 \u003c/h2\u003e\n \u003ch3\u003eReleased September 17, 2025\u003c/h3\u003e\n \n \u003cdiv class=\"bg-warning\"\u003e\n \n \u003c/div\u003e\n \n \u003cp class=\"lead\"\u003e\n Please consult the \u003ca href=\"/49/docs/RN_49.html\" title=\"Release Notes\" target=\"_blank\"\n class=\"external-link-new-window\"\u003erelease notes\u003c/a\u003e before downloading a system.\n We also have \u003ca href=\"/49/docs/UG_PLATFORMS.html\" title=\"Detailed platform descriptions\" target=\"_blank\"\n class=\"external-link-new-window\"\u003edetailed platform descriptions\u003c/a\u003e\n and \u003ca href=\"/49/docs/UG_MAIN.html#UG_INSTALL\" title=\"Installation notes\" target=\"_blank\"\n class=\"external-link-new-window\"\u003einstallation notes\u003c/a\u003e.\n The GAMS distribution includes the \u003ca href=\"/49/docs/\" title=\"Documentation\"\n target=\"_blank\"\u003edocumentation\u003c/a\u003e in electronic form.\n A free demo license, valid for 5 months and with \u003ca href=\"/latest/docs/UG_License.html#UG_License_Additional_Solver_Limits\"\u003esize restrictions\u003c/a\u003e, is included with every GAMS distribution. \n \u003c/p\u003e","ref":"/downloadpage/49/","title":"GAMS - Download"},{"body":" Download GAMS Release 50.5.0 Released September 18, 2025 Please consult the release notes before downloading a system. We also have detailed platform descriptions and installation notes. The GAMS distribution includes the documentation in electronic form. A free demo license, valid for 5 months and with size restrictions, is included with every GAMS distribution. MS Windows Desktop and Server Operating Systems1 x86_64 architecture MD5 hash2 135d668bf85af31bf3a2deda68ae6cb8 Download GNU/Linux Systems on AMD/Intel CPUs x86_64 architecture MD5 hash2 e2e7dd5f78c34bd3e1b589cc071890fe Download GNU/Linux Systems on ARM64 CPUs aarch64 architecture MD5 hash2 817692456edf850081e494718433c786 Download Package Installer for macOS on Intel CPUs3 x86_64 architecture MD5 hash2 0b7f3448d90d29b7f0a4f7d39b5a69d9 Download Package Installer for macOS on Apple M series CPUs3 arm64 architecture MD5 hash2 55d85da7942adeb19c5dff45d9f705f3 Download License Agreement \u0026times; By downloading our software you agree to our license agreement. Close Download (1)The SmartScreen Filter on Microsoft Windows might give a warning during the installation. For more information please check our Documentation. (2)Use a program like md5sum to verify. This should come preinstalled on most Linux systems. On Windows systems, open a powershell and enter Get-FileHash .\\windows_x64_64.exe -Algorithm MD5. 
On macOS systems, you can use md5 in the terminal. (3)For macOS, also a simple self-extracting archive is available, which you can download here (x86_64) and here (arm64). Note, that this archive does not contain GAMS Studio. The md5 hash for this download is ee25d56788d0141a94b69583b34b6a8e (x86_64) and 8eec38f0bf1098ba1b7f1006a97b1fc9 (arm64), respectively. To deliver GAMS with the best performance we are using the Amazon CloudFront web service, a global network of edge locations for content delivery. Previous Distributions Download older GAMS versions below GAMS 49, released February 15, 2025 GAMS 48, released October 14, 2024 Stay in Touch Sign up for Our Newsletters To stay informed about new GAMS releases and features, and read highlights from our blog, sign up for our newsletter! Sign Up ","excerpt":"\u003cstyle type=\"text/css\"\u003e#formRequest .form-control { color: #000000; }\u003c/style\u003e\n\n\n\u003cdiv class=\"wrapper\"\u003e\n \u003cdiv id=\"GAMS_site_content\" class=\"gams-site\"\u003e\n\n \u003csection\u003e\n \u003ca name=\"c722\" id=\"c722\"\u003e\u003c/a\u003e\n \n \u003cdiv class=\"full-width\"\u003e\n \u003cdiv class=\"jumbotron jumbotron-fluid\"\u003e\n \u003cdiv class=\"container\"\u003e\n \u003ch1 id=\"GAMS_release_header\"\u003eDownload GAMS Release 50.5.0 \u003c/h2\u003e\n \u003ch3\u003eReleased September 18, 2025\u003c/h3\u003e\n \n \u003cdiv class=\"bg-warning\"\u003e\n \n \u003c/div\u003e\n \n \u003cp class=\"lead\"\u003e\n Please consult the \u003ca href=\"/50/docs/RN_50.html\" title=\"Release Notes\" target=\"_blank\"\n class=\"external-link-new-window\"\u003erelease notes\u003c/a\u003e before downloading a system.\n We also have \u003ca href=\"/50/docs/UG_PLATFORMS.html\" title=\"Detailed platform descriptions\" target=\"_blank\"\n class=\"external-link-new-window\"\u003edetailed platform descriptions\u003c/a\u003e\n and \u003ca href=\"/50/docs/UG_MAIN.html#UG_INSTALL\" title=\"Installation notes\" target=\"_blank\"\n class=\"external-link-new-window\"\u003einstallation notes\u003c/a\u003e.\n The GAMS distribution includes the \u003ca href=\"/50/docs/\" title=\"Documentation\"\n target=\"_blank\"\u003edocumentation\u003c/a\u003e in electronic form.\n A free demo license, valid for 5 months and with \u003ca href=\"/latest/docs/UG_License.html#UG_License_Additional_Solver_Limits\"\u003esize restrictions\u003c/a\u003e, is included with every GAMS distribution. \n \u003c/p\u003e","ref":"/downloadpage/50/","title":"GAMS - Download"},{"body":" Download GAMS Release 51.4.0 Released November 10, 2025 Please consult the release notes before downloading a system. We also have detailed platform descriptions and installation notes. The GAMS distribution includes the documentation in electronic form. A free demo license, valid for 5 months and with size restrictions, is included with every GAMS distribution. MS Windows Desktop and Server Operating Systems1 x86_64 architecture MD5 hash2 414ce858bc7a378291780ec3ae778cc7 Download GNU/Linux Systems on AMD/Intel CPUs x86_64 architecture MD5 hash2 4d70f83bccfbf1555616770a67dc9e29 Download GNU/Linux Systems on ARM64 CPUs aarch64 architecture MD5 hash2 a3c6d161966a46f030c5aecaf325ee3e Download Package Installer for macOS on Intel CPUs3 x86_64 architecture MD5 hash2 6e2de30c8aa7a00d31c47f5644467094 Download Package Installer for macOS on Apple M series CPUs3 arm64 architecture MD5 hash2 9a827ea0d3c9fe8f46827dafd101b3a2 Download License Agreement \u0026times; By downloading our software you agree to our license agreement. 
Close Download (1)The SmartScreen Filter on Microsoft Windows might give a warning during the installation. For more information please check our Documentation. (2)Use a program like md5sum to verify. This should come preinstalled on most Linux systems. On Windows systems, open a powershell and enter Get-FileHash .\\windows_x64_64.exe -Algorithm MD5. On macOS systems, you can use md5 in the terminal. (3)For macOS, also a simple self-extracting archive is available, which you can download here (x86_64) and here (arm64). Note, that this archive does not contain GAMS Studio. The md5 hash for this download is 9b308530b69c108623d5fd0c15894d7a (x86_64) and 5f1a52ce0abf42c23ad1fa057562136a (arm64), respectively. To deliver GAMS with the best performance we are using the Amazon CloudFront web service, a global network of edge locations for content delivery. Previous Distributions Download older GAMS versions below GAMS 50, released June 18, 2025 GAMS 49, released February 15, 2025 GAMS 48, released October 14, 2024 Stay in Touch Sign up for Our Newsletters To stay informed about new GAMS releases and features, and read highlights from our blog, sign up for our newsletter! Sign Up ","excerpt":"\u003cstyle type=\"text/css\"\u003e#formRequest .form-control { color: #000000; }\u003c/style\u003e\n\n\n\u003cdiv class=\"wrapper\"\u003e\n \u003cdiv id=\"GAMS_site_content\" class=\"gams-site\"\u003e\n\n \u003csection\u003e\n \u003ca name=\"c722\" id=\"c722\"\u003e\u003c/a\u003e\n \n \u003cdiv class=\"full-width\"\u003e\n \u003cdiv class=\"jumbotron jumbotron-fluid\"\u003e\n \u003cdiv class=\"container\"\u003e\n \u003ch1 id=\"GAMS_release_header\"\u003eDownload GAMS Release 51.4.0 \u003c/h2\u003e\n \u003ch3\u003eReleased November 10, 2025\u003c/h3\u003e\n \n \u003cdiv class=\"bg-warning\"\u003e\n \n \u003c/div\u003e\n \n \u003cp class=\"lead\"\u003e\n Please consult the \u003ca href=\"/51/docs/RN_51.html\" title=\"Release Notes\" target=\"_blank\"\n class=\"external-link-new-window\"\u003erelease notes\u003c/a\u003e before downloading a system.\n We also have \u003ca href=\"/51/docs/UG_PLATFORMS.html\" title=\"Detailed platform descriptions\" target=\"_blank\"\n class=\"external-link-new-window\"\u003edetailed platform descriptions\u003c/a\u003e\n and \u003ca href=\"/51/docs/UG_MAIN.html#UG_INSTALL\" title=\"Installation notes\" target=\"_blank\"\n class=\"external-link-new-window\"\u003einstallation notes\u003c/a\u003e.\n The GAMS distribution includes the \u003ca href=\"/51/docs/\" title=\"Documentation\"\n target=\"_blank\"\u003edocumentation\u003c/a\u003e in electronic form.\n A free demo license, valid for 5 months and with \u003ca href=\"/latest/docs/UG_License.html#UG_License_Additional_Solver_Limits\"\u003esize restrictions\u003c/a\u003e, is included with every GAMS distribution. \n \u003c/p\u003e","ref":"/downloadpage/51/","title":"GAMS - Download"},{"body":" Download GAMS Release 52.0.0 BETA Released November 12, 2025 This is a BETA version of the software and not the final product. Use it at your own risk.\nPlease consult the release notes before downloading a system. We also have detailed platform descriptions and installation notes. The GAMS distribution includes the documentation in electronic form. A free demo license, valid for 5 months and with size restrictions, is included with every GAMS distribution. 
MS Windows Desktop and Server Operating Systems1 x86_64 architecture MD5 hash2 d4e692c1564372aae1ff0b96fc8a58ab Download GNU/Linux Systems on AMD/Intel CPUs x86_64 architecture MD5 hash2 13869cb2348e712eaef475c08a918981 Download GNU/Linux Systems on ARM64 CPUs aarch64 architecture MD5 hash2 2275bff6bceaf5b8c680c0f7805ec17e Download Package Installer for macOS on Intel CPUs3 x86_64 architecture MD5 hash2 ed070d3a5437ecd211423e7d51f7696e Download Package Installer for macOS on Apple M series CPUs3 arm64 architecture MD5 hash2 49c6ac5c263c958fc2c29a23a82bfeb1 Download License Agreement \u0026times; By downloading our software you agree to our license agreement. Close Download (1)The SmartScreen Filter on Microsoft Windows might give a warning during the installation. For more information please check our Documentation. (2)Use a program like md5sum to verify. This should come preinstalled on most Linux systems. On Windows systems, open a powershell and enter Get-FileHash .\\windows_x64_64.exe -Algorithm MD5. On macOS systems, you can use md5 in the terminal. To deliver GAMS with the best performance we are using the Amazon CloudFront web service, a global network of edge locations for content delivery. Previous Distributions Download older GAMS versions below GAMS 51, released September 13, 2025 GAMS 50, released June 18, 2025 GAMS 49, released February 15, 2025 Stay in Touch Sign up for Our Newsletters To stay informed about new GAMS releases and features, and read highlights from our blog, sign up for our newsletter! Sign Up ","excerpt":"\u003cstyle type=\"text/css\"\u003e#formRequest .form-control { color: #000000; }\u003c/style\u003e\n\n\n\u003cdiv class=\"wrapper\"\u003e\n \u003cdiv id=\"GAMS_site_content\" class=\"gams-site\"\u003e\n\n \u003csection\u003e\n \u003ca name=\"c722\" id=\"c722\"\u003e\u003c/a\u003e\n \n \u003cdiv class=\"full-width\"\u003e\n \u003cdiv class=\"jumbotron jumbotron-fluid\"\u003e\n \u003cdiv class=\"container\"\u003e\n \u003ch1 id=\"GAMS_release_header\"\u003eDownload GAMS Release 52.0.0 BETA \u003c/h2\u003e\n \u003ch3\u003eReleased November 12, 2025\u003c/h3\u003e\n \n \u003cdiv class=\"bg-warning\"\u003e\n \u003cp\u003eThis is a \u003cb\u003eBETA version\u003c/b\u003e of the software and not the final product. Use it at your own risk.\u003c/p\u003e \n \u003c/div\u003e\n \n \u003cp class=\"lead\"\u003e\n Please consult the \u003ca href=\"/52/docs/RN_52.html\" title=\"Release Notes\" target=\"_blank\"\n class=\"external-link-new-window\"\u003erelease notes\u003c/a\u003e before downloading a system.\n We also have \u003ca href=\"/52/docs/UG_PLATFORMS.html\" title=\"Detailed platform descriptions\" target=\"_blank\"\n class=\"external-link-new-window\"\u003edetailed platform descriptions\u003c/a\u003e\n and \u003ca href=\"/52/docs/UG_MAIN.html#UG_INSTALL\" title=\"Installation notes\" target=\"_blank\"\n class=\"external-link-new-window\"\u003einstallation notes\u003c/a\u003e.\n The GAMS distribution includes the \u003ca href=\"/52/docs/\" title=\"Documentation\"\n target=\"_blank\"\u003edocumentation\u003c/a\u003e in electronic form.\n A free demo license, valid for 5 months and with \u003ca href=\"/latest/docs/UG_License.html#UG_License_Additional_Solver_Limits\"\u003esize restrictions\u003c/a\u003e, is included with every GAMS distribution. 
\n \u003c/p\u003e","ref":"/downloadpage/52/","title":"GAMS - Download"},{"body":"\nGAMS Advertisements Deploy your GAMS models as interactive web application ; OR/MS-today Advertisement, April 2020 GAMS General Ad 2019 ; OR-News Advertisement, March 2019 A GAMS Application in the petroleum industry ; OR/MS-today Advertisement, October 2018 GAMS Studio ; OR/MS-today Advertisement, June 2018 GAMS-related Courses and Workshops ; OR/MS-today Advertisement, April 2018 The GAMS Community ; OR/MS-today Advertisement, February 2018 Rescheduling Exams at USMA with GAMS ; OR/MS-today Advertisement, December 2017 Object-oriented GAMS Application Programming Interfaces ; OR/MS-today Advertisement, August 2017 GAMS-related Courses and Workshops ; OR/MS-today Advertisement, June 2017 Reasons for GAMS ; OR/MS-today Advertisement, April 2017 SmartEnergyHub ; OR/MS-today Advertisement, February 2017 Effects of Proposed Trade Policies on Employment; OR/MS-today Advertisement, December 2016 Object-Oriented GAMS Application Programming Interfaces; OR/MS-today Advertisement, August 2016 Optimizing Carbon Capture Technologies: The CCSI Optimization Toolset; OR/MS-today Advertisement, June 2016 GAMS-related Courses and Workshops in 2016; OR/MS-today Advertisement, April 2016 Optimizing to combat Climate Change: CO2 Capture, Utilization, Transport, and Storage; OR/MS-today Advertisement, February 2016 GGIG - A GAMS Graphical Interface Generator; OR/MS-today Advertisement, December 2015 Long-term Energy Scenarios in the US with TIMES FACETS; OR/MS-today Advertisement, October 2015 Efficiency Benchmarking for the Finnish Energy Authority; OR/MS-today Advertisement, August 2015 PET - Energy Investment Modeling in Chile; OR/MS-today Advertisement, June 2015 GAMS-related Courses and Workshops; OR/MS-today Advertisement, April 2015 CyBio Scheduler - Scheduling Software for High Throughput Screening; OR/MS-today Advertisement, February 2015 West Point Academy Scheduler; OR/MS-today Advertisement, December 2014 Tommasino-Rao Input Output Balance Software (TRIOBAL); OR/MS-today Advertisement, October 2014 Fields of Fuel - A Multiplayer, Web-based Simulation Game; OR/MS-today Advertisement, August 2014 GAMS-related Courses and Workshops; OR/MS-today Advertisement, June 2014 A Water Management Decision Support System (DSS) for the Indus Basin; OR/MS-today Advertisement, April 2014 Object-Oriented GAMS Application Programming Interfaces; OR/MS-today Advertisement, January 2014 25 years GAMS Development Corporation since 1988; OR/MS-today Advertisement, December 2013 PAVER 2: The next generation of the GAMS Performance Tools; OR/MS-today Advertisement, October 2013 Object-Oriented GAMS Application Programming Interfaces; OR/MS-today Advertisement, August 2013 MINLP and Global Solvers in GAMS; OR/MS-today Advertisement, June 2013 INTEGRATION by CanmetENERGY; OR/MS-today Advertisement, April 2013 Transport Logistics at BASF; OR/MS-today Advertisement, February 2013 Interfacing GAMS with MATLAB\u0026#169; and R; OR/MS-today Advertisement, December 2012 DIMENSION - A Dispatch and Investment Model for European Electricity Markets; OR/MS-today Advertisement, October 2012 Day-Ahead Scheduling (DAS) Solver; OR/MS-today Advertisement, August 2012 The Network Enabled Optimization System (NEOS); OR/MS-today Advertisement, June 2012 MINLP and Global Solvers in GAMS; OR/MS-today Advertisement, April 2012 GAMS/EMP - An Extended Mathematical Programming Framework; OR/MS-today Advertisement, February 2012 -FinE Analytics - An advanced, 
flexible and light weight financial valuation and risk management framework. ; OR/MS-today Advertisement, December 2011 -Cutting Stock Optimization at GSE ; OR/MS-today Advertisement, October 2011 IMPACT - Modeling the Effects of Climate Change and Water Availability on Food Security ;OR/MS-today Advertisement, August 2011 HABITAT - A reserve selection tool for European wetland biodiversity conservation; OR/MS-today Advertisement, June 2011 ReMIND-R - A global energy economy climate model in a multi-regional setting; OR/MS-today Advertisement, April 2011 Resident Rotation Scheduling at the University of Wisconsin Madison Surgery Department; OR/MS-today Advertisement, February 2011 University course time tabling at the School of Economics and Management at Leibniz University Hannover; OR/MS-today Advertisement, December 2010 Integer Optimization for Identification of Drug Effects; OR/MS-today Advertisement, October 2010 ENERGY OPTIMA 2000; OR/MS-today Advertisement, August 2010 FACETS - An evolving Framework for Analysis of Climate-Energy-Technology Solutions; OR/MS-today Advertisement, June 2010 PROPHET Solutions \u0026ndash; RPS; OR/MS-today Advertisement, April 2010 Scheduling and Planning at BASF; OR/MS-today Advertisement, February 2010 GAMS available on the Amazon Elastic Compute Cloud; OR/MS-today Advertisement, December 2009 Granular Energy Forecasting Models; OR/MS-today Advertisement, October 2009 Decision Support Systems for the Energy Sector (SADSE); OR/MS-today Advertisement, August 2009 ERS/USDA China Agricultural Regional Model; OR/MS-today Advertisement, June 2009 AGMEMOD \u0026ndash; Agri-food projections for EU member states; OR/MS-today Advertisement, April 2009 Optimal transmission switching; OR/MS-today Advertisement, February 2009 BALMOREL; OR/MS-today Advertisement, December 2008 GAMS/SCENRED-2; OR/MS-today Advertisement, October 2008 The CAPRI (Common Agricultural Policy Regional Impact) Modeling System; OR/MS-today Advertisement, August 2008 FINLIB; OR/MS-today Advertisement, June 2008 GAMS/SCIP; OR/MS-today Advertisement, April 2008 GAMS/COIN-OR; OR/MS-today Advertisement, February 2008 GAMS 22.6 on Three New Platforms; OR/MS-today Advertisement, December 2007 Framework for Novel Mathematical Programming Reformulations; OR/MS-today Advertisement, October 2007 High Performance Computing: GAMS on Network.com; OR/MS-today Advertisement, August 2007 MINLP and Global solvers in GAMS; OR/MS-today Advertisement, June 2007 SCAplanner Interacting With GAMS; OR/MS-today Advertisement, February 2007 ProCom Optimization Suite; OR/MS-today Advertisement, December 2006 DemandTec Leverages GAMS to Drive Innovation in Retail and CPG Industries; OR/MS-today Advertisement, October 2006 Optimizing Machine Motion Using GAMS; OR/MS-today Advertisement, August 2006 Climate Policy Modeling with GAMS; OR/MS-today Advertisement, June 2006 Optience Core Application Builder; OR/MS-today Advertisement, April 2006 GAMS, Condor and the Grid; OR/MS-today Advertisement, February 2006 Global Public Policy Modeling; OR/MS-today Advertisement, December 2005 Charting Engine in GAMS; OR/MS-today Advertisement, October 2005 Grid Computing with GAMS; OR/MS-today Advertisement, August 2005 Windows 64 Support; OR/MS-today Advertisement, June 2005 Linux 64 and Macintosh PowerPC Support; OR/MS-today Advertisement, April 2005 Application Deployment; OR/MS-today Advertisement, February 2005 GAMS/NLP Solvers; OR/MS-today Advertisement, December 2004 Modeling for the Real World: GAMS/Solvers; OR/MS-today 
Advertisement, October 2004 GAMS/COIN; OR/MS-today Advertisement, August 2004 Quality Assurance; OR/MS-today Advertisement, June 2004 Branch-and-Cut with Heuristics; OR/MS-today Advertisement, April 2004 Quadratically Constrained Programs; OR/MS-today Advertisement, February 2004 Stochastic Programming; OR/MS-today Advertisement, December 2003 Global Solvers; OR/MS-today Advertisement, October 2003 Modeling for the Real World; OR/MS-today Advertisement, August 2003 Conic Programming in GAMS; OR/MS-today Advertisement, June 2003 LGO Global Optimization Solver; OR/MS-today Advertisement, April 2003 MPEC Solver NLPEC; OR/MS-today Advertisement, February 2003 Multi-Start Solver OQNLP; OR/MS-today Advertisement, December 2002 Global Optimization Solver BARON; OR/MS-today Advertisement, October 2002 Multi-method Solver CONOPT; OR/MS-today Advertisement, August 2002 Model Types, Solvers, Platforms; OR/MS-today Advertisement, June 2002 GAMS World: MPSGE World; OR/MS-today Advertisement, April 2002 GAMS World: Performance World; OR/MS-today Advertisement, February 2002 GAMS World: GAMS Translation Service; OR/MS-today Advertisement, December 2001 GAMS World: MPEC World; OR/MS-today Advertisement, October 2001 GAMS World: GLOBAL World; OR/MS-today Advertisement, August 2001 GAMS World: MINLP World; OR/MS-today Advertisement, June 2001 Enterprise Academy Management System/Scheduler; OR/MS-today Advertisement, April 2001 SAT Prophet; OR/MS-today Advertisement, February 2001 MESAP/PROFAKO; OR/MS-today Advertisement, December 2000 XOPT; OR/MS-today Advertisement, October 2000 WaterTarget; OR/MS-today Advertisement, August 2000 Riskontroller; OR/MS-today Advertisement, June 2000 UFEM-NPM Forecasting System; OR/MS-Today Advertisement, April 2000 International Impact Assessment Model (IIAM); OR/MS-Today Advertisement, February 2000 StarBlend application; OR/MS-today Advertisement, December 1999 RiskAdvisor system; OR/MS-today Advertisement, October 1999 MARKAL-MACRO model; OR/MS-today Advertisement, August 1999 Interfacing COIN-OR Solvers by GAMS ; GAMS-Poster GAMS/BARON ; GAMS-Poster GAMS/PATHNLP ; GAMS-Poster GAMS/SBB ; GAMS-Poster GAMS-X ; Poster from Collin Starkweather, Thomas Rutherford GAMS/Matlab ; Poster from Michael C. 
Ferris ","excerpt":"\u003cp\u003e\u003ca name=\"Ads\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003ch1\u003eGAMS Advertisements\u003c/h1\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"2020_04_orms_today_MIRO.pdf\"\u003eDeploy your GAMS models as interactive web application\u003c/a\u003e\n\u003c/strong\u003e; OR/MS-today Advertisement, April 2020\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"0319_AD_General_GAMS.pdf\"\u003eGAMS General Ad 2019\u003c/a\u003e\n\u003c/strong\u003e; OR-News Advertisement, March 2019\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"orms_2018_10_enap.pdf\"\u003eA GAMS Application in the petroleum industry\u003c/a\u003e\n\u003c/strong\u003e; OR/MS-today Advertisement, October 2018\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"orms_2018_06_studio.pdf\"\u003eGAMS Studio\u003c/a\u003e\n\u003c/strong\u003e; OR/MS-today Advertisement, June 2018\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"orms_2018_04_courses.pdf\"\u003eGAMS-related Courses and Workshops\u003c/a\u003e\n\u003c/strong\u003e; OR/MS-today Advertisement, April 2018\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"orms_2018_02_community.pdf\"\u003eThe GAMS Community\u003c/a\u003e\n\u003c/strong\u003e; OR/MS-today Advertisement, February 2018\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"orms_2017_12_usma.pdf\"\u003eRescheduling Exams at USMA with GAMS\u003c/a\u003e\n\u003c/strong\u003e; OR/MS-today Advertisement, December 2017\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"GAMS_Ad_2017_08_API.pdf\"\u003eObject-oriented GAMS Application Programming Interfaces\u003c/a\u003e\n\u003c/strong\u003e; OR/MS-today Advertisement, August 2017\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"GAMS_Ad_2017_06_Courses.pdf\"\u003eGAMS-related Courses and Workshops\u003c/a\u003e\n\u003c/strong\u003e; OR/MS-today Advertisement, June 2017\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"GAMS_Ad_2017_04_ORMS.pdf\"\u003eReasons for GAMS\u003c/a\u003e\n\u003c/strong\u003e; OR/MS-today Advertisement, April 2017\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"GAMS_Ad_2016_02_SEH.pdf\"\u003eSmartEnergyHub\u003c/a\u003e\n\u003c/strong\u003e; OR/MS-today Advertisement, February 2017\u003c/li\u003e\n\u003c/ul\u003e\n\u003cul style=\"margin-top: -1rem;\"\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2016_piie.pdf\"\u003eEffects of Proposed Trade Policies on Employment\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 2016\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2016_api.pdf\"\u003eObject-Oriented GAMS Application Programming Interfaces\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 2016\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2016_ccsi.pdf\"\u003eOptimizing Carbon Capture Technologies: The CCSI Optimization Toolset\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2016\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2016_courses.pdf\"\u003eGAMS-related Courses and Workshops in 2016\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, April 2016\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2016_netl.pdf\"\u003eOptimizing to combat Climate Change: CO2 Capture, Utilization, Transport, and Storage\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, February 2016\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2015_ggig.pdf\"\u003eGGIG - A GAMS Graphical Interface 
Generator\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 2015\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2015_times_facets.pdf\"\u003eLong-term Energy Scenarios in the US with TIMES FACETS\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 2015\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2015_stoned.pdf\"\u003eEfficiency Benchmarking for the Finnish Energy Authority\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 2015\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2015_pet_chile.pdf\"\u003ePET - Energy Investment Modeling in Chile\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2015\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2015_courses.pdf\"\u003eGAMS-related Courses and Workshops\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, April 2015\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2015_cybio.pdf\"\u003eCyBio Scheduler - Scheduling Software for High Throughput Screening\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, February 2015\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2014_westpoint.pdf\"\u003eWest Point Academy Scheduler\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 2014\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms2014_triobal.pdf\"\u003eTommasino-Rao Input Output Balance Software (TRIOBAL)\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 2014\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms2014_fof.pdf\"\u003eFields of Fuel - A Multiplayer, Web-based Simulation Game\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 2014\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2014_courses.pdf\"\u003eGAMS-related Courses and Workshops\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2014\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2014_indus.pdf\"\u003eA Water Management Decision Support System (DSS) for the Indus Basin\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, April 2014\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2013_api.pdf\"\u003eObject-Oriented GAMS Application Programming Interfaces\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, January 2014\n\u003cli\u003e\u003cb\u003e\u003ca href=\"or2013_25years.pdf\"\u003e25 years GAMS Development Corporation since 1988\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 2013\n\u003cli\u003e\u003cb\u003e\u003ca href=\"or2013_paver2.pdf\"\u003ePAVER 2: The next generation of the GAMS Performance Tools\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 2013\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2013_api.pdf\"\u003eObject-Oriented GAMS Application Programming Interfaces\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 2013\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_minlpglobal_2013.pdf\"\u003eMINLP and Global Solvers in GAMS\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2013\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2013_imagination.pdf\"\u003eINTEGRATION by CanmetENERGY\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, April 2013\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_2013_basf_train.pdf\"\u003eTransport Logistics at BASF\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, February 2013\n\u003cli\u003e\u003cb\u003e\u003ca href=\"2012_ormstoday_qdxrrw.pdf\"\u003eInterfacing GAMS with MATLAB\u0026#169; and R\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 2012\n\u003cli\u003e\u003cb\u003e\u003ca href=\"2012_ormstoday_dimension.pdf\"\u003eDIMENSION - A 
Dispatch and Investment Model for European Electricity Markets\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 2012\n\u003cli\u003e\u003cb\u003e\u003ca href=\"2012_ormstoday_das.PDF\"\u003eDay-Ahead Scheduling (DAS) Solver\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 2012\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_neos.pdf\"\u003eThe Network Enabled Optimization System (NEOS)\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2012\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_minlpglobal_2012.pdf\"\u003eMINLP and Global Solvers in GAMS\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, April 2012\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_emp.pdf\"\u003eGAMS/EMP - An Extended Mathematical Programming Framework\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, February 2012\n-\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_fine.pdf\"\u003eFinE Analytics - An advanced, flexible and light weight financial valuation and risk management framework. \u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 2011\n-\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_gse.pdf\"\u003eCutting Stock Optimization at GSE \u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 2011\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_impact.pdf\"\u003eIMPACT - Modeling the Effects of Climate Change and Water Availability on Food Security \u003c/a\u003e\u003c/b\u003e;OR/MS-today Advertisement, August 2011\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_habitat.pdf\"\u003eHABITAT - A reserve selection tool for European wetland biodiversity conservation\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2011\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_remind-r.pdf\"\u003eReMIND-R - A global energy economy climate model in a multi-regional setting\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, April 2011\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_pgy.pdf\"\u003eResident Rotation Scheduling at the University of Wisconsin Madison Surgery Department\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, February 2011\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_ucttl.pdf\"\u003eUniversity course time tabling at the School of Economics and Management at Leibniz University Hannover\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 2010\u003c/li\u003e\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_ddesign.pdf\"\u003e Integer Optimization for Identification of Drug Effects\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 2010\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_opticon.pdf\"\u003eENERGY OPTIMA 2000\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 2010\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_facets.pdf\"\u003eFACETS - An evolving Framework for Analysis of Climate-Energy-Technology Solutions\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2010\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_prophet_solutions.pdf\"\u003ePROPHET Solutions \u0026ndash; RPS\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, April 2010\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_basf.pdf\"\u003eScheduling and Planning at BASF\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, February 2010\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_aws.pdf\"\u003eGAMS available on the Amazon Elastic Compute Cloud\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 
2009\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_gem.pdf\"\u003eGranular Energy Forecasting Models\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 2009\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_sadse.pdf\"\u003eDecision Support Systems for the Energy Sector (SADSE)\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 2009\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_carm.pdf\"\u003eERS/USDA China Agricultural Regional Model\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2009\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_agemod.pdf\"\u003eAGMEMOD \u0026ndash; Agri-food projections for EU member states\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, April 2009\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_transmission.pdf\"\u003eOptimal transmission switching\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, February 2009\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_balmorel.pdf\"\u003eBALMOREL\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 2008\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_scenred_2.pdf\"\u003eGAMS/SCENRED-2\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 2008\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_capri.pdf\"\u003eThe CAPRI (Common Agricultural Policy Regional Impact) Modeling System\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 2008\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_pfo_book.pdf\"\u003eFINLIB\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2008\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_scip.pdf\"\u003eGAMS/SCIP\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, April 2008\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_coin-or.pdf\"\u003eGAMS/COIN-OR\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, February 2008\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_platforms.pdf\"\u003eGAMS 22.6 on Three New Platforms\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 2007\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_cvar.pdf\"\u003eFramework for Novel Mathematical Programming Reformulations\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 2007\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_sungrid.pdf\"\u003eHigh Performance Computing: GAMS on Network.com\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 2007\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_minlpglobal.pdf\"\u003eMINLP and Global solvers in GAMS\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2007\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_sca.pdf\"\u003e SCAplanner Interacting With GAMS\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, February 2007\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_procom.pdf\"\u003eProCom Optimization Suite\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 2006\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_demandtec.pdf\"\u003eDemandTec Leverages GAMS to Drive Innovation in Retail and CPG Industries\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 2006\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_convolve.pdf\"\u003eOptimizing Machine Motion Using GAMS\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 
2006\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_merge.pdf\"\u003eClimate Policy Modeling with GAMS\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2006\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_optience.pdf\"\u003eOptience Core Application Builder\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, April 2006\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_condor.pdf\"\u003eGAMS, Condor and the Grid\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, February 2006\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_pubpolicy.pdf\"\u003eGlobal Public Policy Modeling\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 2005\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_chart.pdf\"\u003eCharting Engine in GAMS\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 2005\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_grid.pdf\"\u003eGrid Computing with GAMS\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 2005\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_win64.pdf\"\u003eWindows 64 Support\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2005\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_64mac.pdf\"\u003eLinux 64 and Macintosh PowerPC Support\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, April 2005\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_appdeploy.pdf\"\u003eApplication Deployment\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, February 2005\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_nlpsolvers.pdf\"\u003eGAMS/NLP Solvers\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 2004\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_newsolvers.pdf\"\u003eModeling for the Real World: GAMS/Solvers\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 2004\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_coin.pdf\"\u003eGAMS/COIN\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 2004\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_qa.pdf\"\u003eQuality Assurance\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2004\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_bch.pdf\"\u003eBranch-and-Cut with Heuristics\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, April 2004\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_qcp.pdf\"\u003eQuadratically Constrained Programs\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, February 2004\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_stochastic.pdf\"\u003eStochastic Programming\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 2003\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_global.pdf\"\u003eGlobal Solvers\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 2003\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_general.pdf\"\u003eModeling for the Real World\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 2003\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_mosek.pdf\"\u003eConic Programming in GAMS\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2003\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_lgo.pdf\"\u003eLGO Global Optimization Solver\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, April 
2003\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_nlpec.pdf\"\u003eMPEC Solver NLPEC\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, February 2003\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_oqnlp.pdf\"\u003eMulti-Start Solver OQNLP\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 2002\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_baron.pdf\"\u003eGlobal Optimization Solver BARON\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 2002\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_conopt.pdf\"\u003eMulti-method Solver CONOPT\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 2002\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_gamswww.pdf\"\u003eModel Types, Solvers, Platforms\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2002\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_mpsge.pdf\"\u003eGAMS World: MPSGE World\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, April 2002\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_performance.pdf\"\u003eGAMS World: Performance World\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, February 2002\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_translate.pdf\"\u003eGAMS World: GAMS Translation Service\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 2001\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_mpecworld.pdf\"\u003eGAMS World: MPEC World\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 2001\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_globalworld.pdf\"\u003eGAMS World: GLOBAL World\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 2001\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_minlpworld.pdf\"\u003eGAMS World: MINLP World\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2001\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_eams.pdf\"\u003eEnterprise Academy Management System/Scheduler\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, April 2001\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_prophet.pdf\"\u003eSAT Prophet\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, February 2001\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_mesap.pdf\"\u003eMESAP/PROFAKO\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 2000\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_lg.pdf\"\u003eXOPT\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 2000\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_lm.pdf\"\u003eWaterTarget\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 2000\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_riskontrol.pdf\"\u003eRiskontroller\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, June 2000\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_hill.pdf\"\u003eUFEM-NPM Forecasting System\u003c/a\u003e\u003c/b\u003e; OR/MS-Today Advertisement, April 2000\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_cra.pdf\"\u003eInternational Impact Assessment Model (IIAM)\u003c/a\u003e\u003c/b\u003e; OR/MS-Today Advertisement, February 2000\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_starblend.pdf\"\u003eStarBlend application\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, December 1999\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca 
href=\"orms_risklab.pdf\"\u003eRiskAdvisor system\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, October 1999\u003c/li\u003e\n\u003cli\u003e\u003cb\u003e\u003ca href=\"orms_markal.pdf\"\u003eMARKAL-MACRO model\u003c/a\u003e\u003c/b\u003e; OR/MS-today Advertisement, August 1999\u003c/li\u003e\n\u003c/ul\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"poster_coin.pdf\"\u003eInterfacing COIN-OR Solvers by GAMS\u003c/a\u003e\n\u003c/strong\u003e; GAMS-Poster\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"poster_baron.pdf\"\u003eGAMS/BARON\u003c/a\u003e\n\u003c/strong\u003e; GAMS-Poster\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"poster_pathnlp.pdf\"\u003eGAMS/PATHNLP\u003c/a\u003e\n\u003c/strong\u003e; GAMS-Poster\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"poster_sbb.pdf\"\u003eGAMS/SBB\u003c/a\u003e\n\u003c/strong\u003e; GAMS-Poster\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"poster_gams-x.pdf\"\u003eGAMS-X\u003c/a\u003e\n\u003c/strong\u003e; Poster from Collin Starkweather, Thomas Rutherford\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"poster_matlab_link.pdf\"\u003eGAMS/Matlab\u003c/a\u003e\n\u003c/strong\u003e; Poster from Michael C. Ferris\u003c/li\u003e\n\u003c/ul\u003e","ref":"/archives/ads/","title":"GAMS Advertisements"},{"body":"How to get evaluation licenses lorem Mandaremus concursionibus ut ingeniis, deserunt quorum iudicem constias. O quo enim export tamen. Probant se sint hic nescius exquisitaque ut senserit id de multos deserunt graviterque iis do ea elit commodo de incididunt fore offendit constias si ingeniis malis in eiusmod tractavissent, ab magna fugiat amet nescius sed velit incurreret e litteris.Quamquam cohaerescant aut doctrina. Non ut fugiat nisi minim, sunt eruditionem doctrina illum singulis id ut export officia fidelissimae, pariatur non tempor. E eu quae incurreret id de dolor nescius voluptatibus.\n","excerpt":"\u003ch3 id=\"how-to-get-evaluation-licenses\"\u003eHow to get evaluation licenses\u003c/h3\u003e\n\u003cp\u003elorem Mandaremus concursionibus ut ingeniis, deserunt quorum iudicem constias. O quo\nenim export tamen. Probant se sint hic nescius exquisitaque ut senserit id de\nmultos deserunt graviterque iis do ea elit commodo de incididunt fore offendit\nconstias si ingeniis malis in eiusmod tractavissent, ab magna fugiat amet\nnescius sed velit incurreret e litteris.Quamquam cohaerescant aut doctrina. Non\nut fugiat nisi minim, sunt eruditionem doctrina illum singulis id ut export\nofficia fidelissimae, pariatur non tempor. 
E eu quae incurreret id de dolor\nnescius voluptatibus.\u003c/p\u003e","ref":"/eval/","title":"GAMS Evaluation license"},{"body":"GAMS Flyer GAMS Studio Datasheet , August 2019 GAMS MIRO Datasheet , July 2019 GAMS MIRO Datasheet , March 2019 GAMS General Flyer , February 2019 ","excerpt":"\u003ch1\u003eGAMS Flyer\u003c/h1\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"0819_US_Datasheet_Studio.pdf\"\u003eGAMS Studio Datasheet\u003c/a\u003e\n\u003c/strong\u003e, August 2019\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"0719_US_MIRO_Datasheet.pdf\"\u003eGAMS MIRO Datasheet\u003c/a\u003e\n\u003c/strong\u003e, July 2019\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"0319_US_MIRO_DATASHEET.pdf\"\u003eGAMS MIRO Datasheet\u003c/a\u003e\n\u003c/strong\u003e, March 2019\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ca href=\"0219_general_flyer_GAMS.pdf\"\u003eGAMS General Flyer\u003c/a\u003e\n\u003c/strong\u003e, February 2019\u003c/li\u003e\n\u003c/ul\u003e","ref":"/archives/flyer/","title":"GAMS Flyer"},{"body":"GAMS Presentations INFORMS Annual Meeting 2019 Michael Bussieck; Solving Large-Scale GAMS Models on HPC platforms OR2019 in Dresden Frederik Fiand; Solving Large-Scale GAMS Models on HPC platforms Robin Schuchmann \u0026amp; Lutz Westermann; GAMS MIRO – An interactive web interface 2019 INFORMS Business Analytics Conference in Austin TX Franz Nelissen \u0026amp; Frederik Proske; Deploying GAMS Models with GAMS MIRO (Technology Workshop) 2019 CAPD Annual Review Meeting (Carnegie Mellon) in Pittsburgh, PA, USA Frederik Proske; Model deployment in GAMS IEWT 2019 Hermann von Westerholt; Solving Large-Scale Energy Systems Models INFORMS Annual Meeting 2018, November 4-7, Phoenix, US Steve Dirkse; Enhanced Model Deployment and Solution in GAMS Steve Dirkse and Lutz Westermann; GAMS - An Introduction INREC 2018, September 24-25, Essen, Germany Frederik Fiand; Solving Large-Scale Energy System Models OR2018, September 12-14 2018, Brussels, Belgium Lutz Westermann; Enhanced Model Deployment in GAMS Frederik Fiand; GAMS and High-Performance Computing OR2018 Pre-Workshop, September 11 2018, Brussels, Belgium Frederik Fiand and Lutz Westermann; GAMS – An Introduction INFORMS International Meeting, June 17-20 2018, Taipei, Taiwan Frederik Proske; Exam scheduling at United States Military Academy West Point Franz Nelissen; Computing in the Cloud and High Performance Computing with GAMS INFORMS Annual Meeting, October 22-25 2017, Houston, TX Michael Bussieck, Steve Dirkse, Fred Fiand, Lutz Westermann; Pre-Conference Workshops Part I: An Introduction to GAMS Part II: Stochastic Programming in GAMS Part III: The GAMS Object-Oriented API\u0026rsquo;s Part IV: Code Embedding in GAMS Lutz Westermann; Embedded Code in GAMS - Using Python as an Example Fred Fiand \u0026amp; Michael Bussieck; High Performance Computing with GAMS International Conference on Operations Research, September 06-08, 2017, Berlin, Germany Frederik Proske, Robin Schuchmann; Exam scheduling at United States Military Academy West Point Lutz Westermann; Embedded Code in GAMS – Using Python as an Example Fred Fiand, Michael Bussieck; High Performance Computing with GAMS Franz Nelissen; A distributed Optimization Bot/Agent Application Framework for GAMS Models Fred Fiand, Franz Nelissen, Lutz Westermann; Pre-Conference Workshops Part I: An Introduction to GAMS Part II: Stochastic programming in GAMS Part III: The GAMS Object-Oriented API\u0026rsquo;s Part IV: Code embedding in 
GAMS examples.zip International Conference on Operations Research, August 30 to September 2, 2016, Hamburg, Germany Frederik Fiand, Tim Johannessen; GAMS - An Introduction (Pre-Conference GAMS Workshop) Franz Nelissen; Solving Scenarios in the Cloud International Conference on Operations Research, September 1 to September 4, 2015, Vienna, Austria Tim Johannessen, Franz Nelissen; GAMS - An Introduction (Pre-Conference GAMS Workshop) ISMP 2015, July 12-17 2015, Pittsburgh, U.S.A. Stefan Vigerske; (MI)NLPLib 2 INFORMS Conference on Business Analytics \u0026 Operations Research, April 12-14 2015, Huntington Beach CA Franz Nelissen; Technology Workshop: The Role and Impact of Algebraic Modeling Languages in and on Industrial Optimization 93rd meeting of the GOR working group Real World Mathematical Optimization, 93rd GOR Meeting, Mathematical Optimization in Industry, November 27-28 2014, Physikzentrum Bad Honnef, Germany Michael Bussieck, Franz Nelissen, Lutz Westermann: The Role and Impact of Algebraic Modeling Languages in and on Industrial Optimization International Conference on Operations Research, September 2 to September 5, 2014, Aachen, Germany Lutz Westermann; GAMS - An Introduction (Pre-Conference GAMS Workshop) Lutz Westermann; Recent Enhancements in GAMS Franz Nelissen; Design Principles that Make the Difference MAGO'14 - XII Global Optimization Workshop, September 1 to September 4, 2014, Malaga, Spain Stefan Vigerske; (MI)NLPLib 2 20th IFORS - The Art of Modeling, July 13-18 2014, Barcelona, Spain Toni Lastusilta; Recent Enhancements in GAMS GOR-Sitzung der Arbeitsgruppe Logistik und Verkehr, April 10-11 2014, Braunschweig, Germany Lutz Westermann; Multimodal Coal Transportation Model INFORMS Conference on Business Analytics and OR, March 30-April 01 2014, Boston MA Toni Lastusilta; Franz Nelissen; Technology Workshop: Design Principles that Make the Difference INFORMS Annual Meeting, October 06-09 2013, Minneapolis MN Steve Dirkse; Pre-Conference GAMS Workshop: The slides and the zip file containing the files to add the library Workshop Demonstration Models to the model library tab in the GAMS IDE. To add this model library to your GAMS system: Unzip the zip file in the GAMS system directory. This should create a directory demolib_ml next to the pre-existing datalib_ml, etc. With a text editor, update the file idecfg.ini in the GAMS system directory to make the IDE aware of this new library. Just follow the same pattern as the other libraries there. Steve Dirkse; Software Demonstration: The organization and content were very similar to the workshop mentioned above. 
International Conference on Operations Research, September 3 to September 6, 2013, Rotterdam, The Netherlands Lutz Westermann, Clemens Westphal; Pre-Conference GAMS Workshop at OR 2013: Part I: Features you might not know about Part II: Object Oriented GAMS API, Example Code Michael Bussieck; Open-source Quality Assurance and Performance Analysis Tools Lutz Westermann; Recent Enhancements in GAMS Clemens Westphal; Object Oriented GAMS API: Java, Python and .NET 26th EURO - INFORMS Joint International Conference, July 01-04 2013, Rome, Italy Toni Lastusilta; Recent Enhancements in GAMS Michael Bussieck; Open-source Quality Assurance and Performance Analysis Tools INFORMS Conference on Business Analytics and OR, April 07-09 2013, San Antonio TX Franz Nelissen; Technology Workshop: Deploying Optimization Applications - Concepts and Challenges - Franz Nelissen, Lutz Westermann; Software Tutorial: Deploying Your Application Built Around GAMS, C# - example files (VS 2010) 13th INFORMS Computing Society Conference (ICS), January 06-08 2013, Santa Fe, New Mexico, USA Steven Dirkse, Michael C. Ferris, Renger van Nieuwkoop: gdxrrw: Exchanging Data Between GAMS and R. The source for the talk, including all GAMS source, R source, etc., and a little README describing how to generate it with knitr and beamer, is available in this zip file. To generate the talk and run the latest examples, you will need the most recent version of GDXRRW (0.2.0) and a version of GAMS at least as recent as 23.9.4. Stefan Vigerske, Michael Bussieck, Steven Dirkse: Advanced Use of GAMS Solver Links The scripts to demonstrate usage of the solvetrace files are available in this zip file. You will need at least GAMS 24.0.2 to generate correct solve trace files. 89th meeting of the GOR working group Real World Mathematical Optimization, Hybrid Methods, November 15-16 2012, Physikzentrum Bad Honnef, Germany Michael Bussieck, Alex Meeraus, Lutz Westermann: Rapid Prototyping of Decomposition Algorithms Frederik Fiand: A Student Administration and Scheduling System for Federal Law Enforcement Training Center INFORMS 2012, Informatics Rising, October 14-17 2012, Phoenix, AZ Steven Dirkse; Michael Bussieck; Pre-Conference GAMS Workshop at INFORMS 2012: Part I: Development, Example Code Part II: Deployment Michael Bussieck; Object Oriented GAMS API: .NET and Beyond Steven Dirkse; gdxrrw: Exchanging Data Between GAMS and R. The source for the talk, including all GAMS source, R source, etc., and a little README describing how to generate it with knitr and beamer, is available here. To generate the talk and run the latest examples, you will need the most recent versions of GDXRRW (0.2.0) and GAMS (23.9.4). Lutz Westermann; Deploying Your Application Built Around GAMS and Example Code Lutz Westermann; Stochastic Programming in GAMS International Conference on Operations Research, September 4 to September 7, 2012, Hannover, Germany Toni Lastusilta; Extrinsic functions in GAMS Lutz Westermann; Stochastic Programming in GAMS Clemens Westphal; Object Oriented GAMS API - .NET and Beyond Lutz Westermann and Clemens Westphal; Pre-Conference Workshop - Stochastic Programming and Object Oriented GAMS API FORA-Symposium 2012, July 06 2012, Aachen, Germany Franz Nelissen; Models and Their Roles. 
INFORMS International Conference, June 23-27 2012, Beijing, China Toni Lastusilta, Alexander Meeraus, Franz Nelissen; Software Tutorial at the INFORMS 2012: Fundamentals and Recent Developments of the GAMS System 88th GOR Meeting, Challenges in Energy Economics and Optimization at EnBW, April 19-20, 2012, Karlsruhe, Germany Michael R. Bussieck; Rapid Prototyping of Decomposition Algorithms. Example GAMS models for download (gor88.zip) INFORMS Conference on Business Analytics and OR, Applying Science to the Art of Business, April 15-17 2012, Huntington Beach CA Franz Nelissen; Technology Workshop at the INFORMS 2012: Fundamentals and Recent Developments of the GAMS System Pete Steacy; Software Tutorial at the INFORMS 2012: GAMS Past-Present-Future and GAMS Features You Might Not Know International Conference on Operations Research, August 30 to September 2, 2011, Zurich, Switzerland Michael Bussieck; A Planning Tool for a Municipal Utility Company Lutz Westermann; Recent Enhancements in GAMS INFORMS Computing Society 2011, January 9th-11th 2011, Monterey, CA Steven Dirkse; GMO: GAMS' Next-Generation Model API and Millionaire Quiz Show Michael Ferris; An Extended Mathematical Programming Framework 85th meeting of the GOR working group, \"Real World Mathematical Optimization\", November 18+19 2010, Physikzentrum Bad Honnef, \"Modeling Languages in Mathematical Optimization \u0026ndash; Overview, Opportunities and Challenges in Application Development\" Jan-H. Jagla, Alex Meeraus: GAMS Past-Present-Future and GAMS features you might not know INFORMS 2010, Energising the Future, November 7-10 2010, Austin , TX Steven Dirkse; Michael Bussieck; Pre-Conference GAMS Workshop at INFORMS 2010: GAMS - How can I make this work ??? Arrgghh!! Steven Dirkse; GAMS Development Corporation - Rapid Application Prototyping with GAMS Michael Bussieck; Keep the Model Hot: A Scenario Solver for GAMS Michael Bussieck; Stochastic Optimization: Recent Enhancements in Algebraic Modeling Systems Steven Dirkse; GMO: GAMS' Next-Generation Model API International Conference Operations Research, Mastering Complexity, September 1st-3rd 2010, Universit\u0026auml;t der Bundeswehr M\u0026uuml;nchen, Germany Jan-Hendrik Jagla; Lutz Westermann; Pre-Conference GAMS Workshop: Presentation: GAMS - How can I make this work... arrgghh? Demo models: Demo Session \"Stochastic Programming Software\" Timo Lohmann; Stochastic Programming using Algebraic Modeling Languages Michael Bussieck; Recent Enhancements in Algebraic Modeling Systems Session \"Software\" Lutz Westermann; Recent Enhancements in GAMS Jan-Hendrik Jagla; Interactions between a Modeling System and Advanced Solvers Franz Nelissen; Using Utility Computing to provide Mathematical Programming Resources Michael Bussieck; Alex Meeraus; Franz Nelissen; Algebraic Modeling: Past, Present and Future 24th European Conference on Operational Research, July 11th-14th 2010, University of Lisbon, Portugal Jan-Hendrik Jagla; Lutz Westermann; Pre-Conference GAMS Workshop: Presentation: GAMS - How can I make this work... arrgghh? Demo models: Demo Lutz Westermann; Stochastic Optimization: Recent Enhancements in Algebraic Modeling Systems. Jan-Hendrik Jagla; Recent enhancements in GAMS. CPAIOR 2010, International Conference on Integration of AI and OR Techniques in Constraint Programming, June 14-18, 2010, Bologna, Italy Jan-H. 
Jagla; Alex Meeraus; GAMS - Benchmarking \u0026 Quality Assurance 83rd Working Group Meeting Real World Optimization, November 19-20, 2009, Workshop 'Mathematical Optimization in Transportation -Airline, Public Transport, Railway-', Bad Honnef, Germany Michael Bussieck; Column Generation in GAMS - Extending the GAMS Branch-and-Cut-and-Heuristic (BCH) Facility INFORMS 2009, INFORMing the Globe, October 11th-14th 2009, San Diego, CA Michael Bussieck; Lutz Westermann; Pre-Conference GAMS Workshop at INFORMS 2009: Module GAMS - General Algebraic Modeling System Module GAMS - Transportation Model Steven Dirkse; GDXMRW: Exchanging Data Between GAMS and Matlab Lutz Westermann; Rapid application prototyping with GAMS and Demo Library Alexander Meeraus; GAMS - Features you might not know about Michael Bussieck; GAMS Branch-and-cut and Heuristic Facility Paul van der Eijk; GAMS Data Exchange (GDX) Tools and Utilities Erwin Kalvelagen; Data and Software Interoperability with GAMS: A User Perspective 23rd European Conference on Operational Research, July 5th-8th 2009, University of Siegen, Germany Michael Bussieck; Jan-Hendrik Jagla; Pre-Conference GAMS Workshop at EURO 2009: Module GAMS - General Algebraic Modeling System Module GAMS - Transportation Model Franz Nelissen; Using Utility Computing to provide Mathematical Programming Resources. Alexander Meeraus; GAMS - Features you might not know about. Michael Bussieck; Stochastic Optimization: Recent Enhancements in Algebraic Modeling Systems. Lutz Westermann; Rapid application prototyping with GAMS and Demo Library. Jan-Hendrik Jagla; Formulating and solving non-standard model types using gams/emp. INFORMS Computing Society 2009, January 11th-13th 2009, Charleston, SC Steven Dirkse; GAMSWorld and the growing demand for reproducible computational experiments INFORMS 2008, OR goes to Washington, October 12th-15th 2008, Washington, DC Michael Bussieck; Jan-Hendrik Jagla; Lutz Westermann; Pre-Conference GAMS Workshop: Module GAMS - General Algebraic Modeling System Module GAMS - Sudoku and Workshop Library Steven Dirkse; GAMS Development Corporation - Rapid Application Prototyping with GAMS and Demo Library. Session: \"Using COIN-OR via GAMS\" Alexander Meeraus; Open-source Quality Assurance and Performance Analysis Tools. Michael Bussieck; GAMS Branch-and-Cut \u0026 Heuristic Facility. Stefan Vigerske; Hooking Your Solver to GAMS. Steven Dirkse; Holger Heitsch; Scenario Tree Generation for Stochastic Programming Models using GAMS/SCENRED. Operations Research and Global Business, September 3rd-5th 2008, University of Augsburg, Germany Michael Bussieck; Jan-Hendrik Jagla; Lutz Westermann; Pre-Conference GAMS Workshop at OR 2008: Module GAMS - General Algebraic Modeling System Module GAMS - Model Development - Using CHP as an example and Workshop Library Module GAMS - Grid Computing Jan-Hendrik Jagla; Michael Ferris; Alex Meeraus; GAMS' Extended Mathematical Programming Framework. Franz Nelissen; Is Utility Computing suitable for providing Mathematical Programming Resources? IFORS 2008, Operational Research: developing communities, managing the connections amongst them. July 2008, Sandton, South Africa Jan-Hendrik Jagla; Lutz Westermann; Pre-Conference GAMS Workshop at IFORS 2008: Module GAMS - General Algebraic Modeling System Module Model Development - Using CHP as an example and Workshop Library Lutz Westermann; Michael Bussieck; Global Optimization with GAMS. Jan-Hendrik Jagla; Michael Ferris; Alex Meeraus; Extended Mathematical Programming in GAMS. 
Jan-Hendrik Jagla; Lutz Westermann; Recent Enhancements in GAMS. APMOD 2008, Bratislava, Slovak Republic Alexander Meeraus; Franz Nelissen (Preconference Workshop): GAMS - An Introduction Franz Nelissen: Is Utility Computing suitable for providing Mathematical Programming Resources? 80th GOR Practice Meeting, Optimization in Manufacturing Execution Systems, April 2008, Ladenburg, Germany Jan-Hendrik Jagla; Manufacturing - Is there a Role for Algebraic Modeling Systems?. INFORMS November 2007, Preconference Workshop, Seattle, WA Jan-Hendrik Jagla; Lutz Westermann; GAMS Workshop at INFORMS: Module Introduction Module Transportation Model Module Interfacing with other Applications Module Grid Computing Module Benchmarking INFORMS November 2007, Session: \"Software Demonstration\", Seattle, WA Steven Dirkse; Rapid Application Prototyping using GAMS. INFORMS November 2007, Session: \"Optimizing Your Optimizer\", Seattle, WA Michael Bussieck, Steven Dirkse, Jan-H. Jagla, Alex Meeraus; Interactions between Modeling Systems and Advanced Solvers. INFORMS November 2007, Session: \"ICS 10th Anniversary Celebration and Quiz Show\", Seattle, WA Steven Dirkse; Panel Discussion and Quiz Show. INFORMS November 2007, Session: \"Interfacing to Coin-OR\", Seattle, WA Michael Bussieck, Steven Dirkse, Stefan Vigerske; COIN-OR/GAMSlinks: Hooking Your Solver to GAMS. Steven Dirkse; Solver Panel: Solver Independence GOR Workshop October 2007, Global Optimization, Bad Honnef, Germany Michael Bussieck, Steven Dirkse, Alex Meeraus; Global Optimization with GAMS. OR September 2007, Pre-Conference Workshop, Saarbr\u0026uuml;cken, Germany Jan-H. Jagla, Lutz Westermann; OR2007 GAMS Workshop. OR September 2007, Session: \"Continuous Optimization, COIN-OR: Open Source Software for Operations Research\", Saarbr\u0026uuml;cken, Germany Stefan Vigerske, Michael Bussieck; Interfacing COIN-OR solvers by GAMS. OR September 2007, Session: \"Continuous Optimization, Modern Algorithms and Software for Continuous Optimization\", Saarbr\u0026uuml;cken, Germany Lutz Westermann, Michael Bussieck; Global Optimization with GAMS. Jan-H. Jagla, Michael Bussieck; GAMS Recent Enhancements. OR September 2007, Session: \"Finance, Banking and Insurance, Applications in Finance\", Saarbr\u0026uuml;cken, Germany Franz Neli\u0026szlig;en; Grid Computing in Finance using an Algebraic Modeling System. INFORMS Intl July 2007, Session: \"Benchmarking Solvers: Who, What, When, Why, How, How Much?\", Puerto Rico Steven Dirkse; Preliminary Remarks. INFORMS Intl July 2007, Session: \"Software Seminars\", Puerto Rico Steven Dirkse, Alex Meeraus; Rapid Application Prototyping with GAMS INFORMS Intl July 2007, Session: \"Using Modeling Languages with COIN-OR\", Puerto Rico Steven Dirkse; Quality Assurance, Performance Analysis, and the GAMS/COIN-OR Solvers. EURO July 2007, Session: \"Preconference Workshop\", Prague, Czech Republic Jan-H. Jagla; GAMS - Workshop. EURO July 2007, Session: \"Mixed Integer Programming: The State-of-the-Art\", Prague, Czech Republic Michael Bussieck, Jan-H. Jagla, Stefan Vigerske; Performance of COIN-OR Solvers for the Solution of MINLPs Using GAMS. EURO July 2007, Session: \"COIN-OR: Open Source Software for OR II\", Prague, Czech Republic Michael Bussieck, Jan-H. Jagla, Stefan Vigerske; Hooking Your Solver to GAMS. EURO July 2007, Session: \"Software Presentation\", Prague, Czech Republic Jan-H. Jagla; Rapid Development of Optimization-based Decision Support Applications. 
78th GOR Meeting, Stochastic Optimization in the Energy Industry, April 2007, Aachen, Germany Michael R. Bussieck; Stochastic Optimization: Solvers and Tools. Annual Review Meeting, Center for Advanced Process Decision Making (CAPD) March 2007, CMU Pittsburgh, PA Jan-H. Jagla, Lutz Westermann; GAMS: Productivity - Performance - Reliability. INFORMS November 2006, Preconference Workshop, Pittsburgh, PA Steven Dirkse; Michael Bussieck; GAMS Workshop at INFORMS. INFORMS November 2006, Session: \"Software Demonstration\", Pittsburgh, PA Steven Dirkse; Michael Bussieck; Rapid Application Prototyping using GAMS. INFORMS November 2006, Session: \"Alan S. Manne (1925-2005): Innovative Energy, Environment and Operations Modeler\", Pittsburgh, PA Thomas Rutherford; Alan S. Manne: Innovative Approaches to Climate Policy Design. INFORMS November 2006, Session: \"High Throughput Optimization\", Pittsburgh, PA Michael Ferris; Thomas Rutherford; Decomposition and High Throughput Solution of Equilibrium Problems. INFORMS November 2006, Session: \"The Science of (Optimizing) Better\", Pittsburgh, PA Steven Dirkse; Michael Bussieck; Armin Pruessner; Performance Analysis of Grid-Enabled GAMS. INFORMS November 2006, Session: \"Cyberinfrastructure and Integer Programming\", Pittsburgh, PA Michael Ferris; Michael Bussieck; Solving Difficult Mixed-Integer Programming Problems Using GAMS and Condor. OR September 2006, Pre-Conference Workshop, Karlsruhe, Germany Michael R. Bussieck, Franz Neli\u0026szlig;en; OR2006 GAMS Workshop. OR September 2006, Session: Software Presentation, Karlsruhe, Germany Franz Neli\u0026szlig;en; Rapid Application Prototyping with GAMS. OR September 2006, Session: Stochastic Programming, Karlsruhe, Germany Franz Neli\u0026szlig;en; Portfolio Optimization: A Technical Perspective. OR September 2006, Session: Optimization Tools in Progress, Karlsruhe, Germany Michael R. Bussieck, Michael C. Ferris; Solving Difficult MIP Problems using GAMS and Condor. EURO July 2006, Stream: Software for OR/MS, Reykjavik, Iceland Michael R. Bussieck, Michael C. Ferris; Solving Difficult MIP Problems using GAMS and Condor. APMOD June 2006, Session: OR Software; New Developments, Madrid, Spain Franz Neli\u0026szlig;en; Rapid Application Prototyping using GAMS. CORS/OPTIMIZATION DAYS May 2006, Montreal, Canada Alex Meeraus, Michael C. Ferris; High Throughput Computing and Sampling Issues for Optimization in Radiotherapy. INFORMS November 2005, Session \"Software Demo\", San Francisco, CA Michael Bussieck, Steven Dirkse; Rapid Application Development \u0026 Grid Computing Using GAMS. Also see the accompanying handout. INFORMS November 2005, Session \"INFORMS Computing Society/ Heuristic Search\", San Francisco, CA Monique Guignard, Michael Bussieck, Alexander Meeraus, Fred O'Brien, Siqun Wang; A Student-centric Class and Exam Scheduling System at West Point. GOR Workshop October 2005, \"Optimization under Uncertainty\", Bad Honnef, Germany Franz Neli\u0026szlig;en; Optimization under Uncertainty using GAMS: Success Stories and some Frustrations Alex Meeraus, Thomas F. Rutherford; Mixed Complementarity Formulations of Stochastic Equilibrium Models with Recourse ICS January 2005, Session \"Network Services and Communication Standards\", Annapolis, MD Steven P. Dirkse; Linking GAMS to Solvers using the COIN-OR Open Solver Interface ICS January 2005, Session \"SolverSession II - Integer/Combinatorial\", Annapolis, MD Ignacio E. 
Grossmann; Computational Experience Solving Disjunctive Programming Problems with LOGMIP ICS January 2005, Session \"Quality\", Annapolis, MD Armin Pruessner; Software Quality Assurance for Mathematical Modeling Systems Workshop on Integer Programming and Continuous Optimization, November 2004, Chemnitz, Germany Marc C. Steinbach; NLP Reformulation of MINLP under Nonlinear Network Dynamics. Sven Leyffer; A Survey of Mixed Integer Nonlinear Optimization. Alexander Martin; Approximation of non-linear functions in mixed integer programming. Mohit Tawarmalani; Convexification and Global Optimization of Nonlinear Programs. INFORMS October 2004, Session \"COIN-OR: Open-Source Software for OR\", Denver Michael Bussieck, Steven Dirkse; Linking GAMS to Solvers Using COIN-OSI. INFORMS October 2004, Session \"Software Presentations\", Denver Armin Pruessner, Alexander Meeraus; GAMS: A High Performance Modeling System for Large-Scale Modeling Applications. Also see the accompanying handout. GOR Workshop on \"Mathematical Optimization Services in Europe\", October 2004, Bad Honnef, Germany Michael Bussieck, Franz Neli\u0026szlig;en; Models and Their Roles GOR/NGB Joint Conference September 2004, Tilburg, The Netherlands Stephan Eidt, Franz Neli\u0026szlig;en; Quality Assurance and Algebraic Modeling Systems CORS/INFORMS Joint International Meeting May 2004, Banff, Canada Leon Lasdon; Global Optimization and the GAMS Branch-and-Cut Facility INFORMS October 2003, Session \"Optimization Software - The State of the Art\", Atlanta Armin Pruessner; Conic Programming in GAMS INFORMS October 2003, Session \"Global Optimization Software in GAMS: Performance and Applications\", Atlanta Leon Lasdon; OQNLP/GAMS: A Multi-start Approach to Global Optimization J\u0026aacute;nos D. Pint\u0026eacute;r; GAMS/LGO Solver Engine for Global and Convex Optimization Nick Sahinidis, Mohit Tawarmalani; Global Optimization with GAMS/BARON Michael Bussieck, Leon Lasdon, Nick Sahinidis,J\u0026aacute;nos D. Pint\u0026eacute;r; Global Optimization with GAMS - Applications and Performance INFORMS October 2003, Session \"Optimization Modeling in Practice I\" Steven Dirkse; Applications of MPEC Models INFORMS October 2003, Session \"Optimization Modeling and Techniques\" Alex Meeraus; Rapid Implementation of Branch-and-Cut with Heuristics using GAMS Workshop on Optimization/Modeling/Applications - 15 Years GAMS Development - 60 Years Alex Meeraus; September 2003, Washington, DC David Kendrick; Themes from GAMS in Computational Economics Jan Bisschop; Is there a Future for Modeling Systems Bruce McCarl; GAMS as a Tool for Applied Analysis: Significance, and the Future Arne Drud; Replaceable Solvers in GAMS Alex Meeraus; Past, Present, and Future Lloyd Kelly; The Risks, They Just Keep On Coming Sherman Robinson; Computable General Equilibrium (CGE) Modeling in GAMS: History and Current State of the Art Michael C. Ferris; Optimization of Gamma Knife Radiosurgery Alan Manne; Integrated Assessment for Global Climate Change GAMS Global Optimization Workshop, September 2003, Washington, DC Nick V. Sahinidis; Global Optimization with Baron J\u0026aacute;nos D. Pint\u0026eacute;r; Global Optimization with GAMS/LGO: Introduction, Usage, and Applications Leon S. 
Lasdon; OQNLP: a Scatter Search Multistart Approach for Solving Constrained Non-Linear Global Optimization Problems John Chinneck; MProbe: Mathematical Program Probe GOTI2003 September, Argonne, IL Steven Dirkse; Challenges in Bringing Global Optimization to the Marketplace OR2003 September, Heidelberg, Germany Franz Nelissen; Mathematical Optimization in Finance: Closing the gap ISMP August 2003, Copenhagen, Denmark, Session \"NLP software - state of the art I\" Arne S. Drud; Detecting Unboundedness in Practical Nonlinear Models Steven P. Dirkse; Mathematical Programs with Equilibrium Constraints: Automatic Reformulation and Solution via Constrained Optimization Michael R. Bussieck, Leon S. Lasdon, J\u0026aacute;nos D. Pint\u0026eacute;r, Nick V. Sahinidis; Global Optimization with GAMS - Applications and Performance ISMP August 2003, Copenhagen, Denmark, Session \"NLP software - state of the art II\" Armin Pruessner, Hans D. Mittelmann; Automated Performance Analysis in the Evaluation of Nonlinear Programming Solvers IEEE Bologna Power Tech, June 2003 Nicole Gr\u0026ouml;we-Kuska, Holger Heitsch, Werner R\u0026ouml;misch; Scenario Reduction and Scenario Tree Construction for Power Management Problems; GOR Workshop on Modeling Languages April 23-25 2003, Bad Honnef Josef Kallrath; Introduction: Models, Model Building, and Mathematical Optimization - The Importance of Modeling Languages for Solving Real-World Problems Hermann Schichl; Modeling Languages and Global Optimization Sofiane Oussedik; ILOG OPL Studio - Technical Overview Jan Bisschop; AIMMS - An All-Round Development Environment Robert Fourer, David Gay; AMPL - A Modeling Language for Mathematical Programming M.R. Bussieck, A. Meeraus; GAMS - General Algebraic Modeling System Bob Daniel; XPRESS - Mosel with an additional handout Klaus Schittkowski; PCOMP - A Modeling Language for Nonlinear Programs with Automatic Differentiation INFORMS November 2002, Session \"Benchmarking \u0026 Performance Testing of Optimization Software\" Armin Pruessner; Automated Performance Testing and Analysis Hans D. Mittelmann; Benchmarking Large-Scale Optimization Software Arne S. Drud; Testing and Tuning a new Solver Version using Performance Tests INFORMS November 2002, Session \"Advances in Algebraic Modeling\" Alex Meeraus; Who Needs the GAMS World? Arne S. Drud; Novel Problem Types and Algorithms Thomas Rutherford; Economic Equilibrium Analysis with GAMS/MPSGE Hans D. Mittelmann; An Online Forum for Performance Testing INFORMS November 2002, Session \"New Initiatives in Global Optimization\" Michael R. Bussieck, Steven P. Dirkse, Alexander Meeraus, Armin Pruessner; Quality Assurance and Global Optimization; (PAVER results for the Example) Cocos'02 (Global Constrained Optimization and Constraint Satisfaction), October 2002, France Nick Sahinidis; Global Optimization and Constraint Satisfaction: The Branch-and-Reduce Approach Michael R. Bussieck, Arne S. Drud, Alexander Meeraus; Quality Assurance and Global Optimization; (PAVER results for Example 2) ExxonMobil Optimization and Logistics Mini-Symposium, Annandale, NJ, August 2002 Michael R. Bussieck, Alexander Meeraus; General Algebraic Modeling System: GAMS ESCAPE-12 (European Symposium on Computer Aided Process Engineering), 26 - 29 May 2002, The Hague, The Netherlands Y. Ota, K. Namatame, H. Hamataka, K. Nakagawa and H. Abe (Japan), A. Cervantes, I.B. Tjoa and F. 
Valli (USA); Award Winning Presentation: Optimization of Naphtha Feedstock Blending for Integrated Olefins-Aromatics Plant Production Scheduling SIAM Optimization Meeting, May 2002, Toronto Arne S. Drud; On the Use of Second Order Information in GAMS/CONOPT3 INFORMS November 2001, Session \"Quadratic Assignment \u0026 Related Problems\" Michael R. Bussieck, Monique Guignard, Siqun Wang; Modeling with Quadratic Constraints to Improve Exam Timetabling Solutions OR 2001, Section \"Energy and Environment\", Prof. Dr. R\u0026uuml;diger Schultz, Gerhard-Mercator-University of Duisburg, Dr. Wilfried Pohl, RWE Systems AG Gary A. Goldstein, Ad Seebregts; Energy/Environmental Modeling with the Markal Family of Models Gary A. Goldstein, Uwe Remme, Ulrich Schellmann, Christoph Schlenzig; MESAP/Times-Advanced Decision Support for Energy and Environmental Planning OR 2001, Section \"Continuous Optimization\", Prof. Dr. Friedrich Juhnke, Otto-von-Guericke-University of Magdeburg, Prof. Dr. Florian Jarre, Heinrich-Heine-University of D\u0026uuml;sseldorf Michael R. Bussieck, Arne Drud; SBB: A New Solver for Mixed Integer Nonlinear Programming Arne Drud; Solving NLP Models in a Branch \u0026 Bound Context Steven Dirkse, Michael C. Ferris; Solving NLP Models as Complementary Problems Michael R. Bussieck, Alexander Meeraus; Global Optimization Initiative Michael C. Ferris, Jeffrey Horn; Automatic Conversion of Nonlinear Programs to Complementary Problems Michael C. Ferris, Meta Voelker; Slice Models in GAMS OR 2001, Section \"Discrete and Combinatorial Optimization\", Prof. Dr. Alexander Martin, Technical University of Darmstadt, Prof. Dr. Peter Brucker, University of Osnabr\u0026uuml;ck Alexander Meeraus, Frederick P. O'Brien; Class Scheduling at the USMA at West Point Michael R. Bussieck, Frederick P. O'Brien; Term End Exam Scheduling Michael C. Ferris, David M. Shepard; Optimization of Gamma Knife Radiosurgery INFORMS May 2001, \"Optimizing the Extended Enterprise in the New Economy\" Jos\u0026eacute; Vicente Caixeta-Filho, Jan Maarten van Swaay-Neto, Antonio de P\u0026aacute;dua Wagemaker; Optimization of the Production Planning and Trade of Lily Flowers at Jan de Wit Co. INFORMS Fall 2000, Session \"Recent Advances in Nonlinear Mixed Integer Optimization\" Ignacio E. Grossmann, Aldo Vecchietti; Recent Developments in Disjunctive Programming Michael R. Bussieck, Arne Drud; SBB: A New Solver for Mixed Integer Nonlinear Programming Mohit Tawarmalani, Nickolaos V. Sahinidis; Mixed Integer Nonlinear Programs; Theory, Algorithms and Applications INFORMS Fall 2000, Session \"Optimization and Modeling for Human Experts\" Fred O'Brien, Michael R. Bussieck, Alexander Meeraus; Academic Scheduling at United States Military Academy West Point, NY David M. Shepard, Michael C. Ferris; Computerized Treatment Planning for Stereotactic Radiosurgery Michael R. Bussieck, Lloyd R. Kelly; Blending Data, Models and Human Expertise INFORMS Fall 2000, Session \"Novel Applications in the Energy Industry\" Gary Goldstein; Assessing Energy/Economy/Environment Interaction Using the MARKAL Family of Models Lloyd R. Kelly; The Utility Fuel Economics - National Power Model Forecasting System Josef H. 
Bogensperger; Weekly Asset Portfolio Management in a Hydro-Thermal Power System ICCP 1999, Madison Steven Dirkse; Complementarity at GAMS Development SP 1998, Vancouver, Canada Steven Dirkse; Stochastic Programming using GAMS ","excerpt":"\u003ch1\u003eGAMS Presentations\u003c/h1\u003e\n\u003cul\u003e\n\u003cli\u003eINFORMS Annual Meeting 2019\n\u003cul\u003e\n\u003cli\u003eMichael Bussieck; \u003cstrong\u003e\u003ca href=\"GAMS_HPC_INFORMS2019.pdf\"\u003eSolving Large-Scale GAMS Models on HPC platforms\u003c/a\u003e\n\u003c/strong\u003e\n\u003cbr\u003e\u003cbr\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003eOR2019 in Dresden\n\u003cul\u003e\n\u003cli\u003eFrederik Fiand; \u003cstrong\u003e\u003ca href=\"GAMS_HPC_OR2019.pdf\"\u003eSolving Large-Scale GAMS Models on HPC platforms\u003c/a\u003e\n\u003c/strong\u003e\u003c/li\u003e\n\u003cli\u003eRobin Schuchmann \u0026amp; Lutz Westermann; \u003cstrong\u003e\u003ca href=\"MIRO_talk_dresden.pdf\"\u003eGAMS MIRO – An interactive web interface\u003c/a\u003e\n\u003c/strong\u003e\n\u003cbr\u003e\u003cbr\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e2019 INFORMS Business Analytics Conference in Austin TX\n\u003cul\u003e\n\u003cli\u003eFranz Nelissen \u0026amp; Frederik Proske; \u003cstrong\u003e\u003ca href=\"Technology_Workshop_Austin_2019.pdf\"\u003eDeploying GAMS Models with GAMS MIRO (Technology Workshop)\u003c/a\u003e\n\u003c/strong\u003e\n\u003cbr\u003e\u003cbr\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003e2019 CAPD Annual Review Meeting (Carnegie Mellon) in Pittsburgh, PA, USA\n\u003cul\u003e\n\u003cli\u003eFrederik Proske; \u003cstrong\u003e\u003ca href=\"2019_CAPD_GAMS-MIRO_FP.pdf\"\u003eModel deployment in GAMS \u003c/a\u003e\n\u003c/strong\u003e\n\u003cbr\u003e\u003cbr\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003eIEWT 2019\n\u003cul\u003e\n\u003cli\u003eHermann von Westerholt; \u003cstrong\u003e\u003ca href=\"2019_IEWT.pdf\"\u003eSolving Large-Scale Energy Systems Models\u003c/a\u003e\n\u003c/strong\u003e\n\u003cbr\u003e\u003cbr\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003eINFORMS Annual Meeting 2018, November 4-7, Phoenix, US\n\u003cul\u003e\n\u003cli\u003eSteve Dirkse; \u003cstrong\u003e\u003ca href=\"2018_informs_phoenix_Enhanced-Model-Dep_SD.pdf\"\u003eEnhanced Model Deployment and Solution in GAMS\u003c/a\u003e\n\u003c/strong\u003e\u003c/li\u003e\n\u003cli\u003eSteve Dirkse and Lutz Westermann; \u003cstrong\u003e\u003ca href=\"2018_informs_phoenix_GAMS-introduction_SD_LW.pdf\"\u003eGAMS - An Introduction\u003c/a\u003e\n\u003c/strong\u003e\n\u003cbr\u003e\u003cbr\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003eINREC 2018, September 24-25, Essen, Germany\n\u003cul\u003e\n\u003cli\u003eFrederik Fiand; \u003cstrong\u003e\u003ca href=\"2018_INREC_Solving-Large-Scale-ESM_FF.pdf\"\u003eSolving Large-Scale Energy System Models\u003c/a\u003e\n\u003c/strong\u003e\n\u003cbr\u003e\u003cbr\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003eOR2018, September 12-14 2018, Brussels, Belgium\n\u003cul\u003e\n\u003cli\u003eLutz Westermann; \u003cstrong\u003e\u003ca href=\"2018_OR_GAMS-and-EMD_LW.pdf\"\u003eEnhanced Model Deployment in GAMS\u003c/a\u003e\n\u003c/strong\u003e\u003c/li\u003e\n\u003cli\u003eFrederik Fiand; \u003cstrong\u003e\u003ca href=\"2018_OR_GAMS-and-HPC_FF.pdf\"\u003eGAMS and High-Performance 
Computing\u003c/a\u003e\n\u003c/strong\u003e\n\u003cbr\u003e\u003cbr\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003eOR2018 Pre-Workshop, September 11 2018, Brussels, Belgium\n\u003cul\u003e\n\u003cli\u003eFrederik Fiand and Lutz Westermann; \u003cstrong\u003e\u003ca href=\"2018_OR_GAMS-General_LW.pdf\"\u003eGAMS – An Introduction\u003c/a\u003e\n\u003c/strong\u003e\n\u003cbr\u003e\u003cbr\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003eINFORMS International Meeting, June 17-20 2018, Taipei, Taiwan\n\u003cul\u003e\n\u003cli\u003eFrederik Proske; \u003cstrong\u003e\u003ca href=\"2018_informs_taipei_FP.pdf\"\u003eExam scheduling at United States Military Academy West Point\u003c/a\u003e\n\u003c/strong\u003e\u003c/li\u003e\n\u003cli\u003eFranz Nelissen; \u003cstrong\u003e\u003ca href=\"2018_informs_taipei_FN.pdf\"\u003eComputing in the Cloud and High Performance Computing with GAMS\u003c/a\u003e\n\u003c/strong\u003e\n\u003cbr\u003e\u003cbr\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003eINFORMS Annual Meeting, October 22-25 2017, Houston, TX\n\u003cul\u003e\n\u003cli\u003eMichael Bussieck, Steve Dirkse, Fred Fiand, Lutz Westermann; Pre-Conference Workshops\n\u003cul\u003e\n\u003cli\u003ePart I: \u003cstrong\u003e\u003ca href=\"informs2017_workshop1_introduction.pdf\"\u003eAn Introduction to GAMS\u003c/a\u003e\n\u003c/strong\u003e\u003c/li\u003e\n\u003cli\u003ePart II: \u003cstrong\u003e\u003ca href=\"informs2017_workshop2_stochastic_programming.pdf\"\u003eStochastic Programming in GAMS\u003c/a\u003e\n\u003c/strong\u003e\u003c/li\u003e\n\u003cli\u003ePart III: \u003cstrong\u003e\u003ca href=\"informs2017_workshop3_OO_APIs.pdf\"\u003eThe GAMS Object-Oriented API\u0026rsquo;s\u003c/a\u003e\n\u003c/strong\u003e\u003c/li\u003e\n\u003cli\u003ePart IV: \u003cstrong\u003e\u003ca href=\"informs2017_workshop4_code_embedding.pdf\"\u003eCode Embedding in GAMS\u003c/a\u003e\n\u003c/strong\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003eLutz Westermann; \u003cstrong\u003e\u003ca href=\"informs2017_EmbeddedCode.pdf\"\u003eEmbedded Code in GAMS - Using Python as an Example\u003c/a\u003e\n\u003c/strong\u003e\u003c/li\u003e\n\u003cli\u003eFred Fiand \u0026amp; Michael Bussieck; \u003cstrong\u003e\u003ca href=\"informs2017_HPC_with_GAMS.pdf\"\u003eHigh Performance Computing with GAMS\u003c/a\u003e\n\u003c/strong\u003e\n\u003cbr\u003e\u003cbr\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003cli\u003eInternational Conference on Operations Research, September 06-08, 2017, Berlin, Germany\u003c/li\u003e\n\u003c/ul\u003e\n\u003cul\u003e\n\u003cli\u003eFrederik Proske, Robin Schuchmann; \u003cstrong\u003e\u003ca href=\"OR2017_USMA_TEE_PRES.pdf\"\u003eExam scheduling at United States Military Academy West Point\u003c/a\u003e\n\u003c/strong\u003e\u003c/li\u003e\n\u003cli\u003eLutz Westermann; \u003cstrong\u003e\u003ca href=\"OR2017_Berlin_Talk_LW.pdf\"\u003eEmbedded Code in GAMS – Using Python as an Example\u003c/a\u003e\n\u003c/strong\u003e\u003c/li\u003e\n\u003cli\u003eFred Fiand, Michael Bussieck; \u003cstrong\u003e\u003ca href=\"OR2017_Berlin_Talk_FF.pdf\"\u003eHigh Performance Computing with GAMS\u003c/a\u003e\n\u003c/strong\u003e\u003c/li\u003e\n\u003cli\u003eFranz Nelissen; \u003cstrong\u003e\u003ca href=\"OR2017_Berlin_Talk_FN.pdf\"\u003eA distributed Optimization Bot/Agent Application Framework for GAMS Models\u003c/a\u003e\n\u003c/strong\u003e\u003c/li\u003e\n\u003cli\u003eFred Fiand, Franz Nelissen, Lutz 
Westermann; \u003ca href=\"OR2017_Berlin_WS.pdf\"\u003ePre-Conference Workshops\u003c/a\u003e\n\n\u003cul\u003e\n\u003cli\u003ePart I: An Introduction to GAMS\u003c/li\u003e\n\u003cli\u003ePart II: Stochastic programming in GAMS\u003c/li\u003e\n\u003cli\u003ePart III: The GAMS Object-Oriented API\u0026rsquo;s\u003c/li\u003e\n\u003cli\u003ePart IV: Code embedding in GAMS\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"OR2017_examples_WS.zip\"\u003eexamples.zip\u003c/a\u003e\n\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003e\u003ca name=\"OR 2016\"\u003e\u003c/a\u003e\u003c/p\u003e","ref":"/archives/presentations/","title":"GAMS Presentations"},{"body":"Academic GAMS Specialists ","excerpt":"\u003ch2 id=\"academic-gams-specialists\"\u003eAcademic GAMS Specialists\u003c/h2\u003e","ref":"/specialists/","title":"GAMS Specialists"},{"body":"GAMS users worldwide use a mailing list named GAMS-L to exchange information about GAMS. There are two possible ways to subscribe:\nUse a web interface to join and leave the list and to browse the list archive .\nSend a message of the form:\nSubject:is ignored subscribe GAMS-L your_first_name your_last_name info help ? to listserv@listserv.dfn.de , where the info and help lines are optional but useful. Once you have successfully subscribed, you can send messages to all GAMS-L subscribers by sending your mail to GAMS-L@listserv.dfn.de .\nIf you want to sign off (unsubscribe) from the list, send a message with a body of the form:\nsignoff GAMS-L to listserv@listserv.dfn.de or use the web interface.\nAll the messages which have been sent to the GAMS-Mailing list are archived:\nThe old archive (600k) contains a selection of the questions and answers, which have been submitted to the GAMS mailing list until Spring 1998. Starting from January, 1998 the new archive is directly generated and maintained by the list server program. Note: The GAMS-L is completely independent from any official GAMS technical support and GAMS does not take any responsibility for the given answers.\n","excerpt":"\u003cp\u003eGAMS users worldwide use a mailing list named GAMS-L to exchange information about GAMS. There are two possible ways to subscribe:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003e\n\u003cp\u003eUse a \u003ca href=\"https://www.listserv.dfn.de/sympa/subscribe/gams-l\" target=\"_blank\"\u003eweb interface\u003c/a\u003e\n to join and leave the list and to \u003ca href=\"https://www.listserv.dfn.de/sympa/arc/gams-l\" target=\"_blank\"\u003ebrowse the list archive\u003c/a\u003e\n.\u003c/p\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003cp\u003eSend a message of the form:\u003c/p\u003e\n\u003c/li\u003e\n\u003c/ol\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003eSubject:is ignored\n subscribe GAMS-L your_first_name your_last_name\n info\n help ?\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eto \u003ca href=\"mailto:listserv@listserv.dfn.de\"\u003elistserv@listserv.dfn.de\u003c/a\u003e\n, where the info and help lines are optional but useful. 
Once you have successfully subscribed, you can send messages to all GAMS-L subscribers by sending your mail to \u003ca href=\"mailto:GAMS-L@listserv.dfn.de\"\u003eGAMS-L@listserv.dfn.de\u003c/a\u003e\n.\u003c/p\u003e","ref":"/maillist/","title":"GAMS-L"},{"body":"Title Publications Hobbies ","excerpt":"\u003ch1 id=\"title\"\u003eTitle\u003c/h1\u003e\n\u003ch4 id=\"publications\"\u003ePublications\u003c/h4\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/ggarg/","title":"Girish"},{"body":"Impressum Angaben gemäß § 5 TMG: GAMS Software GmbH Augustinusstr. 11b 50226 Frechen Deutschland\nVertreten durch: Dr. Michael Bussieck,\nDr. Franz Nelißen\nKontakt: Telefon: 0221 949-9170 E-Mail: sales@gams.com Registereintrag: Eintragung im Handelsregister. Registergericht: Amtsgericht Köln Registernummer: HRB 32878\nUmsatzsteuer: Umsatzsteuer-Identifikationsnummer gemäß §27 a Umsatzsteuergesetz: DE811975677\nStreitschlichtung Die Europäische Kommission stellt eine Plattform zur Online-Streitbeilegung (OS) bereit: https://ec.europa.eu/consumers/odr . Unsere E-Mail-Adresse finden Sie oben im Impressum.\nWir sind nicht bereit oder verpflichtet, an Streitbeilegungsverfahren vor einer Verbraucherschlichtungsstelle teilzunehmen.\nHaftung für Links Unser Angebot enthält Links zu externen Webseiten Dritter, auf deren Inhalte wir keinen Einfluss haben. Deshalb können wir für diese fremden Inhalte auch keine Gewähr übernehmen. Für die Inhalte der verlinkten Seiten ist stets der jeweilige Anbieter oder Betreiber der Seiten verantwortlich. Die verlinkten Seiten wurden zum Zeitpunkt der Verlinkung auf mögliche Rechtsverstöße überprüft. Rechtswidrige Inhalte waren zum Zeitpunkt der Verlinkung nicht erkennbar.\nEine permanente inhaltliche Kontrolle der verlinkten Seiten ist jedoch ohne konkrete Anhaltspunkte einer Rechtsverletzung nicht zumutbar. Bei Bekanntwerden von Rechtsverletzungen werden wir derartige Links umgehend entfernen.\nSite Notice Information provided according to Sec. 5 German Telemedia Act (TMG): GAMS Software GmbH Augustinusstr. 11b 50226 Frechen Germany\nRepresented by: Dr. Michael Bussieck,\nDr. Franz Nelißen\nContact: Telephone: +49 221 949-9170 Email: sales@gams.com Register entry: Entry in the Handelsregister. Registering court:Amtsgericht Köln Registration number: HRB 32878\nVAT: VAT Id number according to Sec. 27 a German Value Added Tax Act: DE811975677\nDispute resolution The European Commission provides a platform for online dispute resolution (OS): https://ec.europa.eu/consumers/odr . Please find our email in the impressum/legal notice.\nWe do not take part in online dispute resolutions at consumer arbitration boards.\nLinks on external Websites Contents of external websites on which we are linking direct or indirect (through „hyperlinks“ or „deeplinks“) are beyond our responsibility and are not adopted as our own content. When the links were published, we didn’t have knowledge of any illegal activities or contents on these websites. Since we do not have any control on the contents of these websites, we distance ourselves from all contents of all linked websites, which were updated after the setting of the links. For all contents and especially damages, resulting of the use of the linked websites, only the provider of these linked websites can be held liable. 
If we receive knowledge of illegal contents on these linked websites, we will delete the according links.\n","excerpt":"\u003ch2 id=\"impressum\"\u003eImpressum\u003c/h2\u003e\n\u003ch3 id=\"angaben-gemäß--5-tmg\"\u003eAngaben gemäß § 5 TMG:\u003c/h3\u003e\n\u003cp\u003eGAMS Software GmbH\nAugustinusstr. 11b\n50226 Frechen\nDeutschland\u003c/p\u003e\n\u003ch3 id=\"vertreten-durch\"\u003eVertreten durch:\u003c/h3\u003e\n\u003cp\u003eDr. Michael Bussieck,\u003cbr\u003e\nDr. Franz Nelißen\u003c/p\u003e\n\u003ch3 id=\"kontakt\"\u003eKontakt:\u003c/h3\u003e\n\u003cp\u003eTelefon: 0221 949-9170\nE-Mail: \u003ca href=\"mailto:sales@gams.com\"\u003esales@gams.com\u003c/a\u003e\n\u003c/p\u003e\n\u003ch3 id=\"registereintrag\"\u003eRegistereintrag:\u003c/h3\u003e\n\u003cp\u003eEintragung im Handelsregister.\nRegistergericht: Amtsgericht Köln\nRegisternummer: HRB 32878\u003c/p\u003e\n\u003ch3 id=\"umsatzsteuer\"\u003eUmsatzsteuer:\u003c/h3\u003e\n\u003cp\u003eUmsatzsteuer-Identifikationsnummer gemäß §27 a Umsatzsteuergesetz:\nDE811975677\u003c/p\u003e\n\u003ch3 id=\"streitschlichtung\"\u003eStreitschlichtung\u003c/h3\u003e\n\u003cp\u003eDie Europäische Kommission stellt eine Plattform zur Online-Streitbeilegung (OS) bereit: \u003ca href=\"https://ec.europa.eu/consumers/odr\" target=\"_blank\"\u003ehttps://ec.europa.eu/consumers/odr\u003c/a\u003e\n.\nUnsere E-Mail-Adresse finden Sie oben im Impressum.\u003c/p\u003e","ref":"/impressum/","title":"Impressum / Legal"},{"body":"Our Commitment to Information Security At GAMS, we prioritize the security of our information and the trust of our clients and partners. In recognition of our stringent security practices, we are proud to have acquired the certification of GAMS Software GmbH under ISO 27001:2022 in the scope of\nDevelopment and operation of algebraic modeling systems as well as consulting and technical support for algebraic modeling systems.\nISO27001 certificate (April 10, 2025) Verify our certification Our Information Security Management System (ISMS) Our commitment to information security is embodied by our comprehensive Information Security Management System (ISMS), established in accordance with ISO 27001:2022 standards. This framework ensures that we consistently identify, evaluate, and address security risks, safeguarding sensitive information against threats.\nKey components of this system are: Employee Training We believe that a well-informed team is our first line of defense in maintaining security standards. All employees at GAMS undergo regular training on our security protocols and best practices to ensure everyone is equipped to protect our systems and data.\nRegular Audits To maintain and improve our security standards, we conduct regular audits of our systems and processes. 
These audits are integral to our continuous improvement efforts, ensuring compliance with our security policies and the latest industry standards.\nCore Tools and Technologies We utilize a suite of state of the art tools and technologies to manage and protect our information, including:\nGitLab: For secure code management and collaboration, including continuous integration for automated static and dynamic application security testing\nVaultwarden: An open-source password management solution that helps us securely store and manage access credentials.\nEmail Services: Hosted by Google Workspace with enforced MFA, ensuring secure and reliable communication.\nCloud Infrastructure Management: We use AWS infrastructure to deploy and operate scalable and secure applications across multiple availability zones. All AWS accounts are fortified using hardware-based MFA (YubiKey), providing an additional layer of security against unauthorized access.\nAt GAMS, we are committed to maintaining the highest standards of information security.\nMore information about security aspects of our Engine SaaS cloud service is available as part of our documentation .\n","excerpt":"\u003ch1 id=\"our-commitment-to-information-security\"\u003eOur Commitment to Information Security\u003c/h1\u003e\n\u003cp\u003eAt GAMS, we prioritize the security of our information and the trust of our clients and partners. In recognition of our stringent security practices, we are proud to have acquired the certification of GAMS Software GmbH under ISO 27001:2022 in the scope of\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eDevelopment and operation of algebraic modeling systems as well as consulting and technical support for algebraic modeling systems\u003c/strong\u003e.\u003c/p\u003e","ref":"/about/infosec/","title":"Information Security"},{"body":"Students: Be decisive about your future… join GAMS At GAMS, you get the chance to work with and learn from some of the brightest minds in the mathematical modeling industry.\nFor advanced students about to finish their education, we offer exciting full-time or part-time internships and thesis projects. We provide a challenging, dynamic, and innovative work environment where freedom and responsibility go hand in hand.\nEvery year positions are made available to students from all kinds of backgrounds, strengths, and interests.\nAn internship, graduation assignment, or student job allows you to familiarize yourself with our technology, our business, and the current issues driving and shaping the market of modeling technology. Students at GAMS learn first hand how we solve the technical challenges of building cutting-edge technology and get a taste of what it is like to work in a multidisciplinary project responsible for gathering, testing, and delivering the solution.\nIf you are currently a MSc student and you are interested in a challenging and rewarding internship, graduation assignment, or student job with GAMS, please send an email to internships@gams.com with a link to your LinkedIn profile (or another conclusive social media or online profile).\n","excerpt":"\u003ch2 id=\"students-be-decisive-about-your-future-join-gams\"\u003eStudents: Be decisive about your future… join GAMS\u003c/h2\u003e\n\u003cp\u003eAt GAMS, you get the chance to work with and learn from some of the brightest minds in the mathematical modeling industry.\u003c/p\u003e\n\u003cp\u003eFor advanced students about to finish their education, we offer exciting full-time or part-time internships and thesis projects. 
We provide a challenging, dynamic, and innovative work environment where freedom and responsibility go hand in hand.\u003c/p\u003e","ref":"/about/internships/","title":"Internships at GAMS"},{"body":"","excerpt":"","ref":"/team/jhasselbring/","title":"Janina"},{"body":"Title Publications Hobbies ","excerpt":"\u003ch1 id=\"title\"\u003eTitle\u003c/h1\u003e\n\u003ch4 id=\"publications\"\u003ePublications\u003c/h4\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/jparnjai/","title":"Jeed"},{"body":"Title Publications Hobbies ","excerpt":"\u003ch1 id=\"title\"\u003eTitle\u003c/h1\u003e\n\u003ch4 id=\"publications\"\u003ePublications\u003c/h4\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/jmonki/","title":"Jesko"},{"body":"Job Openings Currently there are no open positions at GAMS.\n","excerpt":"\u003ch2 id=\"job-openings\"\u003eJob Openings\u003c/h2\u003e\n\u003cp\u003eCurrently there are no open positions at GAMS.\u003c/p\u003e","ref":"/about/openings/","title":"Job openings"},{"body":"","excerpt":"","ref":"/team/jbroihan/","title":"Justine"},{"body":"","excerpt":"","ref":"/team/kbestuzheva/","title":"Ksenia"},{"body":"Title Publications Hobbies ","excerpt":"\u003ch1 id=\"title\"\u003eTitle\u003c/h1\u003e\n\u003ch4 id=\"publications\"\u003ePublications\u003c/h4\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/lsantos/","title":"Lleny"},{"body":"","excerpt":"","ref":"/team/lrandolph/","title":"Logan"},{"body":"","excerpt":"","ref":"/specialization/lp/","title":"LP"},{"body":"Title Publications Hobbies ","excerpt":"\u003ch1 id=\"title\"\u003eTitle\u003c/h1\u003e\n\u003ch4 id=\"publications\"\u003ePublications\u003c/h4\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/lwestermann/","title":"Lutz"},{"body":"Manisha Ukwatte Hobbies ","excerpt":"\u003ch1 id=\"manisha-ukwatte\"\u003eManisha Ukwatte\u003c/h1\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/mukwatte/","title":"Manisha"},{"body":"Marius Bützler Marius joined GAMS Software GmbH in August 2018. Since 2024 he has taken care of internal projects related to our GAMS website as well as our internal AI projects and prototype development, which support internal processes in the sales and marketing area. In 2019, Marius also started a Bachelor\u0026rsquo;s degree in Business Information Management in Cologne, which he completed in 2023.\n","excerpt":"\u003ch1 id=\"marius-bützler\"\u003eMarius Bützler\u003c/h1\u003e\n\u003cdiv class=\"container\"\u003e\n \u003cdiv class=\"row align-items-center\"\u003e\n \n\n\n \u003cdiv class=\"col-md-9\"\u003e\n\n \u003cp\u003eMarius joined GAMS Software GmbH in August 2018. 
Since 2024 he takes care internal projects related to our GAMS website and he takes care of our internal AI-projects and prototype development to support internal processes for the sales and marketing area.\nIn 2019, Marius also started the Bachelor\u0026rsquo;s degree in Business Information Management in Cologne that he finished in 2023.\u003c/p\u003e","ref":"/team/mbuetzler/","title":"Marius"},{"body":"","excerpt":"","ref":"/team/mgallia/","title":"Mateo"},{"body":"Maurice Jansen Hobbies ","excerpt":"\u003ch1 id=\"maurice-jansen\"\u003eMaurice Jansen\u003c/h1\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/mjansen/","title":"Maurice"},{"body":"The Bruce McCarl Newsletter Archive Newsletter Number Publish Date Download Link 50 Mar 2025 Download PDF 49 Mar 2024 Download PDF 48 May 2023 Download PDF 47 Feb 2022 Download PDF 46 Feb 2021 Download PDF 45 Jun 2020 Download PDF 44 Nov 2019 Download PDF 43 Feb 2019 Download PDF 42 Mar 2018 Download PDF Download Examples 41 Jul 2017 Download PDF Download Examples 40 May 2017 Download PDF Download Examples 39 Jun 2016 Download PDF Download Examples 38 Mar 2016 Download PDF 37 Jul 2015 Download PDF 36 May 2015 Download PDF 35 Jul 2014 Download PDF Download Examples 34 Apr 2014 Download PDF 33 Jul 2013 Download PDF 32 Apr 2013 Download PDF 31 Jun 2012 Download PDF 30 Feb 2011 Download PDF 29 Jul 2010 Download PDF 28 Apr 2010 Download PDF 27 Jun 2009 Download PDF 26 Dec 2008 Download PDF 25 Jul 2008 Download PDF 24 Apr 2008 Download PDF 23 Feb 2008 Download PDF 22 Oct 2007 Download PDF 21 Mar 2007 Download PDF 20 Jul 2006 Download PDF 19 May 2006 Download PDF 18 Jun 2005 Download PDF 17 Nov 2004 Download PDF 16 Oct 2004 Download PDF 15 Feb 2004 Download PDF 14 Dec 2003 Download PDF 13 Sep 2003 Download PDF 12 May 2003 Download PDF 11 Feb 2003 Download PDF 10 Oct 2002 Download PDF 09 Jun 2002 Download PDF 08 Mar 2002 Download PDF 07 Nov 2001 Download PDF 06 Oct 2001 Download PDF 05 Aug 2001 Download PDF 04 Mar 2001 Download PDF 03 Nov 2000 Download PDF 02 Jun 2000 Download PDF 01 Apr 2000 Download PDF ","excerpt":"\u003ch1 id=\"the-bruce-mccarl-newsletter-archive\"\u003eThe Bruce McCarl Newsletter Archive\u003c/h1\u003e\n\u003ctable\u003e\n \u003cthead\u003e\n \u003ctr\u003e\n \u003cth\u003eNewsletter Number\u003c/th\u003e\n \u003cth\u003ePublish Date\u003c/th\u003e\n \u003cth\u003eDownload Link\u003c/th\u003e\n \u003c/tr\u003e\n \u003c/thead\u003e\n \u003ctbody\u003e\n \u003ctr\u003e\n \u003ctd\u003e50\u003c/td\u003e\n \u003ctd\u003eMar 2025\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_50.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e49\u003c/td\u003e\n \u003ctd\u003eMar 2024\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_49.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e48\u003c/td\u003e\n \u003ctd\u003eMay 2023\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_48.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e47\u003c/td\u003e\n \u003ctd\u003eFeb 2022\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_47.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e46\u003c/td\u003e\n \u003ctd\u003eFeb 2021\u003c/td\u003e\n \u003ctd\u003e\u003ca 
href=\"archive/mccarl_newsletter_no_46.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e45\u003c/td\u003e\n \u003ctd\u003eJun 2020\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_45.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e44\u003c/td\u003e\n \u003ctd\u003eNov 2019\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_44.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e43\u003c/td\u003e\n \u003ctd\u003eFeb 2019\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_43.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e42\u003c/td\u003e\n \u003ctd\u003eMar 2018\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_42.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e\u003c/td\u003e\n \u003ctd\u003e\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_42_examples.zip\"\u003eDownload Examples\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e41\u003c/td\u003e\n \u003ctd\u003eJul 2017\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_41.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e\u003c/td\u003e\n \u003ctd\u003e\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_41_example.zip\"\u003eDownload Examples\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e40\u003c/td\u003e\n \u003ctd\u003eMay 2017\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_40.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e\u003c/td\u003e\n \u003ctd\u003e\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_40_examples.zip\"\u003eDownload Examples\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e39\u003c/td\u003e\n \u003ctd\u003eJun 2016\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_39.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e\u003c/td\u003e\n \u003ctd\u003e\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_39_examples.zip\"\u003eDownload Examples\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e38\u003c/td\u003e\n \u003ctd\u003eMar 2016\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_38.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e37\u003c/td\u003e\n \u003ctd\u003eJul 2015\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_37.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e36\u003c/td\u003e\n \u003ctd\u003eMay 2015\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_36.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e35\u003c/td\u003e\n \u003ctd\u003eJul 2014\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_35.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n 
\u003ctd\u003e\u003c/td\u003e\n \u003ctd\u003e\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_35_examples.zip\"\u003eDownload Examples\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e34\u003c/td\u003e\n \u003ctd\u003eApr 2014\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_34.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e33\u003c/td\u003e\n \u003ctd\u003eJul 2013\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_33.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e32\u003c/td\u003e\n \u003ctd\u003eApr 2013\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_32.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e31\u003c/td\u003e\n \u003ctd\u003eJun 2012\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_31.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e30\u003c/td\u003e\n \u003ctd\u003eFeb 2011\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_30.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e29\u003c/td\u003e\n \u003ctd\u003eJul 2010\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_29.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e28\u003c/td\u003e\n \u003ctd\u003eApr 2010\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_28.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e27\u003c/td\u003e\n \u003ctd\u003eJun 2009\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_27.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e26\u003c/td\u003e\n \u003ctd\u003eDec 2008\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_26.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e25\u003c/td\u003e\n \u003ctd\u003eJul 2008\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_25.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e24\u003c/td\u003e\n \u003ctd\u003eApr 2008\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_24.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e23\u003c/td\u003e\n \u003ctd\u003eFeb 2008\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_23.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e22\u003c/td\u003e\n \u003ctd\u003eOct 2007\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_22.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e21\u003c/td\u003e\n \u003ctd\u003eMar 2007\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_21.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e20\u003c/td\u003e\n \u003ctd\u003eJul 2006\u003c/td\u003e\n \u003ctd\u003e\u003ca 
href=\"archive/mccarl_newsletter_no_20.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e19\u003c/td\u003e\n \u003ctd\u003eMay 2006\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_19.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e18\u003c/td\u003e\n \u003ctd\u003eJun 2005\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_18.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e17\u003c/td\u003e\n \u003ctd\u003eNov 2004\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_17.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e16\u003c/td\u003e\n \u003ctd\u003eOct 2004\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_16.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e15\u003c/td\u003e\n \u003ctd\u003eFeb 2004\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_15.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e14\u003c/td\u003e\n \u003ctd\u003eDec 2003\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_14.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e13\u003c/td\u003e\n \u003ctd\u003eSep 2003\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_13.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e12\u003c/td\u003e\n \u003ctd\u003eMay 2003\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_12.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e11\u003c/td\u003e\n \u003ctd\u003eFeb 2003\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_11.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e10\u003c/td\u003e\n \u003ctd\u003eOct 2002\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_10.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e09\u003c/td\u003e\n \u003ctd\u003eJun 2002\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_09.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e08\u003c/td\u003e\n \u003ctd\u003eMar 2002\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_08.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e07\u003c/td\u003e\n \u003ctd\u003eNov 2001\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_07.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e06\u003c/td\u003e\n \u003ctd\u003eOct 2001\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_06.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e05\u003c/td\u003e\n \u003ctd\u003eAug 2001\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_05.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n 
\u003ctd\u003e04\u003c/td\u003e\n \u003ctd\u003eMar 2001\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_04.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e03\u003c/td\u003e\n \u003ctd\u003eNov 2000\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_03.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e02\u003c/td\u003e\n \u003ctd\u003eJun 2000\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_02.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003ctr\u003e\n \u003ctd\u003e01\u003c/td\u003e\n \u003ctd\u003eApr 2000\u003c/td\u003e\n \u003ctd\u003e\u003ca href=\"archive/mccarl_newsletter_no_01.pdf\"\u003eDownload PDF\u003c/a\u003e\n\u003c/td\u003e\n \u003c/tr\u003e\n \u003c/tbody\u003e\n\u003c/table\u003e","ref":"/newsletter/mccarl/","title":"McCarl Newsletter Archive"},{"body":"","excerpt":"","ref":"/team/mdemirci/","title":"Merve"},{"body":"Summary of Qualifications Dr. Michael Bussieck has a PhD in Mathematics from Technical University of Braunschweig, Germany. He worked from 1998 to 2004 at GAMS Development in Washington DC, USA as a senior optimization analyst. Since 2004 he has also been a managing partner of GAMS Software GmbH, leading the GAMS software development group. In addition to his development responsibilities, he frequently engages in customer optimization projects that deliver cutting-edge optimization technology to clients from industry (energy, automotive, and chemical), the military, and government throughout Europe and the US.\nMichael has published in many first class optimization journals, frequently gives lectures at international conferences, and leads the academic outreach program at GAMS. He also serves as a member in the advisory board of the German OR Society.\nProfessional Profile 2004 – today Managing Director, GAMS Software GmbH, Braunschweig, Germany 1997 – 2004 Senior Engineer, GAMS Development Corp., Washington D.C., USA 1992 – 1997 Scientific staff, Mathematical Optimization, Technical University Braunschweig, Germany Academic Degrees Dr. rer. nat. Technical University Braunschweig, Germany, 1997. Dipl. math. Technical University Braunschweig, Germany, 1992. Publications Articles in refereed Journals and Books Optimizing Large-Scale Linear Energy System Problems with Block Diagonal Structure by Using Parallel Interior-Point Methods, joint work with Thomas Breuer, Karl-Kien Cao, Felix Cebulla, Frederik Fiand, Hans Christian Gils, Ambros Gleixner, Dmitry Khabi, Thorsten Koch, Daniel Rehfeldt, and Manuel Wetzel. Submitted for publication, 2017. High Performance Prototyping of Decomposition Methods in GAMS, joint work with Timo Lohmann, Steffen Rebennack, and Lutz Westermann. INFORMS Journal on Computing Volume: 33, Number: 1 (Winter 2021): 34-50. PAVER 2.0: An Open Source Environment for Automated Performance Analysis of Benchmarking Data, joint work with Steven P. Dirkse and Stefan Vigerske. Journal of Global Optimization, Volume 59, Issue 2-3, pp 259-275, 2013. GUSS: Solving Collections of Data Related Models within GAMS, joint work with Michael C. Ferris and Timo Lohmann. 
In Algebraic Modeling Systems – Modeling and Solving Real World Optimization Problems, J.Kallrath (Ed.), Springer, 2011 Combining QCR and CHR for Convex Quadratic Pure 0-1 Programming Problems with Linear Constraints, joint work with Aykut Ahlatçıoğlu, Mustafa Esen, Monique Guignard, Jan-Hendrik Jagla, and Alexander Meeraus. Annals of Operations Research, Volume 199, Issue 1, Page 33-49, 2012 MINLP Solver Software, joint work with Stefan Vigerske. Wiley Encyclopedia of Operations Research and Management Science, 2011 Term-End Exam Scheduling at the United States Military Academy at West Point, joint work with Alex Meeraus, Monique Guignard, Fred O\u0026rsquo;Brien, Siqun Wang. Special issue Application and Methodologies for Planning and Scheduling in Journal of Scheduling, Vol. 13, No. 4, 375-391, 2010 An Experimental Study of GAMS/AlphaECP MINLP Solver, joint work with Toni Lastusilta and Tapio Westerlund, Abo Akademi University, Finland. Ind. Eng. Chem. Res., 2009, 48 (15), pp 7337–7345 Grid Enabled Optimization with GAMS, joint work with M.C. Ferris and A. Meeraus. INFORMS Journal on Computing, Vol. 21, No. 3, 349-362, 2009 Dynamic Filters and Randomized Drivers for the Multi-start Global Optimization Algorithm MSNLP, joint work with Z. Ugray, L. Lasdon, J.C. Plummer. In Optimization Methods and Software, Vol. 24, No 4-5, 635-656, 2009 Algebraic Modeling for IP and MIP (GAMS), joint work with A.Meeraus. In Annals of Operations Research 149(1): History of Integer Programming: Distinguished Personal Notes and Reminiscences, Guest Editors: Kurt Spielberg and Monique Guignard-Spielberg, February, 2007, pp. 49-56 Software Quality Assurance for Mathematical Modeling Systems, joint work with S.Dirkse, A.Meeraus and A.Pruessner. In The Next Wave in Computing, Optimization, and Decision Technologies, Proceedings of Ninth INFORMS Computing Society Conference, B. Golden, S. Raghavan, and E. Wasil (editors), January, 2005. A fast algorithm for near optimal line plans, joint work with T.Lindner and M.E.Lübbecke. In Math. Methods Oper. Res., 59(3), 2004. General Algebraic Modeling System (GAMS), joint work with A. Meeraus. In Modeling Languages in Mathematical Optimization, J.Kallrath (Ed.), Kluwer Academic Publishers, Norwell, MA, 2004, pp.137-157 Quality Assurance and Global Optimization, joint work with A.S. Drud, A. Meeraus, and A. Pruessner. In Lecture Notes in Computer Science 2861/2003. Title: Global Optimization and Constraint Satisfaction: First International Workshop on Global Constraint Optimization and Constraint Satisfaction, COCOS 2002 Valbonne-Sophia Antipolis, France, October 2-4, 2002. Springer Verlag, Heidelberg, 2003, pp. 223-238 Scheduling Commercial Videotapes in Broadcast Television, joint work with S. Bollapragada and S. Mallik. In Operations Reseach, Vol. 52, No. 5, pp. 679-689, 2004 Mixed-Integer Nonlinear Programming, joint work with A. Pruessner. SIAG/OPT Newsletter: Views \u0026amp; News, 2003 MINLPLib - A Collection of Test Models for Mixed-Integer Nonlinear Programming, joint work with A.S. Drud and A. Meeraus. In Informs J. Comput. 15(1), 2003 Scheduling Trams in the Morning is Hard, joint work with U.Blasum, W.Hochstättler, C.Moll, H.Scheel, T.Winter. In Mathematical Methods of OR 49(1), pp.137-148, 1999. Optimal Scrap Combination for Steel Production, joint work with K.-P. Bernatzki, T.Lindner, and M.E.Lübbecke. In OR Spektrum, 20(4), pp.251-258, 1998. The Vertex Set of a 0/1-Polytope is Strongly P-Enumerable, joint work with M.E.Lübbecke. 
In Computational Geometry: Theory and Applications 11(2), pp.103-109, 1998. Discrete Optimization in Public Rail Transport, joint work with T.Winter, U.T.Zimmermann. In Mathematical Programming 79(3), pp.415-444, 1997 Linienoptimierung - Modellierung und praktischer Einsatz (in german), joint work with Matthias Krista, Klaus-D. Wiegand, and U.Zimmermann. In Mathematik - Schlüsseltechnologie für die Zukunft, Springer, pp. 595-607, 1997. Optimal Lines for Railway Systems, joint work with P.Kreuzer and U.Zimmermann. In European J. Oper. Res. 96, pp. 54-63, 1996. On Balanced Edge Connectivity and Applications to some Bottleneck Augmentation Problems in Networks. In Z. Oper. Res. 43(2), pp. 182-194, 1996. Fast Algorithms for the Maximum Convolution Problem, joint work with H.Hassler, G.J.Woeginger, and U.Zimmermann published in Oper. Res. Lett. 15, pp. 133-141, 1994. Other publications Comparison of Some High-Performance MINLP Solvers, joint work with Toni Lastusilta and Tapio Westerlund, Abo Akademi University, Finland. Extended Abstract, The Eight International Conference on Chemical \u0026amp; Process Engineering, 2007 Praxisbeispiel zur Optimierung eines Reparaturnetzwerks, joint work with Georg Dietrich and Franz Triebenbacher (Barkawi \u0026amp; Partner). Proceedings of 27. Symposium: Logistik und Mathematik Bad Honnef, 17./18. November 2005. Modeling with Quadratic Constraints to Improve Exam Timetabling Solutions, joint work with Monique Guignard, Siqun Wang. OPIM Working Paper, The Wharton School, University of Pennsylvania, No 01-07-02. Modeling Language Report, joint work with Steve Dirkse, Robert Fourer, Alex Meeraus, James Tebboth, Pierre Trudeau, Mark Wiley, David L. Woodruff published in Newsletter of the INFORMS Computing Society, Vol 23, No 1, Spring 2002. Optimal Lines in Public Rail Transport, Ph.D. thesis Discrete Optimization in Rail Transport - An extended abstract, joint work with M.E.Lübbecke, T.Winter, and U.T.Zimmermann In V. Bulatov, editor, Proceedings of 11th Baikal International School-Seminar on Optimization Methods and their Applications, pages 225-234, Irkutsk, Baikal, July 1998. Schlußbericht: Optimale Linienführung und Routenplanung in Verkehrssystemen (Schienenverkehr), (german) joint work with U.T.Zimmermann, Verfahren zum Kantenzusammenhang in gerichteten Graphen, diploma thesis ","excerpt":"\u003ch2 id=\"summary-of-qualifications\"\u003eSummary of Qualifications\u003c/h2\u003e\n\u003cp\u003eDr. Michael Bussieck has a PhD in Mathematics from Technical University of Braunschweig, Germany. He worked from 1998 to 2004 at GAMS Development in Washington DC, USA as a senior optimization analyst. Since 2004 he has also been a managing partner of GAMS Software GmbH, leading the GAMS software development group. In addition to his development responsibilities, he frequently engages in customer optimization projects that deliver cutting-edge optimization technology to clients from industry (energy, automotive, and chemical), the military, and government throughout Europe and the US.\u003c/p\u003e","ref":"/team/mbussieck/","title":"Michael"},{"body":"","excerpt":"","ref":"/team/mhorstmann/","title":"Michael"},{"body":"Model definition with the CONOPT Subroutine Library The user of the CONOPT Subroutine Library must define the model via a set of subroutines. The following short description is intended to give you an impression of the complexity of building models for the CONOPT Subroutine Library. 
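Before going through those requirements, it may help to see what the modeling-system route looks like for a small problem. The sketch below is purely illustrative (all variable, equation, and model names are invented); when a model like this is handed to CONOPT through a system such as GAMS, the sparse Jacobian pattern and the first and second derivatives discussed below are generated automatically:
* Illustrative GAMS sketch only; identifiers are made up for this example.
Variables  x1, x2, obj;
Equations  objdef, capacity;
objdef..   obj =e= sqr(x1 - 2) + sqr(x2 - 3) + x1*x2;
capacity.. x1 + 2*x2 =l= 10;
x1.lo = 0;  x2.lo = 0;
Model toy / all /;
option nlp = conopt;
solve toy using nlp minimizing obj;
The subroutine-library route described in the rest of this page replaces that automation with code the modeler writes and maintains.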
You should compare the effort described below with the work involved in developing and debugging a model via a modeling system.\nAt model setup time, the model-defining subroutines must provide detailed information about the model. This information includes:\nSeveral statistics describing the size of the model, Information on initial values and bounds for all variables, Right-hand sides and types of all constraints, The pattern of the nonzero elements of the Jacobian matrix defined in a particular sparse format (sorted by column), A classification of each Jacobian element as either constant or variable and numerical values of the constant elements, Optional pattern of the Hessian of the Lagrangian defined in sparse format, and Optional basis information, initial function values, and derivative values. During the optimization, CONOPT asks for numerical values of the nonlinear functions and their derivatives by calling user-supplied routines. The modeler must code both the nonlinear expressions and their derivatives; there is no option for numerical derivatives. The nonlinear functions must be smooth to a high accuracy and their derivatives must be consistent with the function values to a high accuracy. If these requirements are not satisfied, CONOPT may converge slowly or it may not converge at all. Models in which the nonlinear functions are based on solving sets of nonlinear equations or partial differential equations must therefore solve these sub-models to a very high accuracy. Models with interpolations between table lookups must combine these inherently non-smooth functions with some kind of smoothing such as splines. Derivatives computed from numerical differences are often not accurate enough, especially if there is some noise in the function values.\nOptional user-supplied subroutines allow the modeler to specify 2nd derivatives, either as the sparse Hessian of the Lagrangian (the matrix H) and/or as the directional 2nd derivative (the matrix-vector product H*v). CONOPT is only efficient on large models with many degrees of freedom if at least one of these optional routines is supplied by the modeler.\nOther optional subroutines allow the modeler to control tolerances and algorithmic features, to display tailored progress information and tailored solution reports, and to stop based on user interrupts.\nYou should not attempt to use the CONOPT Subroutine Library unless you have some familiarity with sparse matrices, in particular packing formats for sparse matrices. You should also have smooth functions and you should be able to derive the analytic form for the necessary derivatives. For larger models you should be able to derive and code the 2nd derivatives.\n","excerpt":"\u003ch1 id=\"model-definition-with-the-conopt-subroutine-library\"\u003eModel definition with the CONOPT Subroutine Library\u003c/h1\u003e\n\u003cp\u003eThe user of the CONOPT Subroutine Library must define the model via a set of subroutines. The following short description is intended to give you an impression of the complexity of building models for the CONOPT Subroutine Library. You should compare this with the work involved in developing and debugging a model via a modeling system.\u003c/p\u003e","ref":"/products/conopt/modeldefinition/","title":"Model definition with the CONOPT Subroutine Library"},{"body":"","excerpt":"","ref":"/team/msoyturk/","title":"Muhammet"},{"body":" Newsletters Fill out the form below to subscribe to our newsletters. 
You can choose between two flavours:\nGeneral Information This newsletter contains information about GAMS products, customer case studies, and highlights from our blog.\nBruce McCarl's GAMS Newsletter In this newsletter, long term GAMS expert and Nobel laureate Prof Bruce McCarl writes about new and noteworthy GAMS features. Bruce is very honest in his views and regularly highlights problems, bugs, or annoyances he finds with GAMS. Many of his suggestions make it into future GAMS releases.\nAn archive of Bruce's old newsletters is available here.\nYour Information * indicates required Email Address * We'll never share your email with anyone else. First Name Last Name Company Country Please select the GAMS Newsletters you would like to receive General Information Bruce McCarl's GAMS Newsletter If you like, you can read our previous newsletters.\nYou can unsubscribe at any time by clicking the link in the footer of our emails. For information about our privacy practices, please visit our website.\nWe use Mailchimp as our marketing platform. By clicking below to subscribe, you acknowledge that your information will be transferred to Mailchimp for processing. Learn more about Mailchimp's privacy practices here.\n","excerpt":"\u003cdiv class=\"full-width\"\u003e\n\t\u003cdiv class=\"jumbotron jumbotron-fluid\"\u003e\n\t\t\n\t\t\u003csection\u003e\n\t\t\t\n\t\t\t\u003ch1 class='display-4'\u003eNewsletters\u003c/h1\u003e\n\t\t\t\n\t\t\t\u003cp\u003eFill out the form below to subscribe to our newsletters. You can choose between two flavours:\u003c/p\u003e\n\t\t\t\n\t\t\t\u003cdiv class=\"card\"\u003e\n\t\t\t\t\u003cdiv class=\"card-body\"\u003e\n\t\t\t\t\t\u003ch3 class=\"card-title\"\u003eGeneral Information\u003c/h3\u003e\n\t\t\t\t\t\u003cp class=\"card-text\"\u003eThis newsletter contains information about GAMS products, customer case studies, and\n\t\t\t\t\t\thighlights from our blog.\u003c/p\u003e\n\t\t\t\t\u003c/div\u003e\n\t\t\t\u003c/div\u003e\n\t\t\t\t\n\t\t\t\u003cdiv class=\"card mt-3\"\u003e\n\t\t\t\t\u003cdiv class=\"card-body\"\u003e\n\t\t\t\t\t\u003ch3 class=\"card-title\"\u003eBruce McCarl's GAMS Newsletter\u003c/h3\u003e\n\t\t\t\t\t\u003cp class=\"card-text\"\u003eIn this newsletter, long term GAMS expert and Nobel laureate Prof Bruce McCarl writes about new and noteworthy GAMS\n\t\t\t\t\t\tfeatures. Bruce is very honest in his views and regularly highlights problems, bugs, or annoyances he finds with GAMS.\n\t\t\t\t\t\tMany of his suggestions make it into future GAMS releases.\u003c/p\u003e","ref":"/newsletter/signup/","title":"Newsletter"},{"body":"","excerpt":"","ref":"/newsletter/","title":"Newsletters"},{"body":"","excerpt":"","ref":"/specialization/nlp/","title":"NLP"},{"body":"Payment Options for GAMS Development Customers 1) Electronic Funds Transfer (Wire) Vendor name: GAMS Development Corporation\nAddress: GAMS Development Corporation 2750 Prosperity Ave Suite 500 Fairfax VA 22031 USA\nPhone: (202) 342-0180 General Office\nEmail: accounting@gams.com\nBank Account Information Vendor Account number: 256061759\nBank Name: Branch Banking and Trust Company\nBank Address: 200 W 2nd St. Winston-Salem, NC 27101\nBranch Phone: (703) 847-4350\nFor Domestic wire transfers, use routing number: 051404260.\nFor Foreign wire transfers, use SWIFT code: BRBTUS33\nAlways include the INVOICE number, DC number, OR QUOTE number\n2) Credit Card – American Express, VISA, or Master Card Please contact sales@gams.com or call (202) 342-0180 to pay by credit card. 
We will need your Invoice number, DC number, or quote number. We will then send you a secure link by email where you can enter your credit card information and complete the payment.\n3) Check Make check payable to GAMS Development Corp. Check must be payable in US dollars Identify the GAMS Invoice number, DC number, or Quote number on check Mail check to:\nGAMS Development Corporation 2750 Prosperity Ave Suite 500 Fairfax VA 22031 USA\n","excerpt":"\u003ch1 id=\"payment-options-for-gams-development-customers\"\u003ePayment Options for GAMS Development Customers\u003c/h1\u003e\n\u003ch2 id=\"1-electronic-funds-transfer-wire\"\u003e1) Electronic Funds Transfer (Wire)\u003c/h2\u003e\n\u003cp\u003e\u003cstrong\u003eVendor name:\u003c/strong\u003e\nGAMS Development Corporation\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eAddress:\u003c/strong\u003e\nGAMS Development Corporation\n2750 Prosperity Ave\nSuite 500\nFairfax VA 22031\nUSA\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003ePhone:\u003c/strong\u003e\n(202) 342-0180 \tGeneral Office\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eEmail:\u003c/strong\u003e \taccounting@gams.com\u003c/p\u003e\n\u003ch3 id=\"bank-account-information\"\u003eBank Account Information\u003c/h3\u003e\n\u003cp\u003e\u003cstrong\u003eVendor Account number:\u003c/strong\u003e 256061759\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eBank Name:\u003c/strong\u003e Branch Banking and Trust Company\u003c/p\u003e","ref":"/sales/payment/","title":"Payment Information"},{"body":"Privacy Policy May 2025 Contact Information Responsible party GAMS Software GmbH\nAugustinusstrasse 11b\n50226 Frechen\nFor our organization we have appointed an external data privacy officer with the following contact details:\nDatenschutzberater.NRW GmbH\nDennis Manz\nHansaring 78\n50670 Köln\nTelefon: +49 (0) 221 29 27 29 0\nE-Mail: datenschutz@datenschutzberater.nrw Which data do we collect and process? Contractual data We collect, process, and store your personal data when you request a quote, or place an order with GAMS. Furthermore, we store and process data related to the status of your order and to payment processing.\nCorrespondence data We store and process emails, faxes, or postal notices you send us or we send you. Furthermore, we use Google Forms to ask you for information in certain instances, e.g. when signing up for an event organised by GAMS, or when requesting certain software.\nNewsletters and product-related communication We use the services of MailChimp to send newsletters and product-related communications. This service is provided by Rocket Science Group LLC, 675 Ponce De Leon Ave NE, Suite 5000, Atlanta, GA 30308, USA. MailChimp is a service which organizes, analyzes, and distributes email communication.\nNewsletter distribution If you provide data (e.g. your email address) to subscribe to our newsletter, it will be stored on MailChimp servers in the USA.\nWe use MailChimp to analyze our newsletter campaigns. When you open an email sent by MailChimp, a file included in the email (called a web beacon) connects to MailChimp’s servers in the United States. This allows us to determine if a newsletter message has been opened and which links you click on. In addition, technical information is collected (e.g. time of retrieval, IP address, browser type, and operating system). This information is used exclusively for the statistical analysis of our newsletter campaigns. 
The results of these analyses can be used to better tailor future newsletters to your interests.\nData processing for newsletters is based on your consent (Art. 6 (1) (a) DSGVO). You may revoke your consent at any time by unsubscribing from the newsletter. For this purpose, we provide a link in every newsletter we send. You can also unsubscribe from the newsletter directly on the website. The data processed before we receive your request may still be legally processed. The data provided when registering for the newsletter will be used to distribute the newsletter until you cancel your subscription, at which point said data will be deleted from our servers and those of MailChimp.\nProduct-Related Follow-Up Emails If you sign up for an account in our user portal, or purchase a software license from us, we may also use MailChimp to send you product-related follow-up emails. These emails are intended to provide you with important information about the services you have signed up for, help you get started, or inform you about relevant product updates and features. The data processed for this purpose includes the email address you provided during sign-up in our user portal and may include information about your interactions with our services to help us tailor these communications. This data will also be stored on MailChimp servers in the USA. The legal basis for sending these product-related follow-up emails is our legitimate interest in providing a comprehensive user experience and informing you about our services for which you have registered (Art. 6 (1) (f) DSGVO). When you open a product-related follow-up email sent by MailChimp, similar to newsletters, a web beacon may connect to MailChimp’s servers, allowing us to determine if the message has been opened and which links you click on. Technical information (e.g., time of retrieval, IP address, browser type, and operating system) may also be collected. This information helps us understand user engagement with these communications and improve their relevance. If you do not wish to receive these product-related follow-up emails, you can opt out at any time. We provide an option to unsubscribe from these specific communications in every such email.\nGeneral Information Regarding MailChimp Usage MailChimp is certified under the EU-U.S. Data Privacy Framework.\nThis provides a guarantee to comply with European data protection standards when data is processed in the USA. For details, see the MailChimp data processing addendum at https://mailchimp.com/legal/data-processing-addendum/ Server log files Website Information about your use of this website is collected using server access logs. The format of these logs follows an industry standard called \u0026ldquo;Apache Combined Log Format\u0026rdquo;.\nThe collected information consists of the following:\nAnonymized client IP: The address from which you access the website. The last 12 bits of IPv4 addresses and the last 84 bits of IPv6 addresses are randomized. Timestamp: The time and day you accessed our website Request line: The path to the content your web browser is requesting. For example, when you load an image, the path might be \u0026ldquo;/products/image1.jpg\u0026rdquo;. Status code: A 3-digit code that signifies whether your request was successful (200) or resulted in an error (e.g. 404). The Internet Assigned Numbers Authority has defined a range of other status codes, which are useful for website operators when troubleshooting. 
Referer: The website you visited before visiting www.gams.com User Agent: The browser, browser version, and operating system you use to access our website We use the information gathered to help us make our site more useful to visitors and to better understand how and when our site is used. We do not track or collect personally identifiable information or associate gathered data with any personally identifying information from other sources.\nBy using this website, you consent to the collection of this data in the manner and for the purpose described.\nCloud services We retain server logs for a period of three months to support the detection, investigation, and resolution of security incidents, abuse, and system faults. These logs include full IP addresses, timestamps, requested resources, and error messages. After three months, the IP addresses are anonymized, while the remaining log data may be retained for analytical or operational purposes. Access to these logs is strictly limited to authorized personnel and used exclusively for security-related purposes.\nThe processing of this data is based on our legitimate interest in ensuring the security and integrity of our systems and services, pursuant to Article 6(1)(f) GDPR.\nSocial media We include content provided by Twitter and YouTube on our website. To increase your privacy, we only include this content as a static version. This means that Twitter and YouTube will not set any cookies on your computer when you just visit our website, or otherwise track you. However, once you click on any links in content provided in this way, you will be using those services directly, and the privacy rules of those services will apply. To exercise your rights according to the GDPR, please contact the relevant services directly.\nYouTube Google LLC, 1600 Amphitheatre Parkway, Mountain View, CA 94043, USA\nGoogle Privacy Policy Opt-out link Facebook Facebook Ireland Ltd., 4 Grand Canal Square, Grand Canal Harbour, Dublin 2, Ireland.\nFacebook Privacy Policy Opt-out link LinkedIn LinkedIn Privacy Policy Opt-out link Twitter Twitter Inc., 1355 Market Street, Suite 900, San Francisco, CA 94104, USA.\nTwitter Privacy Policy Opt-out link Job applicant data We process and store data you give us when you apply for a position at GAMS. This includes your name, address, phone numbers, email, and all other personal data that is in the documents you send us with your application.\nCookies Our website does not use any tracking or advertising-related cookies. The MIRO gallery sets a necessary cookie to handle the interactive demos.\nLawfulness of data processing We collect and process your data to perform the contract and to provide our services, to improve and adapt our services and our website to your needs, to provide updates and upgrades, to send you notifications regarding our services, to issue invoices, and to collect our claims.\nArt. 6 I lit. (a) GDPR serves as a legal basis for processing operations for which we obtain consent for a specific processing purpose. If the processing of personal data is necessary for the performance of a contract, the processing is based on Art. 6 I lit. (b) GDPR. The same applies to such processing operations which are necessary for the implementation of pre-contractual measures, for example in cases of enquiries about our products or services. If we are subject to a legal obligation which makes it necessary to process personal data, for example to fulfil tax obligations, processing is based on Art. 6 I lit. 
(c) GDPR.\nFinally, processing operations could be based on Art. 6 I lit. (f) GDPR. Processing operations which are not covered by any of the above legal bases are based on this legal basis if the processing is necessary to safeguard our legitimate interests or those of a third party, provided that the interests, fundamental rights and freedoms of the data subject do not prevail. We are allowed to carry out such processing operations in particular because they have been specifically mentioned by the European legislation. As a rule, a legitimate interest can be assumed if the data subject is a customer of the data controller.\nIf the processing of personal data is based on Art. 6 I lit. (f) GDPR, our legitimate interest is the performance of our business activities and the fulfilment of legal obligations, insofar as the processing does not fall under Art. 6 I lit. (c) GDPR.\nWe process applicant data in accordance with Art. 88 GDPR in conjunction with § 26 BDSG (new), the relevant German federal data protection act.\nCategories of Recipients Processors\nWe forward various personal data to our processors as data controllers in the context of a data processing contract . We have ensured the security of your data by entering into agreements for contract data processing. Our processors can be divided into the following categories:\nProvision of services: These include newsletter dispatch, printing and dispatch of invoices, customer surveys, payment service providers\nOperation of services, maintenance and servicing of hardware and software.\nWe only disclose data to authorities and third parties in accordance with legal regulations or a court order. Information to authorities can be given on the basis of a legal regulation to avert danger or for criminal prosecution. Third parties will only receive information if a statutory provision provides for this.\nData Processing in third countries Google: We use Google\u0026rsquo;s services to store and process our email correspondence. We have entered a data processing agreement with Google for this purpose. All our data is hosted in Googles European Data Center. See the current Data Processing Amendment for more details.\nAmazon: Our customer database is hosted on Amazon AWS systems. We have entered a data processing agreement with AWS. See the current Data Processing Amendment for more details.\nPayrexx: For processing credit card payments we use the external payment provider Payrexx AG, Burgstrasse 18, CH-3600 Thun. No credit card information is stored on GAMS systems. Payrexx will process personal data to handle payments on behalf of GAMS. Further information can be found in the Payrexx privacy policy at https://www.payrexx.com/site/assets/files/3945/privacy_policy.pdf .\nPaytrace: In addition to Payrexx, we also use Paytrace for credit card payment processing (mostly for our non-European customers). No credit card information is stored on GAMS systems. Read more in the Paytrace Privacy Policy .\nMailchimp: We have entered a data processing agreement with The Rocket Science Group LLC who operate Mailchimp to handle our newsletter. See the current Data Processing Addendum for more details.\nGAMS Development Corp: When you sign up for services or digital content, your data is transferred to GAMS Development Corp. in order to create quotes or for contract fulfillment on the basis of Art. 
6 (1) (b) GDPR.\nDuration of Data Storage We process and store personal data only for the period of time required to achieve the purpose of storage or if required by law:\nFor contract data, processing is restricted after termination of the contract, after the 10 year legal retention period in accordance with § 257 HGB and § 147 AO they are deleted.\nData that you enter as a job applicant during the recruitment process will be stored for a maximum of six months.\nFor customer correspondence, order and payment history, the statutory retention period of 6 years applies in accordance with § 257 HGB and § 147 AO.\nYour Rights Right of information and verification: You have the right to receive free information and confirmation of the personal data stored about you and a copy of this information at any time.\nRight of rectification: You have the right to demand the immediate correction of incorrect personal data concerning you. You also have the right to request the completion of incomplete personal data, including by means of a supplementary declaration, taking into account the purposes of the processing.\nRight of cancellation: You have the right to obtain the immediate deletion of personal data relating to you if one of the following reasons applies and provided that the processing is not necessary:\nthe personal data have been collected or otherwise processed for purposes for which they are no longer necessary You withdraw the consent on which the processing was based and there is no other legal basis for the processing. You object to the processing in accordance with Article 21(1) of the GDPR and there are no overriding legitimate reasons for processing, or you object to the processing in accordance with Article 21(2) of the GDPR. The personal data were processed unlawfully. The deletion of the personal data is necessary to comply with a legal obligation under European Union law or the law of the Member States to which we are subject. The personal data was collected in relation to information society services offered in accordance with Article 8 (1) of the GDPR. Right to restrict processing: You have the right to request the restriction of the processing if one of the following conditions is met:\nYou contest the accuracy of the personal data, for a period of time that allows us to verify the accuracy of the personal data. The processing is unlawful, you object to the deletion of the personal data and instead demand the restriction of the use of the personal data. We no longer need the personal data for the purposes of the processing, but you need it to assert, exercise or defend legal claims. You have lodged an objection to the processing in accordance with Art. 21 (1) GDPR and it is not yet clear whether our legitimate grounds outweigh yours. Rights of objection to the processing: You have the right to object at any time to the processing of personal data concerning you that is carried out on the basis of Art. 6 I (e) or (f) GDPR. In the event of an objection, we will no longer process the personal data unless we can demonstrate compelling reasons for processing that are worthy of protection and outweigh your interests, rights and freedoms, or unless the processing serves to assert, exercise or defend legal claims. 
You have the right to object at any time to the processing of personal data for the purpose of direct marketing.\nRight to revoke consent under data protection law: You have the right to revoke your consent to the processing of personal data at any time.\nRight of appeal to a regulatory authority: You have the right to appeal at any time to a regulatory authority in the Member State in which you are resident or working or in which the alleged infringement occurred, if you consider that the processing of personal data relating to you is contrary to the EU data protection regulation\nExistence of automatic decision making / profiling We do not carry out automatic decision making or profiling.\nGoogle Web Fonts For uniform representation of fonts, this page uses web fonts provided by Google. When you open a page, your browser loads the required web fonts into your browser cache to display texts and fonts correctly. For this purpose your browser has to establish a direct connection to Google servers. Google thus becomes aware that our web page was accessed via your IP address. The use of web fonts constitutes a justified interest pursuant to Art. 6 (1) (f) GDPR.\nIf your browser does not support web fonts, a standard font is used by your computer.\nFurther information about handling user data, can be found at https://developers.google.com/fonts/faq and in Google\u0026rsquo;s privacy policy at https://www.google.com/policies/privacy/ .\n","excerpt":"\u003ch1 id=\"privacy-policy\"\u003ePrivacy Policy\u003c/h1\u003e\n\u003ch5 id=\"may-2025\"\u003eMay 2025\u003c/h5\u003e\n\u003cbr\u003e\n\u003ch3 id=\"contact-information\"\u003eContact Information\u003c/h3\u003e\n\u003ch4 id=\"responsible-party\"\u003eResponsible party\u003c/h4\u003e\n\u003cp\u003eGAMS Software GmbH\u003cbr\u003e\nAugustinusstrasse 11b\u003cbr\u003e\n50226 Frechen\u003c/p\u003e\n\u003cp\u003eFor our organization we have appointed an external data privacy officer with the following contact details:\u003cbr\u003e\nDatenschutzberater.NRW GmbH\u003cbr\u003e\nDennis Manz\u003cbr\u003e\nHansaring 78\u003cbr\u003e\n50670 Köln\u003cbr\u003e\nTelefon: +49 (0) 221 29 27 29 0\u003cbr\u003e\nE-Mail: \u003ca href=\"mailto:datenschutz@datenschutzberater.nrw\"\u003edatenschutz@datenschutzberater.nrw\u003c/a\u003e\n\u003c/p\u003e\n\u003cbr\u003e\n\u003ch2 id=\"which-data-do-we-collect-and-process\"\u003eWhich data do we collect and process?\u003c/h2\u003e\n\n\n\n\n\n\n\n\n\n\n\n\n\u003cdiv class=\"accordion mb-3\" id=\"generatedAccordion\"\u003e\n\u003ch2 class=\"accordionHeader\" id=\"contractual-data\"\u003eContractual data\u003c/h2\u003e\n\u003cp\u003eWe collect, process, and store your personal data when you request a quote, or place an order with GAMS. 
Furthermore, we store and process data related to the status of your order and to payment processing.\u003c/p\u003e","ref":"/about/privacy/","title":"Privacy Policy"},{"body":"Title Publications Hobbies ","excerpt":"\u003ch1 id=\"title\"\u003eTitle\u003c/h1\u003e\n\u003ch4 id=\"publications\"\u003ePublications\u003c/h4\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/pbussiek/","title":"Puma"},{"body":"","excerpt":"","ref":"/team/rreynolds/","title":"Rachel"},{"body":"Title Publications Hobbies ","excerpt":"\u003ch1 id=\"title\"\u003eTitle\u003c/h1\u003e\n\u003ch4 id=\"publications\"\u003ePublications\u003c/h4\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/rkuhlmann/","title":"Renke"},{"body":" Summary of Qualifications Robin Schuchmann holds a B.Sc. and M.Sc. in Engineering and Business Administration from the University of Hannover. In 2016 he joined GAMS as an Operations Research Analyst. He is responsible for software development and project management in the area of mathematical programming. His core competencies are projects in the field of operations research that provide customers with powerful optimization software. Robin is one of the main developers of GAMS MIRO, a tool for the automated deployment of GAMS models. He regularly gives lectures at universities and international conferences.\nProfessional Profile 2018 – today Operations Research Analyst, GAMS Software GmbH, Braunschweig, Germany 2016 – 2017 Student Employee, GAMS Software GmbH, Braunschweig, Germany 2014 – 2016 Student Assistant, Institute for Production Management, Leibniz University Hannover, Germany Academic Degrees M.Sc. Industrial engineering, Leibniz University Hannover, 2017. B.Sc. Industrial engineering, Leibniz University Hannover, 2015. ","excerpt":"\u003cdiv class=\"container\"\u003e\n\u003cdiv class=\"row\"\u003e\n\u003cdiv class=\"col-md-3\"\u003e\n\u003cimg class=\"mb-3\" src=\"rschuchmann-profile-picture.jpg\" width=\"100%\"\u003e\n\u003c/div\u003e\n\u003cdiv class=\"col-md-8\"\u003e\n\u003ch2 id=\"summary-of-qualifications\"\u003eSummary of Qualifications\u003c/h2\u003e\n\u003cp\u003eRobin Schuchmann holds a B.Sc. and M.Sc. in Engineering and Business Administration from the University of Hannover. In 2016 he joined GAMS as an Operations Research Analyst. He is responsible for software development and project management in the area of mathematical programming. His core competencies are projects in the field of operations research that provide customers with powerful optimization software.\nRobin is one of the main developers of GAMS MIRO, a tool for the automated deployment of GAMS models. He regularly gives lectures at universities and international conferences.\u003c/p\u003e","ref":"/team/rschuchmann/","title":"Robin"},{"body":"Integrated Solvers GAMS has all the functionality required to develop, debug, deploy, and maintain optimization models. A large set of mathematical model types (linear, mixed-integer, nonlinear, mixed-integer nonlinear, mixed complementarity, etc.) can be formulated with GAMS. GAMS creates optimization problems from your models and data, and retrieves results for analysis and processing, but it does not solve the optimization problem.\nInstead, it uses so-called solvers that have been connected to GAMS and are included in the GAMS system. Here is a brief description of each solver, the model types each solver is capable of solving, and the platforms supported by each solver. 
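For orientation, the solver used for a given model type is selected in GAMS with a single option statement ahead of the solve. The following minimal sketch is illustrative only; the model and variable names are made up, and cplex/xpress stand in for whichever solvers your license covers:
* Illustrative only: mymodel and totalcost are invented names.
option lp = cplex;
solve mymodel using lp minimizing totalcost;
* Trying a different solver is a one-line change, e.g.:
* option lp = xpress;
Because the model formulation itself does not change, benchmarking several candidate solvers usually amounts to repeating the solve with a different option setting.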
The GAMS Base Module includes all open-source solvers, some free solvers and free links, and all other solvers in size-limited versions. Although all these solvers are included in the GAMS System, some of them require a commercial license (for details visit our regular or academic price lists) and their usage is governed by our license agreement.\nIf you already have access to a particular solver you would like to use, you can instead purchase a GAMS/Solver-Link. Each link connects the GAMS Base Module to a particular solver, but does not include a license for the solver. It may be necessary to purchase a separate license from the solver vendor before the solver can be used.\nChoosing the right solver can involve a fair bit of trial and error, and in general, the performance of a specific solver cannot readily be predicted from problem size or other simple measures.\nWe strongly recommend testing alternative solvers to determine which offers the best tradeoff of price and performance for your needs.\nWe are happy to provide you with a free evaluation license for this purpose. ","excerpt":"\u003ch1 id=\"integrated-solvers\"\u003eIntegrated Solvers\u003c/h1\u003e\n\u003cp\u003eGAMS has all the functionality required to develop, debug, deploy, and maintain optimization models. A large set of mathematical model types (linear, mixed-integer, nonlinear, mixed-integer nonlinear, mixed complementarity, etc.) can be formulated with GAMS. \u003c/p\u003e\n\u003cp\u003eGAMS creates optimization problems from your models and data, and retrieves results for analysis and processing, but it does not \u003cem\u003esolve\u003c/em\u003e the optimization problem.\u003c/p\u003e","ref":"/products/solvers/","title":"Solvers"},{"body":"Stefan A Mann, PhD Stefan joined GAMS in 2019.\nPublications Hobbies ","excerpt":"\u003ch1 id=\"stefan-a-mann-phd\"\u003eStefan A Mann, PhD\u003c/h1\u003e\n\u003cp\u003eStefan joined GAMS in 2019.\u003c/p\u003e\n\u003ch4 id=\"publications\"\u003ePublications\u003c/h4\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/smann/","title":"Stefan"},{"body":"Title Publications Hobbies ","excerpt":"\u003ch1 id=\"title\"\u003eTitle\u003c/h1\u003e\n\u003ch4 id=\"publications\"\u003ePublications\u003c/h4\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/svigerske/","title":"Stefan"},{"body":" Summary of Qualifications and Experience Stephen has a PhD in Mathematics from the University of New South Wales, Australia. His PhD focused on the application of optimisation techniques to solve robust planning and recovery problems in the airline industry. Following his PhD, Stephen worked as a postdoctoral researcher at the Zuse Institute Berlin, where he contributed to the development of the constraint integer programming solver SCIP. In 2017, Stephen was awarded an EPSRC fellowship to combine his experience with decomposition techniques and solver development to create a general Benders decomposition framework in SCIP. He continued his research in operations research and decomposition techniques at Lancaster University (2017-2019) and the University of Exeter (2019-2022). In 2022, Stephen made the move from academia to industry as a senior research engineer at Quantagonia. 
Stephen joined GAMS in 2024 to support optimisation projects and the newly formed solver development team.\nProfessional Profile 2024 – today Operations Research Analyst, GAMS Software GmbH, Braunschweig, Germany (remote in UK) 2022 – 2024 Senior Research Engineer, Quantagonia GmbH, Frankfurt, Germany (remote in UK) 2019 – 2022 Lecturer and Senior Lecturer, University of Exeter, United Kingdom 2017 – 2019 EPSRC Research Fellow, Lancaster University, United Kingdom 2014 – 2017 Postdoctoral Researcher, Zuse Institute Berlin, Germany Academic Degrees PhD Mathematics. University of New South Wales, Sydney, Australia, 2014. Bachelor of Mathematics Advanced. University of Wollongong, Australia, 2007. ","excerpt":"\u003cdiv class=\"container\"\u003e\n\u003cdiv class=\"row\"\u003e\n\u003cdiv class=\"col-md-3\"\u003e\n\u003cimg class=\"mb-3\" src=\"smaher-profile-picture.jpg\" width=\"100%\"\u003e\n\u003c/div\u003e\n\u003cdiv class=\"col-md-8\"\u003e\n\u003ch2 id=\"summary-of-qualifications-and-experience\"\u003eSummary of Qualifications and Experience\u003c/h2\u003e\n\u003cp\u003eStephen has a PhD in Mathematics from the University of New South Wales, Australia. His PhD focused on the application of optimisation techniques to solve robust planning and recovery problems in the airline industry. Following his PhD, Stephen worked as a postdoctoral researcher at the Zuse Institute Berlin, where he contributed to the development of the constraint integer programming solver SCIP. In 2017, Stephen was awarded an EPSRC fellowship to combine his experience with decomposition techniques and solver development to create a general Benders decomposition framework in SCIP. He continued his research in operations research and decomposition techniques at Lancaster University (2017-2019) and the University of Exeter (2019-2022). In 2022, Stephen made the move from academia to industry as a senior research engineer at Quantagonia. Stephen joined GAMS in 2024 to support optimisation projects and the newly formed solver development team.\u003c/p\u003e","ref":"/team/smaher/","title":"Stephen"},{"body":" Steven Dirkse, PhD Dr. Steven Dirkse received his PhD in Computer Science from UW-Madison in 1994. After a year of teaching math and CS at Calvin College, he joined the staff at GAMS Development in 1995, becoming Director of Optimization in 2003 and President in 2016. His primary focus has been in software development, notably solvers and solver links, data utilities, multi-threading, and quality control and performance testing. He has also consulted on optimization projects with several GAMS clients during this time.\nSteve has published in many leading journals, gives lectures at conferences, and is an active member of the community: e.g., he is a past Director and Secretary/Treasurer of the INFORMS Computing Society. As part of his thesis research he developed the PATH solver for Mixed Complementarity Problems (MCP), a robust, large-scale solver that was a great leap forward in solving MCP. For the work on the PATH solver he was awarded the Beale-Orchard-Hays Prize in 1997.\n","excerpt":"\u003cdiv class=\"container\"\u003e\n\u003cdiv class=\"row\"\u003e\n\u003cdiv class=\"col-md-3 mb-3\"\u003e\n \u003cimg src=\"sdirkse-profile-picture.JPG\" width=\"100%\"\u003e\n\u003c/div\u003e\n\u003cdiv class=\"col-md-8\"\u003e\n\u003ch1 id=\"steven-dirkse-phd\"\u003eSteven Dirkse, PhD\u003c/h1\u003e\n\u003cp\u003eDr. Steven Dirkse received his PhD in Computer Science from UW-Madison in 1994. 
After a year of teaching math and CS at Calvin College, he joined the staff at GAMS Development in 1995, becoming Director of Optimization in 2003 and President in 2016. His primary focus has been in software development, notably solvers and solver links, data utilities, multi-threading, and quality control and performance testing. He has also consulted on optimization projects with several GAMS clients during this time.\u003c/p\u003e","ref":"/team/sdirkse/","title":"Steve"},{"body":"Title Publications Hobbies ","excerpt":"\u003ch1 id=\"title\"\u003eTitle\u003c/h1\u003e\n\u003ch4 id=\"publications\"\u003ePublications\u003c/h4\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/sbussieck/","title":"Susanne"},{"body":"","excerpt":"","ref":"/team/tcanbay/","title":"Tayyip"},{"body":"Marius Bützler Marius joined GAMS Software GmbH in August 2018. He takes care of the daily tasks of the sales and marketing area and supports the technical sales in upcoming projects. In 2019, Marius also started the Bachelor's degree in Business Information Management in Cologne. ","excerpt":"\u003ch1 id=\"marius-bützler\"\u003eMarius Bützler\u003c/h1\u003e\n\u003cdiv class =”container”\u003e\n \u003cdiv class=\"row align-items-center\"\u003e\n \n\n\n \u003cdiv class=\"col-md-2\"\u003e\n\n \u003cfigure\u003e\u003cimg src=\"/specialists/test-specialist2/scuba-diver.png\" height=\"200\"\u003e\n\u003c/figure\u003e\n\n\u003c/div\u003e\n\n\n\n \u003cdiv class=\"col-md-9\"\u003e\n\n \n\nMarius joined GAMS Software GmbH in August 2018. He takes care of the daily tasks of the sales and marketing area and supports the technical sales in upcoming projects.\nIn 2019, Marius also started the Bachelor's degree in Business Information Management in Cologne.\n\n\n\u003c/div\u003e\n\n\n\n \u003c/div\u003e\n\u003c/div\u003e\n\n\u003chr\u003e","ref":"/specialists/test-specialist2/","title":"Test2"},{"body":"Marius Bützler Marius joined GAMS Software GmbH in August 2018. He takes care of the daily tasks of the sales and marketing area and supports the technical sales in upcoming projects. In 2019, Marius also started the Bachelor's degree in Business Information Management in Cologne. ","excerpt":"\u003ch1 id=\"marius-bützler\"\u003eMarius Bützler\u003c/h1\u003e\n\u003cdiv class =”container”\u003e\n \u003cdiv class=\"row align-items-center\"\u003e\n \n\n\n \u003cdiv class=\"col-md-2\"\u003e\n\n \u003cfigure\u003e\u003cimg src=\"/specialists/test-specialist3/scuba-diver.png\" height=\"200\"\u003e\n\u003c/figure\u003e\n\n\u003c/div\u003e\n\n\n\n \u003cdiv class=\"col-md-9\"\u003e\n\n \n\nMarius joined GAMS Software GmbH in August 2018. He takes care of the daily tasks of the sales and marketing area and supports the technical sales in upcoming projects.\nIn 2019, Marius also started the Bachelor's degree in Business Information Management in Cologne.\n\n\n\u003c/div\u003e\n\n\n\n \u003c/div\u003e\n\u003c/div\u003e\n\n\u003chr\u003e","ref":"/specialists/test-specialist3/","title":"Test3"},{"body":"Marius Bützler Marius joined GAMS Software GmbH in August 2018. He takes care of the daily tasks of the sales and marketing area and supports the technical sales in upcoming projects. In 2019, Marius also started the Bachelor's degree in Business Information Management in Cologne. 
","excerpt":"\u003ch1 id=\"marius-bützler\"\u003eMarius Bützler\u003c/h1\u003e\n\u003cdiv class =”container”\u003e\n \u003cdiv class=\"row align-items-center\"\u003e\n \n\n\n \u003cdiv class=\"col-md-2\"\u003e\n\n \u003cfigure\u003e\u003cimg src=\"/specialists/test-specialist4/scuba-diver.png\" height=\"200\"\u003e\n\u003c/figure\u003e\n\n\u003c/div\u003e\n\n\n\n \u003cdiv class=\"col-md-9\"\u003e\n\n \n\nMarius joined GAMS Software GmbH in August 2018. He takes care of the daily tasks of the sales and marketing area and supports the technical sales in upcoming projects.\nIn 2019, Marius also started the Bachelor's degree in Business Information Management in Cologne.\n\n\n\u003c/div\u003e\n\n\n\n \u003c/div\u003e\n\u003c/div\u003e\n\n\u003chr\u003e","ref":"/specialists/test-specialist4/","title":"Test3"},{"body":"The CONOPT Algorithm CONOPT is a solver for large-scale nonlinear optimization (NLP) originally developed and maintained by ARKI Consulting \u0026amp; Development A/S in Bagsvaerd, Denmark. CONOPT is a feasible path solver based on the old proven GRG method with many newer extensions. CONOPT has been designed to be efficient and reliable for a broad class of models. The original GRG method helps achieve reliability and speed for models with a large degree of nonlinearity, i.e. difficult models, and CONOPT is often preferable for very nonlinear models and for models where feasibility is difficult to achieve. Extensions to the GRG method such as preprocessing, a special phase 0, linear mode iterations, and a sequential linear programming and a sequential quadratic programming component makes CONOPT efficient on easier and mildly nonlinear models as well. The multi-method architecture of CONOPT combined with build-in logic for dynamic selection of the most appropriate method makes CONOPT a strong all-round NLP solver.\nAll components of CONOPT have been designed for large and sparse models. Models with over 10,000 constraints are routinely being solved. Specialized models with up to 1 million constraints have also been solved with CONOPT. The limiting factor is difficult to define. It is a combination of the number of constraints or variables with the number of super basic variables, a measure of the degrees of freedom around the optimal point. Models with over 500 super basic variables can sometimes be slow.\n","excerpt":"\u003ch1 id=\"the-conopt-algorithm\"\u003eThe CONOPT Algorithm\u003c/h1\u003e\n\u003cp\u003eCONOPT is a solver for large-scale nonlinear optimization (NLP) originally developed and maintained by ARKI Consulting \u0026amp; Development A/S in Bagsvaerd, Denmark. CONOPT is a feasible path solver based on the old proven GRG method with many newer extensions. CONOPT has been designed to be efficient and reliable for a broad class of models. The original GRG method helps achieve reliability and speed for models with a large degree of nonlinearity, i.e. difficult models, and CONOPT is often preferable for very nonlinear models and for models where feasibility is difficult to achieve. Extensions to the GRG method such as preprocessing, a special phase 0, linear mode iterations, and a sequential linear programming and a sequential quadratic programming component makes CONOPT efficient on easier and mildly nonlinear models as well. 
The multi-method architecture of CONOPT combined with built-in logic for dynamic selection of the most appropriate method makes CONOPT a strong all-round NLP solver.\u003c/p\u003e","ref":"/products/conopt/algorithm/","title":"The CONOPT Algorithm"},{"body":" The Archive Presentations given by GAMS staff GAMS Advertisements GAMS Flyer and Datasheets Bruce McCarl Newsletters ","excerpt":"\u003cdiv style=\"height:55vh\"\u003e\n\u003ch2\u003eThe Archive\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"presentations\"\u003ePresentations given by GAMS staff\u003c/a\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"ads\"\u003eGAMS Advertisements\u003c/a\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"flyer\"\u003eGAMS Flyer and Datasheets\u003c/a\u003e\n\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"/newsletter/mccarl/\"\u003eBruce McCarl Newsletters\u003c/a\u003e\n\u003c/li\u003e\n\u003c/ul\u003e\n\u003c/div\u003e","ref":"/archives/","title":"The GAMS Archives"},{"body":"The GAMS Team ","excerpt":"\u003ch2 id=\"the-gams-team\"\u003eThe GAMS Team\u003c/h2\u003e","ref":"/team/","title":"The Team"},{"body":"Thomas Maindl Lorem ipsum ","excerpt":"\u003ch1 id=\"thomas-maindl\"\u003eThomas Maindl\u003c/h1\u003e\n\u003cdiv class =”container”\u003e\n \u003cdiv class=\"row align-items-center\"\u003e\n \n\n\n \u003cdiv class=\"col-md-2\"\u003e\n\n \u003cfigure\u003e\u003cimg src=\"/specialists/maindl_sdb/SDB_logo.png\" height=\"200\"\u003e\n\u003c/figure\u003e\n\n\u003c/div\u003e\n\n\n\n \u003cdiv class=\"col-md-9\"\u003e\n\n \n\nLorem ipsum\n\n\n\u003c/div\u003e\n\n\n\n \u003c/div\u003e\n\u003c/div\u003e\n\n\u003chr\u003e","ref":"/specialists/maindl_sdb/","title":"Thomas Maindl"},{"body":"Title Publications Hobbies ","excerpt":"\u003ch1 id=\"title\"\u003eTitle\u003c/h1\u003e\n\u003ch4 id=\"publications\"\u003ePublications\u003c/h4\u003e\n\u003ch5 id=\"hobbies\"\u003eHobbies\u003c/h5\u003e","ref":"/team/vjha/","title":"Vaibhavnath"},{"body":"Versions of CONOPT The latest version of CONOPT3 is version 3.17. A letter after 3.17 indicates a bug-fix level. The latest version of CONOPT4 is version 4.35. A letter after 4.35 indicates a bug-fix level. ","excerpt":"\u003ch1 id=\"versions-of-conopt\"\u003eVersions of CONOPT\u003c/h1\u003e\n\u003cul\u003e\n\u003cli\u003eThe latest version of CONOPT3 is version 3.17. A letter after 3.17 indicates a bug-fix level.\u003c/li\u003e\n\u003cli\u003eThe latest version of CONOPT4 is version 4.35. A letter after 4.35 indicates a bug-fix level.\u003c/li\u003e\n\u003c/ul\u003e","ref":"/products/conopt/versions/","title":"Versions of CONOPT"}]