

An Introduction to Continuous Optimization

Foundations and Fundamental Algorithms

Ships the next working day


Revised with second printing, 2018. Optimization, or mathematical programming, is a fundamental subject within decision science and operations research in which mathematical decision models are constructed, analyzed, and solved. Andreasson provides a basis for the analysis of optimization models and candidate optimal solutions for continuous optimization models. Elementary but rigorous mathematics and linear algebra underlie the workings of convexity and duality, and necessary/sufficient local/global optimality conditions for continuous optimization problems. Natural algorithms are developed from these optimality conditions, and their most important convergence characteristics are analyzed. The author answers many more “Why?” and “Why not?” than “How?” questions. The author provides lecture, exercise and reading material for a first course on continuous optimization and mathematical programming. It is geared towards third-year students, and has already been used for nearly ten years. The preface describes the main changes to the first edition. The book can be used in mathematical optimization courses at engineering, economics, and business schools. It is a perfect starting point for anyone who wishes to develop an understanding of the subject before actually applying it.

Table of contents
    • 1
      I Introduction
      • 1
        3
        Modelling and classification
        • 1.1
          3
          Modelling of optimization problems
        • 1.2
          11
          A quick glance at optimization history
        • 1.3
          13
          Classification of optimization models
        • 1.4
          16
          Conventions
        • 1.5
          18
          Applications and modelling examples
        • 1.6
          19
          On optimality conditions
        • 1.7
          20
          Soft and hard constraints
          • 1.7.1
            20
            Definitions
          • 1.7.2
            21
            A derivation of the exterior penalty function
        • 1.8
          23
          A road map through the material
        • 1.9
          28
          Background and didactics
        • 1.10
          30
          Illustrating the theory
        • 1.11
          31
          Notes and further reading
  • 33
    II Fundamentals
      • 2
        35
        Analysis and algebra—A summary
        • 2.1
          35
          Reductio ad absurdum
        • 2.2
          36
          Linear algebra
        • 2.3
          39
          Analysis
      • 3
        43
        Convex analysis
        • 3.1
          43
          Convexity of sets
        • 3.2
          45
          Polyhedral theory
          • 3.2.1
            45
            Convex hulls
          • 3.2.2
            49
            Polytopes
          • 3.2.3
            50
            Polyhedra
          • 3.2.4
            56
            Fourier Elimination
          • 3.2.5
            60
            Farkas’ Lemma
          • 3.2.6
            63
            Minkowski–Weyl Theorem
        • 3.3
          65
          Convex functions
        • 3.4
          74
          Application: the projection of a vector onto a convex set
        • 3.5
          76
          Notes and further reading
  • 79
    III Optimality Conditions
      • 4
        81
        An introduction to optimality conditions
        • 4.1
          81
          Local and global optimality
        • 4.2
          84
          Existence of optimal solutions
          • 4.2.1
            84
            A classic result
          • 4.2.2
            87
            *Non-standard results
          • 4.2.3
            89
            Special optimal solution sets
        • 4.3
          91
          Optimality conditions for unconstrained optimization
        • 4.4
          94
          Optimality conditions for optimization over convex sets
        • 4.5
          102
          Near-optimality in convex optimization
        • 4.6
          103
          Applications
          • 4.6.1
            103
            Continuity of convex functions
          • 4.6.2
            105
            The Separation Theorem
          • 4.6.3
            115
            Euclidean projection
          • 4.6.4
            116
            Fixed point theorems
        • 4.7
          122
          Notes and further reading
      • 5
        125
        Optimality conditions
        • 5.1
          125
          Relations between optimality conditions and CQs at a glance
        • 5.2
          126
          A note of caution
        • 5.3
          128
          Geometric optimality conditions
        • 5.4
          134
          The Fritz John conditions
        • 5.5
          141
          The Karush–Kuhn–Tucker conditions
        • 5.6
          145
          Proper treatment of equality constraints
        • 5.7
          147
          Constraint qualifications
          • 5.7.1
            147
            Mangasarian–Fromovitz CQ (MF CQ)
          • 5.7.2
            148
            Slater CQ
          • 5.7.3
            148
            Linear independence CQ (LICQ)
          • 5.7.4
            149
            Affine constraints
        • 5.8
          150
          Sufficiency of the KKT conditions under convexity
        • 5.9
          152
          Applications and examples
        • 5.10
          154
          Notes and further reading
      • 6
        157
        Lagrangian duality
        • 6.1
          157
          The relaxation theorem
        • 6.2
          158
          Lagrangian duality
          • 6.2.1
            158
            Lagrangian relaxation and the dual problem
          • 6.2.2
            163
            Global optimality conditions
          • 6.2.3
            165
            Strong duality for convex programs
          • 6.2.4
            170
            Strong duality for linear and quadratic programs
          • 6.2.5
            172
            Two illustrative examples
        • 6.3
          174
          Differentiability properties of the dual function
          • 6.3.1
            174
            Subdifferentiability of convex functions
          • 6.3.2
            178
            Differentiability of the Lagrangian dual function
        • 6.4
          180
          *Subgradient optimization methods
          • 6.4.1
            180
            Convex problems
          • 6.4.2
            186
            Application to the Lagrangian dual problem
          • 6.4.3
            189
            The generation of ascent directions
        • 6.5
          190
          *Obtaining a primal solution
          • 6.5.1
            191
            Differentiability at the optimal solution
          • 6.5.2
            192
            Everett’s Theorem
        • 6.6
          194
          *Sensitivity analysis
          • 6.6.1
            194
            Analysis for convex problems
          • 6.6.2
            196
            Analysis for differentiable problems
        • 6.7
          197
          Applications
          • 6.7.1
            197
            Electrical networks
          • 6.7.2
            201
            A Lagrangian relaxation of the traveling salesman problem
        • 6.8
          206
          Notes and further reading
  • 209
    IV Linear Programming
      • 7
        211
        Linear programming: An introduction
        • 7.1
          211
          The manufacturing problem
        • 7.2
          212
          A linear programming model
        • 7.3
          213
          Graphical solution
        • 7.4
          213
          Sensitivity analysis
          • 7.4.1
            214
            An increase in the number of large pieces available
          • 7.4.2
            215
            An increase in the number of small pieces available
          • 7.4.3
            216
            A decrease in the price of the tables
        • 7.5
          217
          The dual of the manufacturing problem
          • 7.5.1
            217
            A competitor
          • 7.5.2
            217
            A dual problem
          • 7.5.3
            218
            Interpretations of the dual optimal solution
      • 8
        219
        Linear programming models
        • 8.1
          219
          Linear programming modelling
        • 8.2
          224
          The geometry of linear programming
          • 8.2.1
            225
            Standard form
          • 8.2.2
            228
            Basic feasible solutions and the Representation Theorem
          • 8.2.3
            235
            Adjacent extreme points
        • 8.3
          237
          Notes and further reading
      • 9
        239
        The simplex method
        • 9.1
          239
          The algorithm
          • 9.1.1
            240
            A BFS is known
          • 9.1.2
            247
            A BFS is not known: phase I & II
          • 9.1.3
            251
            Alternative optimal solutions
        • 9.2
          251
          Termination
        • 9.3
          252
          Computational complexity
        • 9.4
          253
          Notes and further reading
      • 10
        255
        LP duality and sensitivity analysis
        • 10.1
          255
          Introduction
        • 10.2
          256
          The linear programming dual
          • 10.2.1
            257
            Canonical form
          • 10.2.2
            257
            Constructing the dual
        • 10.3
          261
          Linear programming duality theory
          • 10.3.1
            261
            Weak and strong duality
          • 10.3.2
            264
            Complementary slackness
          • 10.3.3
            268
            An alternative derivation of the dual linear program
        • 10.4
          268
          The dual simplex method
        • 10.5
          272
          Sensitivity analysis
          • 10.5.1
            273
            Perturbations in the objective function
          • 10.5.2
            274
            Perturbations in the right-hand side coefficients
          • 10.5.3
            275
            Addition of a variable or a constraint
        • 10.6
          277
          Column generation in linear programming
          • 10.6.1
            277
            The minimum cost multi-commodity network flow problem
          • 10.6.2
            280
            The column generation principle
          • 10.6.3
            282
            An algorithm instance
        • 10.7
          284
          Notes and further reading
  • 287
    V Algorithms
      • 11
        289
        Unconstrained optimization
        • 11.1
          289
          Introduction
        • 11.2
          291
          Descent directions
          • 11.2.1
            291
            Introduction
          • 11.2.2
            293
            Newton’s method and extensions
          • 11.2.3
            297
            Least-squares problems and the Gauss–Newton algorithm
        • 11.3
          299
          The line search problem
          • 11.3.1
            299
            A characterization of the line search problem
          • 11.3.2
            300
            Approximate line search strategies
        • 11.4
          302
          Convergent algorithms
        • 11.5
          305
          Finite termination criteria
        • 11.6
          307
          A comment on non-differentiability
        • 11.7
          309
          Trust region methods
        • 11.8
          310
          Conjugate gradient methods
          • 11.8.1
            310
            Conjugate directions
          • 11.8.2
            311
            Conjugate direction methods
          • 11.8.3
            313
            Generating conjugate directions
          • 11.8.4
            313
            Conjugate gradient methods
          • 11.8.5
            316
            Extension to non-quadratic problems
        • 11.9
          317
          A quasi-Newton method: DFP
        • 11.10
          320
          Convergence rates
        • 11.11
          321
          Implicit functions
        • 11.12
          322
          Notes and further reading
      • 12
        323
        Feasible-direction methods
        • 12.1
          323
          Feasible-direction methods
        • 12.2
          325
          The Frank–Wolfe algorithm
        • 12.3
          328
          The simplicial decomposition algorithm
        • 12.4
          331
          The gradient projection algorithm
        • 12.5
          337
          Application: traffic equilibrium
          • 12.5.1
            337
            Model analysis
          • 12.5.2
            340
            Algorithms and a numerical example
        • 12.6
          341
          Algorithms given by closed maps
          • 12.6.1
            341
            Algorithmic maps
          • 12.6.2
            344
            Closed maps
          • 12.6.3
            345
            A convergence theorem
          • 12.6.4
            348
            Additional results on composite maps
          • 12.6.5
            352
            Convergence of some algorithms defined by closed descent maps
        • 12.7
          353
          Active set methods
          • 12.7.1
            354
            The methods of Zoutendijk, Topkis, and Veinott
          • 12.7.2
            359
            A primal–dual active set method for polyhedral constraints
          • 12.7.3
            362
            Rosen’s gradient projection algorithm
        • 12.8
          366
          Reduced gradient algorithms
          • 12.8.1
            366
            Introduction
          • 12.8.2
            367
            The reduced gradient algorithm
          • 12.8.3
            369
            Convergence analysis
        • 12.9
          370
          Notes and further reading
      • 13
        373
        Constrained optimization
        • 13.1
          373
          Penalty methods
          • 13.1.1
            374
            Exterior penalty methods
          • 13.1.2
            378
            Interior penalty methods
          • 13.1.3
            381
            Computational considerations
          • 13.1.4
            382
            Applications and examples
        • 13.2
          385
          Sequential quadratic programming
          • 13.2.1
            385
            Introduction
          • 13.2.2
            388
            A penalty-function based SQP algorithm
          • 13.2.3
            393
            A numerical example on the MSQP algorithm
          • 13.2.4
            394
            On recent developments in SQP algorithms
        • 13.3
          394
          A summary and comparison
        • 13.4
          395
          Notes and further reading
  • 397
    VI Exercises
      • 14
        399
        Exercises
        • 399
          Chapter 1: Modelling and classification
        • 402
          Chapter 3: Convexity
        • 406
          Chapter 4: An introduction to optimality conditions
        • 412
          Chapter 5: Optimality conditions
        • 417
          Chapter 6: Lagrangian duality
        • 422
          Chapter 8: Linear programming models
        • 424
          Chapter 9: The simplex method
        • 426
          Chapter 10: LP duality and sensitivity analysis
        • 431
          Chapter 11: Unconstrained optimization
        • 436
          Chapter 12: Feasible direction methods
        • 439
          Chapter 13: Constrained optimization
      • 15
        443
        Answers to the exercises
        • 443
          Chapter 1: Modelling and classification
        • 446
          Chapter 3: Convexity
        • 449
          Chapter 4: An introduction to optimality conditions
        • 452
          Chapter 5: Optimality conditions
        • 455
          Chapter 6: Lagrangian duality
        • 457
          Chapter 8: Linear programming models
        • 459
          Chapter 9: The simplex method
        • 462
          Chapter 10: LP duality and sensitivity analysis
        • 464
          Chapter 11: Unconstrained optimization
        • 466
          Chapter 12: Feasible direction methods
        • 467
          Chapter 13: Constrained optimization
    • 469
      References
    • 487
      Index
Information

Language:

English

ISBN:

9789144115290

Year of publication:

2005

Year of revision:

2016

Article number:

32217-03

Edition:

Third

Number of pages:

508