An introduction to applied optimal control / Greg Knowles.

By: Greg Knowles
Material type: Text
Series: Mathematics in science and engineering ; v. 159
Publication details: New York : Academic Press, 1981
Description: 1 online resource (x, 180 pages) : illustrations
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9780080956657
  • 0080956653
  • 9780124169609
  • 0124169600
Subject(s):
Additional physical formats: Print version: Introduction to applied optimal control
DDC classification:
  • 629.8/312 22
LOC classification:
  • QA402.3 .K56 1981eb
Online resources:
Contents:
Front Page; An Introduction to Applied Optimal Control; Copyright Page; Contents; Preface; Chapter I. Examples of Control Systems; The Control Problem; General Form of the Control Problem; Chapter II. The General Linear Time Optimal Problem; 1. Introduction; 2. Applications of the Maximum Principle; 3. Normal Systems-Uniqueness of the Optimal Control; 4. Further Examples of Time Optimal Control; 5. Numerical Computation of the Switching Times; References; Chapter III. The Pontryagin Maximum Principle; 1. The Maximum Principle; 2. Classical Calculus of Variations; 3. More Examples of the Maximum Principle; References; Chapter IV. The General Maximum Principle; Control Problems with Terminal Payoff; 1. Introduction; 2. Control Problems with Terminal Payoff; 3. Existence of Optimal Controls; References; Chapter V. Numerical Solution of Two-Point Boundary-Value Problems; 1. Linear Two-Point Boundary-Value Problems; 2. Nonlinear Shooting Methods; 3. Nonlinear Shooting Methods: Implicit Boundary Conditions; 4. Quasi-Linearization; 5. Finite-Difference Schemes and Multiple Shooting; 6. Summary; References
Chapter VI. Dynamic Programming and Differential Games; 1. Discrete Dynamic Programming; 2. Continuous Dynamic Programming-Control Problems; 3. Continuous Dynamic Programming-Differential Games; References; Chapter VII. Controllability and Observability; 1. Controllable Linear Systems; 2. Observability; References; Chapter VIII. State-Constrained Control Problems; 1. The Restricted Minimum Principle; 2. Jump Conditions; 3. The Continuous Wheat Trading Model without Shortselling; 4. Some Models in Production and Inventory Control; References
Chapter IX. Optimal Control of Systems Governed by Partial Differential Equations; 1. Some Examples of Elliptic Control Problems; 2. Necessary and Sufficient Conditions for Optimality; 3. Boundary Control and Approximate Controllability of Elliptic Systems; 4. The Control of Systems Governed by Parabolic Equations; 5. Time Optimal Control; 6. Approximate Controllability for Parabolic Problems; References; Appendix I. Geometry of Rn; Appendix II. Existence of Time Optimal Controls and the Bang-Bang Principle; Appendix III. Stability; Index
Action note:
  • digitized 2010 HathiTrust Digital Library committed to preserve
Summary: An introduction to applied optimal control.
Holdings
  Item type: eBook
  Current library: eBook e-Library
  Collection: EBSCO Computers
  Status: Available
Total holds: 0

Includes bibliographical references and index.

Use copy. Restrictions unspecified. MiAaHDL

Electronic reproduction. [Place of publication not identified] : HathiTrust Digital Library, 2010. MiAaHDL

Master and use copy. Digital master created according to Benchmark for Faithful Digital Reproductions of Monographs and Serials, Version 1. Digital Library Federation, December 2002. MiAaHDL

http://purl.oclc.org/DLF/benchrepro0212

Print version record.

Powered by Koha