my friend prays

Sunday, 29 August 2010

History of Control Theory

 Source: Control Theory Timeline
Please note the following:
The five areas of control theory discussed below draw on the latest available materials and survey papers, and do not by any means cover the full multi-disciplinary breadth of control systems.
The materials outlined below are extracted from standard resources and the references therein. Related and appropriate references are cited, but they are far from complete.
Control Theory: A Quick Overview
Adaptive Control
Filtering and Stochastic Control
H-Infinity Control
Linear Matrix Inequality
Nonlinear Control
References
Control Theory: A Quick Overview
1642-1754
The work of I. Newton (1642-1727), G. W. Leibniz (1646-1716), the brothers Bernoulli (late 1600s and early 1700s), and J. F. Riccati (1676-1754) led to the development of the infinitesimal calculus, which in turn enabled the theory of differential equations. ([26])


1736-1865
J. L. Lagrange (1736-1813) and W. R. Hamilton (1805-1865) established the use of differential equations in analyzing the motion of dynamical systems. ([26])


1868
J. C. Maxwell analyzed the stability of Watt’s flyball governor. ([26])


1877
E. J. Routh provided a numerical technique for determining when a characteristic equation has stable roots. ([193])
I. I. Vishnegradsky analyzed the stability of regulators using differential equations independently of Maxwell. ([194])
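Routh's first-column criterion is easy to state algorithmically. A minimal pure-Python sketch (the special zero-pivot cases of the full tabulation are not handled, and the test polynomials are illustrative):

```python
def routh_stable(coeffs):
    """Return True if all roots of the polynomial (coefficients in descending
    powers, leading coefficient positive) lie in the open left half-plane,
    via the first-column test on the Routh array."""
    n = len(coeffs) - 1
    # First two rows of the array: even- and odd-indexed coefficients.
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    for _ in range(n - 1):
        prev, cur = rows[-2], rows[-1]
        if abs(cur[0]) < 1e-12:
            return False  # zero pivot: marginal/unstable (special cases not handled)
        new = []
        for j in range(len(prev) - 1):
            a = prev[j + 1]
            b = cur[j + 1] if j + 1 < len(cur) else 0.0
            # Standard Routh cross-multiplication rule.
            new.append((cur[0] * a - prev[0] * b) / cur[0])
        rows.append(new)
    # Stable iff every first-column entry is positive.
    return all(r[0] > 0 for r in rows)
```

For a cubic s^3 + a1 s^2 + a2 s + a3 this reduces to the familiar condition a1*a2 > a3 (with all coefficients positive).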


1892
A. M. Lyapunov studied the stability of nonlinear differential equations using a generalized notion of energy. ([1], [26])


1892-1898
O. Heaviside invented operational calculus and studied the transient behavior of systems, introducing a notion equivalent to that of the transfer function. ([26])


1893
A. B. Stodola studied the regulation of a water turbine using the techniques of Vishnegradsky. ([26])


1895
A. Hurwitz solved independently the problem of determining the stability of the characteristic equation. ([195])


1920-1939
Frequency-domain approaches, building on the earlier mathematical work of P. S. de Laplace (1749-1827), J. Fourier (1768-1830), and A. L. Cauchy (1789-1857), were developed at Bell Telephone Laboratories and explored and used in communication systems. ([26])


1922
N. Minorsky introduced his three-term controller for the steering of ships, thereby becoming the first to use the PID controller. He considered nonlinear effects in the closed-loop system. ([199])
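Minorsky's three-term idea survives essentially unchanged in digital form. A minimal discrete-time sketch, with hypothetical gains and a hypothetical first-order plant standing in for heading dynamics:

```python
def make_pid(kp, ki, kd, dt):
    """Three-term (PID) controller: u = kp*e + ki*integral(e) + kd*de/dt."""
    state = {"integral": 0.0, "prev_e": 0.0}
    def step(e):
        state["integral"] += e * dt
        deriv = (e - state["prev_e"]) / dt
        state["prev_e"] = e
        return kp * e + ki * state["integral"] + kd * deriv
    return step

# Hypothetical first-order plant x' = -x + u, Euler-discretized with step dt.
dt = 0.01
pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=dt)
x, setpoint = 0.0, 1.0
for _ in range(5000):
    u = pid(setpoint - x)
    x += dt * (-x + u)
# The integral term drives the steady-state error to zero.
```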


1927
H. S. Black demonstrated the usefulness of negative feedback. ([196])


1932
H. Nyquist developed Regeneration Theory for the design of stable amplifiers and derived his Nyquist Stability Criterion based on the polar plot of a complex function. ([197])


1938
H.W. Bode used the magnitude and phase frequency response plot of a complex function and investigated closed-loop stability using the notions of phase and gain margin. ([198])


1941
A. C. Hall recognized the deleterious effects of ignoring noise in control system design. ([200])
A. N. Kolmogorov provided a theory for discrete time stationary stochastic processes. ([203])


1942
N. Wiener analyzed information processing systems using models of stochastic processes and developed a statistically optimal filter for stationary continuous-time signals that improved the signal-to-noise ratio in a communication system while working in the frequency domain. ([159])


1945-1955
The first textbooks on Control Theory appeared which discussed straightforward design tools and provided great insight and guaranteed solutions to design problems. ([204], [205], [206], [207], [208])


1947
N. B. Nichols developed his Nichols chart for the design of feedback systems. ([26], [201])


1948
W. R. Evans presented his root locus technique, which provided a direct way to determine the closed-loop pole locations in the s-plane. ([202])


1950’s
C. E. Shannon, at Bell Labs, revealed the importance of sampled data techniques in the processing of signals. ([191])


1952
J. R. Ragazzini, G. Franklin, and L. A. Zadeh developed the theory of sampled data systems. ([214], [215])


1955
Tsypkin used the phase plane for nonlinear controls design. ([7], [8])


1957
R. Bellman applied dynamic programming to the optimal control of discrete-time systems, demonstrating that the natural direction for solving optimal control problems is backwards in time. His procedure resulted in closed-loop, generally nonlinear, feedback schemes. ([209])
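Bellman's backward recursion can be illustrated on a scalar linear-quadratic problem, where it reduces to the Riccati difference equation and yields a time-varying linear feedback. This is a sketch with illustrative numbers, not Bellman's general procedure:

```python
def lq_backward(a, b, q, r, p_final, N):
    """Backward dynamic programming for the scalar LQ problem
    x_{k+1} = a x_k + b u_k, cost sum(q x^2 + r u^2) + p_final * x_N^2.
    The value function stays quadratic, V_k(x) = p_k x^2, and minimizing
    over u gives a linear feedback u_k = -g_k x_k (a closed-loop scheme)."""
    p, gains = p_final, []
    for _ in range(N):                              # recurse backwards in time
        g = a * b * p / (r + b * b * p)             # optimal gain at this stage
        p = q + p * (a - b * g) ** 2 + r * g * g    # Riccati difference equation
        gains.append(g)
    gains.reverse()                                 # reorder for forward use
    return gains

gains = lq_backward(a=1.2, b=1.0, q=1.0, r=1.0, p_final=1.0, N=50)
x = 5.0                                 # open-loop unstable (a > 1) ...
for g in gains:
    x = (1.2 - 1.0 * g) * x             # ... but regulated by the feedback
```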


1958
L. S. Pontryagin developed his maximum principle, which solved optimal control problems by building on the calculus of variations developed by L. Euler (1707-1783). He solved the minimum-time problem, deriving an on/off relay control law as the optimal control. ([210])


1960
R. E. Kalman and J. E. Bertram publicized the vital work of Lyapunov in the time domain control of nonlinear systems. ([211])
R. E. Kalman discussed the optimal control of systems, providing the design equations for the Linear Quadratic Regulator (LQR). ([88])
R. E. Kalman developed optimal filtering and estimation theory, providing the design equations for the discrete Kalman Filter. ([81])
E. I. Jury advanced the theory of sampled data systems. ([216])
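The discrete Kalman filter's predict/update cycle can be sketched in the scalar case; the noise levels below are illustrative, and the general matrix form follows the same pattern:

```python
import random

def kalman_filter(ys, a, qn, rn, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_{k+1} = a x_k + w_k, y_k = x_k + v_k,
    with process noise variance qn and measurement noise variance rn."""
    x, p, estimates = x0, p0, []
    for y in ys:
        x, p = a * x, a * a * p + qn      # predict
        k = p / (p + rn)                  # Kalman gain
        x = x + k * (y - x)               # update with measurement y
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Simulate a slowly varying state observed in heavy noise.
random.seed(0)
truth, ys, x = [], [], 0.0
for _ in range(500):
    x = 0.95 * x + random.gauss(0.0, 0.1)
    truth.append(x)
    ys.append(x + random.gauss(0.0, 1.0))

est = kalman_filter(ys, a=0.95, qn=0.01, rn=1.0)
mse_raw = sum((y - t) ** 2 for y, t in zip(ys, truth)) / len(truth)
mse_kf = sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth)
# The filtered estimate has a much smaller error than the raw measurements.
```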


1961
R. E. Kalman and R. S. Bucy developed the continuous Kalman Filter. ([82])


1963
B. C. Kuo provided analysis and synthesis of sampled-data control systems. ([217])


1964
J. Kudrewicz formalized the use of frequency-domain techniques to systems with simple types of nonlinearities using the describing function approach which relies on the Nyquist criterion. ([26])


1964-1966
G. Zames ([11], [12]), I. W. Sandberg ([13]), K. S. Narendra ([212]), C. A. Desoer ([213]), and others extended the work of Popov and Lyapunov in nonlinear stability.


1966
G. Zames presented the small gain theorem. ([11], [12])


1970
K. J. Åström established the importance of digital controls in process applications. ([218], [219])


1970’s
H. H. Rosenbrock ([190]), A. G. J. MacFarlane and I. Postlethwaite ([189]) initiated a great deal of activity to extend classical frequency-domain techniques and the root locus to multivariable systems.


1976
D. Youla, H. Jabr, and J. Bongiorno introduced the parameterization of all stabilizing controllers for a particular system in a very effective manner. ([184], [185])


1981
J. Doyle and G. Stein ([188]), M. G. Safonov, A. J. Laub, and G. L. Hartmann ([188]) showed the importance of the singular value plots versus frequency in robust multivariable design. Using these plots, many of the classical frequency-domain techniques can be incorporated into modern design.


1986
M. Athans pursued the importance of the singular value plots versus frequency in robust multivariable design in aircraft and process control. ([186])

Adaptive Control
Mid-1950’s
Interest in adaptive control grew significantly, with flight control a major driving force ([63]).


1950’s
The idea of ignoring uncertainty and treating estimates as true values is the so-called certainty equivalence principle ([72]). See [97] for a discussion of the relationship between the separation principle and the certainty equivalence principle. It should be mentioned that the certainty equivalence principle was discussed in the economics literature in the late 1950s ([98]).


1951
A self-optimizing controller was proposed for driving a combustion engine toward optimal working conditions, and the design was successfully flight tested ([71]). This ushered in a new era in the field of control.


1957-1961
A major step forward in the direction of formulating optimization problems to obtain adaptive controllers was taken by the development of dynamic programming [73]. The application of such methods to adaptive control is discussed in [74].


1958
The model reference adaptive control was introduced and used to solve the flight control problems ([64], [65]).


1965
Experiments and simulations of model reference adaptive control indicated that there could be problems with instability specifically if the adaptive gain was too large. The stability issues of such systems were first approached using Lyapunov theory ([66]).


1966
In attempts to replace the MIT-rule by other parameter adjustment rules which ensured stability, it was shown in [67] that one could achieve stability if all state variables were measured.


1970’s-80’s
Much work on the self-tuning regulator was carried out in the 1970's and 1980’s, e.g. see [70].


1980
The solution to the flight control problem was given by gain scheduling, not by adaptive control ([68]).


1980
Process control was instrumental in the development of the self-tuning regulator, which was first proposed in [69].


Filtering and Stochastic Control
A comprehensive survey of linear filtering theory can be found in [75].
1950
H. W. Bode and C. E. Shannon proposed the solution to the problem of prediction and smoothing ([76]). A modern account of the solution can be found in [77] and more detailed treatment of the ideas are presented in [78] [79].


1960
R. E. Kalman ([81], [82], [83]) made explicit that an effective solution to the Wiener-Hopf equation using the method of spectral factorization ([80]) could be obtained when the continuous process had a rational spectral density.
Stratonovich derived the conditional density equation using the so-called Stratonovich calculus ([111]).


1960-63
The theory of optimal stochastic control in the fully observable case is quite similar to that of non-linear filtering, in connection with the linear quadratic stochastic control problem ([79]). Early works in this area are due to Howard ([121]), Florentin ([91]), and Fleming ([122]); see also [123].


1960-64
Inspired by the development of Dynamic Programming by Bellman ([85]) and the ideas of Caratheodory ([86]) related to Hamilton-Jacobi Theory, the development of optimal control of nonlinear dynamical systems took place ([87], [88]), see [89], [84], [90] for further details of the ideas.


1961-1973
The solution to quadratic cost optimal control for linear stochastic dynamical systems was provided by Florentin ([91], [92]), by Joseph in discrete-time ([93]), and by Kushner ([94]). The definitive treatment of the problem was proposed by Wonham ([95]), see also [96].


1962
The partially observable stochastic control problem was treated by Florentin ([92]), Davis and Varaiya ([125]), and Fleming and Pardoux ([126]). Detailed discussions can be found in [127] and the references therein.


1964
For a good discussion on the distinction between open-loop stochastic control and feedback control see [99].


1965
Non-linear filters are almost always infinite dimensional, and there are only a few known examples where the filter is finite dimensional. The Kalman filter is one example; other finite-dimensional cases are first discussed in [104], [114] and also [112], [113].


1967-79
A difficulty is that one of the fundamental equations of non-linear filtering turns out to be a non-linear stochastic partial differential equation ([79]). Zakai ([105]), Duncan ([106]), and Mortensen ([107]) proposed alternative formulations that involve a linear stochastic partial differential equation instead.


1971
Girsanov introduced the idea of measure transformation in stochastic differential equations; see [79], [110], [115] and the references therein for details.


1971-72
The earlier ideas of nonlinear filtering were developed by Frost and Kailath ([100]), and put in definitive form by Fujisaki, Kallianpur, and Kunita ([101]).


1976
Bobrovsky and Zakai proposed a method for obtaining lower bounds on the mean-squared error ([119]).


1978
As an attempt to address some of the issues with non-linear filtering, pathwise non-linear filtering was considered where the filter depends continuously on the output ([117], [118]).


1996
The Linear Quadratic Gaussian methodology and optimal non-linear stochastic control have found a wide variety of applications in aerospace, multi-variable control design systems, finance, etc. ([115], [128]).


H-Infinity Control
1970’s
Robust Control aimed to blend the best of the classical methods of the 1940s-50s with the more sophisticated modern theory of the 1960s-70s ([27]).


1976-1981
G. Zames introduced the theory of H-infinity control. He formulated a basic feedback problem as an optimization problem with an operator norm, in particular, an H-infinity norm ([34], [35], [36]).
Related contemporaneous works in the theory of H-infinity control are those of J.W. Helton ([37]) and A. Tannenbaum ([38]).


1981
The input-output mapping for a standard feedback system ([27]) has four transfer functions and it is said to be internally stable if these four transfer functions are all in H-infinity. Internal stability is robust if it is preserved under perturbation of the plant model. J.C. Doyle and G. Stein showed that internal stability is preserved under special perturbations ([39]). This led to the robust stability design problem.
G. Zames postulated that measuring performance in terms of the infinity-norm rather than the traditional 2-norm (LQG) might be much closer to the practical needs. This ushered in the era of H-infinity optimal control. The H-infinity control problem synthesizes a controller which guarantees the stability of the closed-loop and minimizes the L2 induced gain from exogenous inputs to regulated outputs ([27], [33]).
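For a stable SISO transfer function, the infinity-norm is the peak of |G(jw)| over all frequencies, and it equals the L2-induced gain from input to output. A crude sketch via a frequency sweep, with arbitrary grid bounds and resolution and an illustrative example system:

```python
def hinf_norm(G, w_min=1e-3, w_max=1e3, n=20000):
    """Peak of |G(jw)| over a log-spaced frequency grid (plus DC)."""
    peak = abs(G(0j))
    for i in range(n + 1):
        w = w_min * (w_max / w_min) ** (i / n)   # logarithmic frequency grid
        peak = max(peak, abs(G(1j * w)))
    return peak

# Lightly damped second-order system G(s) = 1/(s^2 + 0.2 s + 1): the
# resonance near w = 1 dominates, so the infinity-norm is about 5.03
# even though the DC gain is only 1.
G = lambda s: 1.0 / (s * s + 0.2 * s + 1.0)
gain = hinf_norm(G)
```

This illustrates why an infinity-norm objective penalizes worst-case (resonant) amplification that a 2-norm objective may average away.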


1981-1992
Relations between H-infinity and many other topics can be found in areas like Risk Sensitive Control (P. Whittle) ([56], [57]); Differential Games (T. Basar, P. Bernhard, D. J. N. Limebeer, B. D. O. Anderson, P. P. Khargonekar, M. Green) ([33], [58], [60]); J-lossless Factorization (M. Green) ([59]); Maximum Entropy methods (H. Dym and I. Gohberg) ([61], [62]).


1982
John C. Doyle argued that model uncertainty is often described very effectively in terms of norm-bounded perturbations. For these perturbations and the H-infinity performance objective he developed a powerful tool, the structured singular value, for testing robust stability ([42], [43]).


1984
John C. Doyle presented the first solution to a general MIMO H-infinity optimal control problem ([47]), which relies on state-space methods.
K. Glover tackled the problem of model reduction and presented a solution using the Hankel norm of the error, with an explicit algorithm for state-space LTI systems ([48]).


1987
B. A. Francis and J. C. Doyle presented a summary of the theory of H-infinity Control in [40].
B. A. Francis gave a detailed treatment of the theory of H-infinity Control in [41]. He developed an operator-theoretic approach to the H-infinity control problem.
B. A. Francis and John C. Doyle gave a modified solution to the general rational MIMO H-infinity optimal problem, which suffered from the high order of the Riccati equations ([49], [50]).


1988
D. J. N. Limebeer, G. D. Halikias and Y. S. Hung showed that a subsequent minimal realization of the controller has state dimension no greater than that of the plant. This suggested the likely existence of similarly low dimension optimal controllers in the general two-by-two case ([50], [51], [52]).
Simple state space H-infinity controller formulae were first announced by K. Glover and J. Doyle ([53]).


1988-1990
P. P. Khargonekar, I. R. Petersen, M. A. Rotea, and K. Zhou showed that for the state feedback H-infinity problem one can choose a constant gain as a sub-optimal controller, and a formula for the state-feedback gain matrix was given in terms of an algebraic Riccati equation. They also established connections between H-infinity optimal control, quadratic stabilization, and linear-quadratic differential games and showed that the state-feedback H-infinity problem can be solved by solving an algebraic Riccati equation and completing the square ([54], [55]).


1989
John C. Doyle, K. Glover, P. P. Khargonekar, and B. A. Francis developed state-space procedures for solving the H-infinity problem ([32]).
K. Glover and D. C. McFarlane introduced the H-infinity loop-shaping design method which provides systematic procedures for obtaining sensible controllers that meet performance objectives and guarantee robustness against model uncertainty and unmeasured disturbances ([27], [29], [31]).
Most of the solution techniques for the H-infinity control problem were in an input-output setting and involved analytic functions (Nevanlinna-Pick interpolation ([27])) or operator theoretic methods ([44], [45], [46]).


Linear Matrix Inequality (LMI)
1890
A. M. Lyapunov published his seminal work, now known as Lyapunov theory ([1]). This is usually referred to as the first appearance of the Linear Matrix Inequality (LMI) in control theory; the Lyapunov LMI can be solved analytically via the Lyapunov equation.
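For a 2x2 system, the Lyapunov equation A'P + PA = -Q is three linear equations in the entries of symmetric P, so it can indeed be solved by hand or analytically. A small pure-Python sketch (the example matrix is illustrative):

```python
def solve3(M, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def lyapunov_2x2(A, Q):
    """Solve A'P + PA = -Q for symmetric P = [[p1, p2], [p2, p3]]."""
    (a, b), (c, d) = A
    # Expanding A'P + PA entrywise gives three linear equations in (p1, p2, p3):
    M = [[2 * a, 2 * c, 0.0],        # (1,1): 2a*p1 + 2c*p2            = -q11
         [b, a + d, c],              # (1,2): b*p1 + (a+d)*p2 + c*p3   = -q12
         [0.0, 2 * b, 2 * d]]        # (2,2): 2b*p2 + 2d*p3            = -q22
    p1, p2, p3 = solve3(M, [-Q[0][0], -Q[0][1], -Q[1][1]])
    return [[p1, p2], [p2, p3]]

# Stable A (eigenvalues -1 and -2); the resulting P is positive definite,
# certifying stability in the Lyapunov sense.
P = lyapunov_2x2([[0.0, 1.0], [-2.0, -3.0]], [[1.0, 0.0], [0.0, 1.0]])
```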


1940’s
A. I. Lur’e and V. N. Postnikov were first to apply Lyapunov’s methods to some specific practical problems in control engineering, specifically the problem of stability of a control system with nonlinearity in the actuator. Small LMIs were solved by hand ([2], [3]).


Early 1960’s
R. E. Kalman, V. A. Yakubovich and V. M. Popov managed to reduce the solution of the LMIs that arose in the problem of Lur’e to simple graphical criteria known as Kalman-Yakubovich-Popov (KYP) lemma ([4], [5], [6]).


1964
The KYP lemma resulted in the Popov criterion, the circle criterion, the Tsypkin criterion ([7], [8]), and many variations.


1962-1965
V. A. Yakubovich published many papers ([6], [9], [10]) highlighting the important role of LMIs in control theory; e.g. The solution of certain matrix inequalities in automatic control theory (1962) and The method of matrix inequalities in the stability theory of nonlinear control systems (1965).


Late 1960’s
The Kalman-Yakubovich-Popov (KYP) lemma and extensions were extensively studied and found to be related to the idea of passivity, the small gain criteria introduced by Zames ([11], [12]) and Sandberg ([13], [14], [15]), and quadratic optimal control.


1965
The idea of having a computer search for a Lyapunov function appeared in the literature ([23]).


1970
By then, it was known that the LMI appearing in the KYP lemma could also be solved by solving a certain algebraic Riccati equation ([16]).


Early 1970’s
B. D. O. Anderson and S. Vongpanitlerd noted the difficulty in solving the LMI directly ([17]).


1971
J. C. Willems in a paper on Quadratic Optimal Control pointed out that an LMI problem could be solved by studying the symmetric solutions of a certain Riccati equation ([19]).


1976
H. P. Horisberger and P. R. Belanger observed that the existence of a quadratic Lyapunov function that simultaneously proves stability of a collection of linear systems is a convex problem involving LMIs ([22]).


1982-1983
E. S. Pyatnitskii and V. I. Skorodinskii were perhaps the first to assert that many LMIs arising in control and systems theory can be formulated as convex optimization problems that can be reliably solved by computer, even when no analytical solution is likely to be found. They were the first to formulate the search for a Lyapunov function as a convex optimization problem and then apply an algorithm guaranteed to solve it ([20], [21]).


1984
N. Karmarkar introduced a new linear programming algorithm that solved linear programs in polynomial-time, like the ellipsoid method, but in contrast to the ellipsoid method, was also very efficient in practice ([24]).


1988
Yu. Nesterov and A. Nemirovski developed interior-point methods that apply directly to convex problems involving matrix inequalities, and in particular, to the problems encountered in control theory ([25]).


Nonlinear Control
1883-1892
Lindstedt ([129]) and Poincare ([130]) tackled the problem of finding a limit cycle solution for some of the second-order nonlinear differential equations.


1892
Poincare ([130]) introduced the phase plane method in studying the second-order nonlinear differential equations. This method became the dominant approach and a valuable tool available to control engineers from the late 1930’s.
Following Poincare, many contributions were made to the field of phase plane topology such as information on singular points and the structure of trajectories near them, and conditions for the existence of limit cycles ([134]).


1915
Stability theory for linear differential equations was established around the work of Poincare, but little was done on the general nonlinear case, as Lyapunov's original work was neglected ([135]).
Major research efforts into the effects of nonlinearity in control systems were carried out at MIT, where Bush and his colleagues studied nonlinear differential equations using differential analyzers built with mechanical integrators. These efforts included implementation of various designs by Hazen, including some using relays, with awareness of the performance limitations due to backlash in gears ([136]).
In wartime, the need for accurate fire-control systems led to significant work on servomechanisms in the western world. Phase plane methods and the describing function approach were used to study nonlinear effects ([136]).


1918
Continuing Duffing's seminal work ([133]), various forms of harmonic balance technique were used to study both free and forced oscillations in the second-order nonlinear differential equations ([176]).


1941
Minorsky ([137]) made a brief reference to nonlinear control problems and the possibility of using Lyapunov’s method.


1949-1958
In studying relay systems, it was realized that the output of a relay, once it had switched, became independent of the input. This led Hamel ([170]) and Tsypkin ([171]) to develop techniques for accurate determination of the limit cycle in such systems. Further details can be found in [168], [169], [172].


1949-1958
For detailed discussions of this topic and the latest developments in nonlinear control, see [177], [178], [179], [180], [181], [182], [183].


1950’s
The phase plane technique was the main focus in studying nonlinear differential equations, with many papers and books appearing during this period ([138], [139], [140], [141], [142]). Different nonlinear effects in specific second-order systems were investigated and came to be understood: torque saturation, nonlinearities in the error channel, backlash, friction, relay control, optimum control using relays, and chattering in relay systems. Detailed coverage of these developments was published; see e.g. [143], [144], [145], [146], [147].
Goldfarb, Dutilh, Oppelt, Kochenburger, and Daniell appear to have independently used the describing function in studying nonlinear differential equations ([148]). The method is identical to a harmonic balance approach in which only the first harmonic is balanced, but it was developed in a way more suitable for feedback control, where nonlinear systems are modeled as interconnected blocks of static nonlinearities and transfer function elements ([176]).
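The describing function is the first-harmonic gain obtained by harmonic balance. A sketch for an ideal relay, where the classical result is N(A) = 4M/(pi*A); the quadrature parameters are arbitrary:

```python
import math

def describing_function(f, A, n=100000):
    """First-harmonic gain of nonlinearity f driven by e(t) = A sin(t):
    N(A) = b1 / A, with b1 the in-phase Fourier coefficient of f(A sin t)."""
    s = 0.0
    for k in range(n):                       # midpoint rule over one period
        t = 2.0 * math.pi * (k + 0.5) / n
        s += f(A * math.sin(t)) * math.sin(t)
    b1 = (2.0 / n) * s                       # b1 = (1/pi) * integral f(A sin t) sin t dt
    return b1 / A

M = 1.0
relay = lambda e: M if e >= 0 else -M        # ideal relay, output +/- M
N = describing_function(relay, A=2.0)
# For the relay this matches the classical value 4*M / (pi*A).
```

In a limit-cycle prediction, one looks for an amplitude A and frequency at which the harmonic-balance condition 1 + N(A)G(jw) = 0 holds on the Nyquist diagram.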


1954
The problem of examining nonlinear systems with random inputs was pioneered by Booton ([160]), who approximated the nonlinearity by a linear gain chosen so that the error between the nonlinearity output and the output of the linear gain, for the same (Gaussian) random input, was minimized. Other related materials and contributions can be found in [161], [162], [164], [165].


1954-1955
The describing function theory was extended to determine the stability of any predicted limit cycle ([150]) and to determine the forced harmonic response of a nonlinear system ([151], [152]).


1956
In attempts to study the occurrence of nonlinear phenomena in control loops, particularly servomechanisms, by extending the describing function, West et al. ([156]) realized that the response of nonlinear elements to two harmonic inputs had to be examined.


1956-58
To avoid limit cycles predicted by the describing function method, a common procedure was to change the open-loop dynamics so that no intersection existed between the loci of the system and the specialized describing function on the Nyquist diagram ([176]). Other alternatives either placed a nonlinearity in series or parallel with the inherent system or used nonlinear integrators for specific problems ([154], [155]).


1957-58
The incremental describing function was used to assess the stability of a limit cycle ([157], [158]).


1926-1943
Van der Pol ([131], [132]) and Krylov and Bogoliubov ([132]) introduced averaging methods for obtaining solutions to the second-order nonlinear differential equations.


References
[1] A. M. Lyapunov. Probleme general de la stabilite du mouvement, volume 17 of Annals of Mathematics Studies. Princeton University Press, Princeton, 1947.

[2] A. I. Lur'e and V. N. Postnikov. On the theory of stability of control systems. Applied Mathematics and Mechanics, 8(3), 1944. In Russian.

[3] A. I. Lur'e. Some Nonlinear Problems in the Theory of Automatic Control. H. M. Stationery Off., London, 1957. In Russian, 1951.

[4] R. E. Kalman. Lyapunov functions for the problem of Lur'e in automatic control. Proceedings of National Academy of Sciences., USA, 49:201-205, 1963.

[5] V. M. Popov. Absolute stability of nonlinear systems of automatic control. Automation and Remote Control, 22:857-875, 1962.

[6] V. A. Yakubovich. The solution of certain matrix inequalities in automatic control theory. Soviet Math. Doklady, 3:620-623, 1962. In Russian, 1961.

[7] Ya. A. Tsypkin. Frequency criteria for the absolute stability of nonlinear sampled data systems. Automatic Remote Control, 25:261-267, 1964.

[8] Ya. Z. Tsypkin. A criterion for absolute stability of automatic pulse systems with monotonic characteristics of the nonlinear element. Soviet Phys. Doklady, 9:263-266, 1964.

[9] V. A. Yakubovich. Solution of certain matrix inequalities encountered in nonlinear control theory. Soviet Math. Dokl., 5:652-656, 1964.

[10] V. A. Yakubovich. The method of matrix inequalities in the stability theory of nonlinear control systems, I, II, III. Automation and Remote Control, 25-26(4):905-917, 577-592, 753-763, April 1967.

[11] G. Zames. On the input-output stability of time-varying nonlinear feedback systems-Part I: Conditions derived using concepts of loop gain, conicity, and positivity. IEEE Transactions on Automatic Control, AC-11:228-238, April 1966.

[12] G. Zames. On the input-output stability of time-varying nonlinear feedback systems-Part II: Conditions involving circles in the frequency plane and sector nonlinearities. IEEE Transactions on Automatic Control, AC-11:465-476, July 1966.

[13] I. W. Sandberg. A frequency-domain condition for the stability of feedback systems containing a single time-varying nonlinear element. Bell Systems Technology Journal, 43(3):1601-1608, July 1964.

[14] I. W. Sandberg. On the boundedness of solutions of non-linear integral equations. Bell Systems Technology Journal, 44:439-453, 1965.

[15] I. W. Sandberg. Some results in the theory of physical systems governed by nonlinear functional equations. Bell Systems Technology Journal, 44:871-898, 1965.

[16] S. Boyd and E. Feron and V. Balakrishnan and L. El Ghaoui. History of linear matrix inequalities in control theory. In Proceedings of American Control Conference, pp. 31-34, 1994.

[17] B. D. O. Anderson and S. Vongpanitlerd. Network Analysis and Synthesis: A Modern Systems Theory Approach. Prentice-Hall, 1973.

[18] J. C. Willems. Least squares stationary optimal control and the algebraic Riccati equation. IEEE Transactions on Automatic Control, AC-16(6):621-634, December 1971.

[19] V. A. Yakubovich. Dichotomy and absolute stability of nonlinear systems with periodically non-stationary linear part. Systems and Control Letters, 1988.

[20] E. S. Pyatnitskii and V. I. Skorodinskii. Numerical methods of Lyapunov function construction and their application to the absolute stability problem. Systems and Control Letters, 2(2):130-135, August 1982.

[21] E. S. Pyatnitskii and V. I. Skorodinskii. Numerical method of construction of Lyapunov functions and absolute stability criteria in the form of numerical procedures. Automation and Remote Control, 44(11):1427-1437, 1983.

[22] H. P. Horisberger and P. R. Belanger. Regulators for linear, time invariant plants with uncertain parameters. IEEE Transactions on Automatic Control, AC-21:705-708, 1976.

[23] G. Schultz, F. T. Smith, H. C. Hsieh, and C. D. Johnson. The generation of Lyapunov functions. In C. T. Leonde, editor, Advances in Control Systems, volume 2, pages 1-64. Academic Press, New York, 1965.

[24] N. Karmarkar. A new polynomial-time algorithm for linear programming. Combinatorica, 4(4):373-395, 1984.

[25] Yu. Nesterov and A. Nemirovsky. Interior point polynomial methods in convex programming: Theory and application. SIAM, 1993.

[26] F. L. Lewis, Introduction to Modern Control Theory, Prentice-Hall, 1992.

[27] K. Zhou, J. C. Doyle and K. Glover, Robust and Optimal Control, Prentice-Hall, 1996.

[28] J. C. Doyle, B. A. Francis and A. R. Tannenbaum, Feedback Control Theory, Macmillan, 1992.

[29] K. Glover, D. McFarlane, "Robust stabilization of normalized coprime factor plant descriptions with H-infinity bounded uncertainty", IEEE Transactions on Automatic Control, 34 (1989), 821-830.

[30] McFarlane, D.C., and K. Glover, "A Loop Shaping Design Procedure using Synthesis," IEEE Transactions on Automatic Control, vol. 37, no. 6, pp. 759- 769, June 1992.

[31] D. C. McFarlane and K. Glover, Robust Controller Design Using Normalized Coprime Factor Plant Descriptions, Lecture Notes in Control and Information Science, No. 138, Springer-Verlag, Berlin, 1989.

[32] J. Doyle, K. Glover, P. Khargonekar, and B. Francis, "State-space solutions to standard H2 and H-infinity control problems," IEEE Transactions on Automatic Control, vol. 34, pp. 831-846, Aug. 1989.

[33] M. Green and D. J. N. Limebeer, Linear Robust Control. Englewood Cliffs, NJ: Prentice-Hall, 1995.

[34] G. Zames, "Feedback and complexity, Special plenary lecture addendum", IEEE Conf. Decision Control, 1976.

[35] G. Zames, "Optimal sensitivity and feedback: weighted seminorms, approximate inverses, and plant invariant schemes" , Proc. Allerton Conf. , 1979.

[36] G. Zames, "Feedback and optimal sensitivity: model reference transformations, multiplicative seminorms, and approximate inverses" IEEE Transactions on Automatic Control, AC-26, 1981, pp. 301–320.

[37] J. W. Helton, "Operator theory and broadband matching", Proc. Allerton Conf. , IEEE,1979.

[38] A. Tannenbaum, "On the blending problem and parameter uncertainty in control theory", Technical Report Dept. Math. Weizmann Institute, 1977.

[39] J. C. Doyle, G. Stein, "Multivariable feedback design: concepts for a classical modern synthesis", IEEE Transactions on Automatic Control , AC-26, 1981, pp. 4–16.

[40] B. A. Francis, J. C. Doyle, "Linear control theory with an H-infinity optimality criterion", SIAM J. Control and Opt., 25, 1987, pp. 815–844.

[41] B.A. Francis, "A course in H-infinity control theory", Lecture Notes in Control and Information Science, 88, Springer, 1987.

[42] Packard, A.K., M. Fan and J. Doyle, "A power method for the structured singular value," Proc. of IEEE Conference on Control and Decision, December 1988, pp. 2132-2137.

[43] J. Doyle. Analysis of feedback systems with structured uncertainties. Proc. IEE, 129:242-250, 1982.

[44] D. Sarason, Generalized interpolation in H-infinity, Transactions of the American Math. Society, 127 (1967), 179-203.

[45] V.M. Adamjan, D.Z. Arov, and M.G. Krein, Infinite block Hankel matrices and related extension problems, Transactions of the American Math. Society, 111: 133-156, 1978.

[46] J. A. Ball and J. W. Helton. A Beurling-Lax theorem for the Lie group U(m,n) which contains most classical interpolation theory. Journal of Operator Theory, 9:107-142, 1983.

[47] J. C. Doyle, Lecture Notes in Advances in Multivariable Control, ONR/Honeywell Workshop, Minneapolis, MN, 1984.

[48] K. Glover. All optimal Hankel-norm approximations of linear multivariable systems and their L-infinity error bounds. International Journal of Control, 39:1115-1193, 1984.

[49] B. A. Francis, A course in H-infinity control theory, Lecture Notes in Control and Information Sciences, vol. 88, 1987.

[50] B. A. Francis and J. C. Doyle, “Linear control theory with an H-infinity optimality criterion”, SIAM J. Control Opt., vol. 25, pp. 815-844, 1987.

[51] D. J. N. Limebeer and G. D. Halikias, "A controller degree bound for H-infinity-optimal control problems of the second kind," SIAM J. Control Opt., vol. 26, no. 3, pp. 646-677, 1988.

[52] D. J. N. Limebeer and Y. S. Hung, "An analysis of pole-zero cancellations in H-infinity-optimal control problems of the first kind," SIAM J. Control Opt., vol. 25, pp. 1457-1493, 1987.

[53] K. Glover and J. Doyle. “State-space formulae for all stabilizing controllers that satisfy an H-infinity norm bound and relations to risk sensitivity,” Systems and Control Letters, vol. 11, pp. 167-172, (1988).

[54] P. P. Khargonekar, I. R. Petersen, and M. A. Rotea, “H-infinity optimal control with state feedback,” IEEE Transactions on Automatic Control, vol. AC-33, 1988.

[55] P. P. Khargonekar, I. R. Petersen, and K. Zhou. “Robust stabilization and H-infinity optimal control,” IEEE Trans. Auto. Contr., Vol. 35, No. 3, pp. 356-361, 1990.

[56] P. Whittle, "Risk-sensitive Linear/Quadratic/Gaussian control," Adv. Appl. Prob., 13, pp. 764-777, 1981.

[57] P. Whittle, "A risk-sensitive maximum principle," Systems and Control Letters, vol. 15, pp. 183-192, 1990.

[58] T. Basar and P. Bernhard, H-infinity Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach, Birkhäuser, Boston, 1991.

[59] M. Green, H-infinity controller synthesis by J-lossless coprime factorization, SIAM Journal on Control and Opt., Vol. 30, pp. 522-547, 1992.

[60] D. J. N. Limebeer, B. D. O. Anderson, P. P. Khargonekar, and M. Green, "A game theoretic approach to H-infinity control for time-varying systems," SIAM J. Control Optim., vol. 30, no. 2, 1992.

[61] H. Dym and I. Gohberg, “A maximum entropy principle for contractive interpolants”, J. Functional Analysis, 65, (1986), pp. 83–125.

[62] D. Mustafa and K. Glover, Minimum entropy H-infinity control, Lecture Notes in Control and Information Sciences, Springer-Verlag, 1990.

[63] J. A. Aseltine, A. R. Mancini, and C. W. Sarture, "A survey of adaptive control systems," IRE Transactions on Automatic Control, PGAC-3, pp. 102-108, 1958.

[64] H. P. Whitaker, An Adaptive System for Control of the Dynamics Performances of Aircraft and Spacecraft, Inst Aeronautical Services, Paper 59-100, 1959.

[65] H.P. Whitaker, J. Yamron, and A. Kezer, "Design of model-reference adaptive control systems for aircraft", Report R-16, Instrumentation Lab., MIT, 1958.

[66] R. L. Butchart and B. Shackcloth, "Synthesis of Model Reference Adaptive Systems by Lyapunov's Second Method," Proc. IFAC Symposium on Adaptive Control, pp. 145-152, 1965.

[67] P. C. Parks, "Lyapunov redesign of model reference adaptive control systems," IEEE Transactions on Automatic Control, vol. 11, pp. 362-367, 1966.

[68] G. Stein, Adaptive Flight Control: A Pragmatic View, Applications of Adaptive Control (K.S. Narendra and R.V. Monopoli, eds), New York: Academic Press, 1980.

[69] R. E. Kalman, "Design of a Self-Optimizing Control System," Transactions of ASME, pp. 468-478, January 1958.

[70] K. J. Astrom and B. Wittenmark, Adaptive control (2nd Ed.), Addison-Wesley, 1995.

[71] C. S. Draper and Y. T. Li, "Principles of optimalizing control systems and an application to an internal combustion engine," ASME Publications, September 1951.

[72] H. A. Simon, "Dynamic programming under uncertainty with a quadratic criterion function," Econometrica, 24, pp. 74-81, 1956.

[73] R. Bellman. Dynamic Programming. Princeton, NJ, Princeton University Press, 1957.

[74] R. Bellman. Adaptive Control Processes: A Guided Tour. Princeton University Press, 1961.

[75] T. Kailath. A View of Three Decades of Linear Filtering Theory, IEEE Transactions on Information Theory, IT-20, pp.146-181, No.2, March 1974.

[76] H. W. Bode and C. E. Shannon, A Simplified Derivation of Linear Least Square Smoothing and Prediction Theory, Proc. IRE, Vol. 38, pp. 417-425, April 1950.

[77] T. Kailath, "An Innovations Approach to Least Squares Estimation - Part I: Linear Filtering in Additive White Noise," IEEE Transactions on Automatic Control, vol. AC-13, pp. 646-655, December 1968.

[78] M. H. A. Davis, Linear Estimation and Stochastic Control, Chapman and Hall, London, 1977.

[79] E. Wong, Stochastic Processes in Information and Dynamical Systems, McGraw Hill, New York, 1971.

[80] D. C. Youla, On the Factorization of Rational Matrices, IRE Transactions on Information Theory, IT-7, pp. 172-189, 1961.

[81] R. E. Kalman, A New Approach to Linear Filtering and Prediction Problems, ASME Transactions, Part D (Journal of Basic Engineering), 82, pp. 35-45, 1960.

[82] R. E. Kalman and R. S. Bucy, "New Results in Linear Filtering and Prediction Theory," ASME Trans., Part D (Journal of Basic Engineering), pp. 95-108, 1961.

[83] R. E. Kalman, "New Methods of Wiener Filtering Theory," in Proc. 1st Symposium, Engineering Applications of Random Function Theory and Probability, J. L. Bogdanoff and F. Kozin, Eds., New York, Wiley, pp. 270-385, 1963.

[84] W. M. Wonham. Linear Multivariable Control, Springer-Verlag, New York 1985.

[85] R. E. Bellman, Dynamic Programming, Princeton Univ. Press, Princeton, N.J., 1957.

[86] C. Caratheodory, Variationsrechnung und Partielle Differentialgleichungen Erster Ordnung, Teubner, Leipzig, 1935.

[87] C. W. Merriam, Optimization Theory and the Design of Feedback Control Systems, McGraw Hill, New York, 1964.

[88] R. E. Kalman, Contributions to the Theory of Optimal Control, Bol. Soc. Mat. Mex., Vol. 5, pp. 102-119, 1960.

[89] R. W. Brockett, Finite Dimensional Linear Systems, New York, Wiley, 1970.

[90] A. E. Bryson and Y. C. Ho, Applied Optimal Control, New York: Hemisphere Publishing Company, 1969 (first edition), 1979 (2nd edition).

[91] J. J. Florentin, Optimal Control of Continuous Time, Markov, Stochastic Systems, J. of Electronics and Control, 10, pp. 473-488, 1961.

[92] J. J. Florentin, Partial Observability and Optimal Control, J. of Electronics and Control, Vol. 13, pp. 263-279, 1962.

[93] P. D. Joseph and J. T. Tou, "On Linear Control Theory," AIEE Transactions, 80 (II), pp. 193-196, 1961.

[94] H. J. Kushner, Optimal Stochastic Control, IRE Transactions on Automatic Control, pp. 120-122, October 1962.

[95] W. M. Wonham, Random Differential Equations in Control Theory, Probabilistic Methods in Applied Mathematics, Vol. 2, A. T. Bharucha-Reid, ed., Academic Press, New York, pp. 131-212, 1970.

[96] A. Lindquist, "On Feedback Control of Linear Stochastic Systems," SIAM J. Control, Vol. 11, no. 2, pp. 323-343, 1973.

[97] H. S. Witsenhausen, "Separation of Estimation and Control for Discrete-time Systems," Proc. IEEE, 59, pp. 1557-1566, 1971.

[98] C. C. Holt, F. Modigliani, J. F. Muth, and H. A. Simon, Planning Production, Inventories and Work Force, Prentice Hall, 1960.

[99] S. E. Dreyfus, "Some Types of Optimal Control of Stochastic Systems," SIAM J. Control, Vol. 2, no. 1, pp. 120-134, 1964.

[100] P. Frost and T. Kailath, An Innovations Approach to Least Squares Estimation –Part III: Nonlinear Estimation in White Gaussian Noise, IEEE Transactions on Automatic Control, AC-16, pp. 217-226, June 1971.

[101] M. Fujisaki, G. Kallianpur, H. Kunita, Stochastic Differential Equations for the Nonlinear Filtering Problem, Osaka J. of Math., 9, pp. 19-40, 1972.

[102] D. F. Allinger and S. K. Mitter, "New Results on the Innovations Problem of Nonlinear Filtering," Stochastics, 4, pp. 339-348, 1981.

[103] H. J. Kushner, "On Differential Equations Satisfied by Conditional Probability Densities of Markov Processes," SIAM J. Control, Vol. 2, pp. 106-119, 1964.

[104] W. M. Wonham, Some Applications of Stochastic Differential Equations to Optimal Nonlinear Filtering, SIAM J. Control, 2, pp. 347-369, 1965.

[105] M. Zakai, On the Optimal Filtering of Diffusion Processes, Z Wahr Verw Geb., 11, pp. 230-243, 1969.

[106] T. E. Duncan, Probability Densities for Diffusion Processes with Applications to Nonlinear Filtering Theory and Detection Theory, Ph.D. dissertation, Stanford University, 1967.

[107] R. E. Mortensen, Doctoral Dissertation, Univ. of California, Berkeley, 1967.

[108] S. K. Mitter, "On the Analogy Between Mathematical Problems of Nonlinear Filtering and Quantum Physics," Ricerche di Automatica, Vol. 10, no. 2, pp. 163-216, 1979.

[109] M. H. A. Davis and S. J. Marcus, An Introduction to Nonlinear Filtering, Stochastic Systems: The Mathematics of Nonlinear Filtering and Identification and Applications, eds. M. Hazewinkel and J. C. Willems, Reidel, Dordrecht, 1981.

[110] R. S. Liptser and A. N. Shiryayev, Statistics of Random Processes I, General Theory, Springer-Verlag, New York, 1977.

[111] R. L. Stratonovich, Conditional Markov Process Theory, Theory Prob. Appl. (USSR), Vol. 5, pp.156-178, 1960.

[112] R. W. Brockett, Remarks on Finite Dimensional Nonlinear Estimation, Analyse des Systemes, Asterisque, 75-76, 1980.

[113] S. K. Mitter, Filtering Theory and Quantum Fields, Asterisque 75-76 Analyse des Systems, Bordeaux, September 11-16, 1978. Societe Mathematique de France, 1980, pp. 199-205.

[114] V. E. Benes, Exact Finite Dimensional Filters for Certain Diffusions with Nonlinear Drift, Stochastics, 5, pp. 65-92, 1981.

[115] S. K. Mitter, Filtering and stochastic control: a historical perspective, Control Systems Magazine, IEEE, vol.16, no.3, pp.67-76, Jun 1996.

[116] M. Hazewinkel, S. I. Marcus, and H. J. Sussmann, "Nonexistence of Exact Finite Dimensional Filters for Conditional Statistics of the Cubic Sensor Problem," Systems and Control Letters, 5, pp. 331-340, 1983.

[117] J. M. C. Clark, "The Design of Robust Approximations to the Stochastic Differential Equations of Nonlinear Filtering," in Communication Systems and Random Process Theory, J. Skwirzynski, ed., Alphen aan den Rijn, The Netherlands: Sijthoff and Noordhoff, 1978, pp. 721-734.

[118] M. H. A. Davis, On a Multiplicative Transformation Arising in Non-linear Filtering, Z. Wahrschein-Verw. Geb., 54, pp. 125-139, 1981.

[119] B. Z. Bobrovsky and M. Zakai, A Lower Bound on the Estimation Error for Certain Diffusion Processes, IEEE Transactions on Information Theory, IT-22, pp. 45-52, 1976.

[120] E. Pardoux, "Filtrage Non Linéaire et Équations aux Dérivées Partielles Stochastiques Associées," École d'Été de Probabilités de Saint-Flour XIX, ed. P. L. Hennequin, Springer Lecture Notes in Mathematics 1464, 1991.

[121] R. A. Howard, Dynamic Programming and Markov Processes, Wiley, New York, 1960.

[122] W. H. Fleming, "Some Markovian Optimization Problems," J. Math. and Mech., 12 (1), pp. 131-140, 1963.

[123] W. H. Fleming and R. W. Rishel, Deterministic and Stochastic Optimal Control, Springer-Verlag, New York, 1975.

P. Whittle, Risk-Sensitive Optimal Control, Wiley, New York, 1990.

[124] A. Bensoussan and J. H. Van Schuppen, Optimal Control of Partially Observable Stochastic Systems with an Exponential of Integral Performance Index, SIAM J. on Control and Optimization, 23 (4), 1985.

[125] M. H. A. Davis and P. P. Varaiya, "Dynamic Programming Conditions for Partially Observable Stochastic Systems," SIAM J. Cont. and Opt., 11, pp. 226-261, 1973.

[126] W. H. Fleming and E. Pardoux, "Optimal Control for Partially Observed Diffusions," SIAM J. on Control and Optimization, 20, pp. 261-285, 1981.

[127] V. Borkar, Optimal Control of Diffusion Processes, Longman, 1989.

[128] R. Merton, Continuous-Time Finance, Blackwell, Cambridge, MA, 1992.

[129] A. Lindstedt, "Differentialgleichungen der Störungstheorie," Mem. Acad. Imp. St. Petersburg, 31, 1883.

[130] H. Poincaré, Les Méthodes Nouvelles de la Mécanique Céleste, vol. 1, Gauthier-Villars, Paris, 1892.

[131] B. Van der Pol, "On Relaxation Oscillations," Philos. Mag., 7(2), p. 978, 1926.

[132] N. Krylov and N. Bogoliubov, Introduction to Nonlinear Mechanics, Princeton University Press, Princeton, NJ, 1943.

[133] G. Duffing, Erzwungene Schwingungen bei Veränderlicher Eigenfrequenz, Vieweg, Braunschweig, 1918.

[134] D. P. Atherton, "Nonlinear Systems: History," Encyclopedia of Systems and Control, Pergamon Press, pp. 3383-3390, 1987.

[135] A. M. Lyapunov, Obshchaya Zadacha ob Ustoichivosti Dvizheniya (General Problem of the Stability of Motion), Gostekhizdat, Moscow, 1915.

[136] S. Bennett, A History of Control Engineering 1930-1955, IEE Control Engineering Series No. 47, Peter Peregrinus, London, UK, 1993.

[137] N. Minorsky, "Control Problems," J. Franklin Institute, Nov.-Dec., pp. 451-487 and pp. 519-551, 1941.

[138] J. C. West, J. L. Douce, and R. Naylor, "The Effects of Some Nonlinear Elements on the Transient Performance of a Simple R.P.C. System Possessing Torque Limitation," Proc. I.E.E., 101, pp. 156-165, 1954.

[139] R. E. Kalman, "Phase Plane Analysis of Automatic Control Systems with Nonlinear Gain Elements," Trans. A.I.E.E., 73(11), pp. 383-390, 1954.

[140] T. J. Higgins, “A Resume of the Development and Literature of Nonlinear Control System Theory”, Trans. A.S.M.E., 79, pp. 445-449, 1957.

[141] W. H. Pell, "Graphical Solution of Single Degree of Freedom Vibration Problem with Arbitrary Damping and Restoring Forces," Trans. A.S.M.E., 79, 1957, pp. 311-312.

[142] L. S. Jacobsen, "On a General Method of Solving Second Order Differential Equations by Phase Plane Displacements," Trans. A.S.M.E., 74, 1952, pp. 543-553.

[143] A. A. Andronov and C. E. Chaikin, Theory of Oscillations, Princeton University Press, NJ, 1949.

[144] I. Flugge-Lotz, Discontinuous Automatic Control, Princeton University Press, Princeton, NJ, 1953.

[145] Y. H. Ku, Analysis and Control of Nonlinear Systems, Ronald, New York, 1958.

[146] W. J. Cunningham, "Introduction to Nonlinear Analysis," McGraw-Hill, New York, 1958, pp. 36-39.

[147] J. C. West, Analytical Techniques for Nonlinear Control Systems, E.U.P., London, 1960, Chapter 6.

[148] D.P. Atherton, Nonlinear Control Engineering, Van Nostrand Reinhold Co. Ltd., Berks, UK, 1975.

[149] M. A. Aizerman, Lectures on the Theory of Automatic Regulation, Fizmatgiz, Moscow, 1958.

[150] J. M. Loeb, "Recent Advances in Nonlinear Servo Theory," in R. Oldenburger, ed., Frequency Response, Macmillan, New York, pp. 260-268, 1956.

[151] J. C. West and J. L. Douce, "The Frequency Response of a Certain Class of Nonlinear Feedback Systems," Br. J. Appl. Phys., 5, pp. 210-10, 1954.

[152] L. T. Prince, "A Generalized Method for Determining the Closed Loop Frequency Response of Nonlinear Systems," Trans. A.I.E.E., 73, pt. II, pp. 217-224, 1954.

[153] P. E. W. Grensted, "The Frequency-Response Analysis of Non-Linear Systems," Proc. I.E.E., 102C, pp. 244-253, 1955.

[154] J. C. West and M. J. Somerville, "Integral Control with Torque Limitation," Proc. I.E.E., 103, p. 407, 1956.

[155] J. C. Clegg, "A Nonlinear Integrator for Servomechanisms," Trans. A.I.E.E., pt. II, 77, pp. 41-42, 1958.

[156] J. C. West, J. L. Douce, and R. K. Livesley, "The Dual Input Describing Function and Its Use in the Analysis of Nonlinear Feedback Systems," Proc. Inst. Elec. Eng., Part B, 103, pp. 463-474, 1956.

[157] R. Oldenburger, "Signal Stabilization of a Control System," Trans. ASME, 79, pp. 1869-1872, 1957.

[158] Z. Bonenn, "Stability of Forced Oscillations in Nonlinear Feedback Systems," I.R.E. Trans., AC-6, pp. 109-111, 1958.

[159] N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series with Engineering Applications, Wiley, New York, 1949.

[160] R. C. Booton, "Nonlinear Control Systems with Random Inputs," IRE Trans., CT-1, pp. 9-17, 1954.

[161] J. F. Barrett and J. F. Coates, "An Introduction to the Analysis of Non-Linear Control Systems with Random Inputs," Proc. I.E.E., 103C, pp. 190-199, 1955.

[162] P. N. Nikiforuk and J. C. West, "The Describing Function Analysis of a Nonlinear Servomechanism Subject to Stochastic Signals and Noise," Proc. I.E.E., 104C, pp. 193-203, 1957.

[163] A. H. Nuttall, “Theory and Application of the Separable Class of Random Processes,” M.I.T. Res. Lab. Electronics, 1958, Rept. 343.

[164] J. L. Brown, "On a Cross Correlation Property for Stationary Random Processes," Trans. I.R.E., IT-3, pp. 28-31, 1957.

[165] M. J. Somerville and D. P. Atherton, "Multi-Gain Representation for a Single-Valued Nonlinearity with Several Inputs, and the Evaluation of their Equivalent Gains by a Cursor Method," Proceedings IEE, 105C, pp. 537-549, 1958.

[166] R. Oldenburger and R. Sridhar, "Signal Stabilization of a Control System with Random Inputs," Trans. Am. Inst. Elec. Eng., Part 2, 80, pp. 260-267, 1961.

[167] Y. Sawaragi and S. Takahashi, "Response of Control Systems Containing Zero-Memory Non-Linearity to Sinusoidal and Gaussian Inputs," Proc. Heidelberg Conf. Automatic Control, International Federation of Automatic Control, Laxenburg, pp. 271-74, 1956.

[168] A. Gelb and W. E. Van der Velde, Multiple-Input Describing Functions and Nonlinear System Design, McGraw-Hill, New York, 1968.

[169] D. P. Atherton, "Nonlinear Control Engineering: Describing Function Analysis and Design," Van Nostrand Reinhold. London, 627 pp., September 1975; or D.P. Atherton, "Nonlinear Control Engineering," student edition, Van Nostrand, Reinhold, 470 pp., 1982.

[170] B. Hamel, "Contribution à l'Étude Mathématique des Systèmes de Réglage par Tout-ou-rien," CEMV, Service Technique Aéronautique, 17, 1949.

[171] Ya. Z. Tsypkin, Theorie der Relais Systeme der Automatischen Regelung, R. Oldenbourg-Verlag, Munich, 1958.

[172] P. A. Cook, Nonlinear Dynamical Systems, Prentice-Hall, 1986.

[173] F. C. Williams and F. J. U. Ritson, "Electronic Servo Simulators," J.I.E.E., 94 (IIA), pp. 112-124, 1947.

[174] F. R. I. Spearman et al., “TRIDAC: A Large Analogue Computing Machine,” IEE monograph, Paper No. 1899M, October 1955.

[175] J. F. Coales et al., eds., "Theory of Non-Linear Control," Butterworths, London, 1963.

[176] D. P. Atherton, "Early developments in nonlinear control," Control Systems Magazine, IEEE , vol.16, no.3, pp.34-43, June 1996.

[177] M. Vidyasagar, Nonlinear systems analysis, Prentice-Hall, 1978.

[178] S. Sastry, Nonlinear Systems, Springer, 1999.

[179] H. K. Khalil, Nonlinear Systems, Prentice Hall; 3rd Edition, 2001.

[180] H. J. Marquez, Nonlinear Control Systems: Analysis and Design, Wiley-Interscience, 2003.

[181] A. Isidori, Nonlinear Control Systems, Springer; 3rd edition, 1995.

[182] J. J. Slotine and W. Li, Applied Nonlinear Control, Prentice Hall, 1991.

[183] P. D. Christofides, Nonlinear and Robust Control of PDE Systems, Birkhauser, 2001.

[184] D. Youla, J. Jr. Bongiorno and H. Jabr, "Modern Wiener-Hopf design of optimal controllers Part I: The single-input-output case," IEEE Transactions on Automatic Control, vol.21, no.1, pp. 3-13, Feb 1976.

[185] D. Youla, H. Jabr and J. Jr. Bongiorno, "Modern Wiener-Hopf design of optimal controllers--Part II: The multivariable case," IEEE Transactions on Automatic Control, vol.21, no.3, pp. 319-338, Jun 1976.

[186] M. J. Grimble and M. A. Johnson, "H-infinity robust control design - a tutorial review," Computing & Control Engineering Journal, vol. 2, no. 6, pp. 275-282, Nov 1991.

[187] M.G. Safonov, A. J. Laub, and G.L. Hartmann, "Feedback Properties of Multivariable Systems: The Role and Use of the Return Difference Matrix," IEEE Transactions on Automatic Control, vol. 26, no. 1, pp.47-65, 1981.

[188] J.C. Doyle and G. Stein, "Multivariable Feedback Design: Concepts for a Classical/Modern Synthesis," IEEE Transactions on Automatic Control, vol. AC-26, pp. 4-16, Feb. 1981.

[189] A. G. J. MacFarlane, and I. Postlethwaite, "The Generalized Nyquist Stability Criterion and Multivariable Root Loci," International Journal of Control, vol. 25, pp. 81-127, 1977.

[190] H. H. Rosenbrock, Computer-Aided Control System Design, New York: Academic Press, 1974.

[191] A. Gelb, Applied Optimal Estimation, MIT Press, 1994.

[192] K. J. Åström, Introduction to Stochastic Control Theory, New York: Academic Press, 1970.

[193] E. J. Routh, A Treatise on the Stability of a given State of Motion, London, Macmillan & Co., 1877.

[194] I. A. Vyshnegradsky, "On Controllers of Direct Action," Izv. SPB Tekhnolog. Inst., 1877.

[195] A. Hurwitz, "On the Conditions Under Which an Equation Has Only Roots With Negative Real Parts," Mathematische Annalen, vol. 46, pp. 273-284, 1895.

[196] H. S. Black, "Stabilized Feedback Amplifiers," Bell System Technical Journal, 1934.

[197] H. Nyquist, "Regeneration Theory," Bell System Technical Journal, 1932.

[198] H. W. Bode, "Feedback Amplifier Design," Bell System Technical Journal, vol. 19, p. 42, 1940.

[199] N. Minorsky, "Directional Stability and Automatically Steered Bodies," J. Am. Soc. Nav. Eng., vol. 34, p. 280, 1922.

[200] A. C. Hall, "Application of Circuit Theory to the Design of Servomechanisms," J. Franklin Inst., 1946.

[201] H. M. James, N.B. Nichols, and R.S. Phillips, Theory of Servomechanisms, New York: McGraw-Hill, M.I.T. Radiation Lab. Series, Vol. 25, 1947.

[202] W. R. Evans, "Graphical Analysis of Control Systems," Trans. AIEE, vol. 67, pp. 547-551, 1948.

[203] A. N. Kolmogorov, "Interpolation und Extrapolation von stationären zufälligen Folgen," Bull. Acad. Sci. USSR, Ser. Math., vol. 5, pp. 3-14, 1941.

[204] L. A. MacColl, Fundamental Theory of Servomechanisms, New York: Van Nostrand, 1945.

[205] H. Lauer, R. N. Lesnick, and L. E. Matson, Servomechanism Fundamentals, New York: McGraw-Hill, 1947.

[206] G. S. Brown and D. P. Campbell, Principles of Servomechanisms, New York: Wiley, 1948.

[207] H. Chestnut and R. W. Mayer, Servomechanisms and Regulating System Design, vol. 1, 1951, vol. 2, 1955, Wiley.

[208] J. G. Truxal, Automatic Feedback Control System Synthesis, New York: McGraw-Hill, 1955.

[209] R. Bellman, Dynamic Programming, New Jersey: Princeton Univ. Press, 1957.

[210] L. S. Pontryagin, V. G. Boltyansky, R. V. Gamkrelidze, and E. F. Mishchenko, The Mathematical Theory of Optimal Processes, New York: Wiley, 1962.

[211] R. E. Kalman, and J.E. Bertram, "Control System Analysis and Design via the 'Second Method' of Lyapunov. I. Continuous-time Systems," Trans. ASME J. Basic Eng., pp. 371-393, June 1960.

[212] K. S. Narendra and R. M. Goldwyn, "A Geometrical Criterion for the Stability of Certain Nonlinear Nonautonomous Systems," IEEE Trans. Circuit Theory, vol. CT-11, no. 3, pp. 406-407, 1964.

[213] C. A. Desoer, "A Generalization of the Popov Criterion," IEEE Transactions on Automatic Control, vol. AC-10, no. 2, pp. 182-185, 1965.

[214] J. R. Ragazzini and G.F. Franklin, Sampled-Data Control Systems, New York: McGraw-Hill, 1958.

[215] J. R. Ragazzini and L.A. Zadeh, "The Analysis of Sampled-Data Systems," Trans. AIEE, vol. 71, part II, pp. 225-234, 1952.

[216] E. I. Jury, "Recent Advances in the Field of Sampled-Data and Digital Control Systems," Proc. Conf. Int. Federation Automat. Control, pp. 240-246, Moscow, 1960.

[217] Benjamin C. Kuo, Analysis and Synthesis of Sampled-Data Control Systems, New Jersey: Prentice-Hall, 1963.

[218] K. J. Åström, Introduction to Stochastic Control Theory, New York: Academic Press, 1970.

[219] K. J. Åström and B. Wittenmark, Computer-Controlled Systems: Theory and Design, New Jersey: Prentice-Hall, 1984.

Back to Top
