A PID controller is a complex but powerful operand. It is used to reduce "error" values in control systems, driving the measured variable towards a specified setpoint.
|>feedback||The current value||decimal|
|>setpoint||The desired value||decimal|
|>kp||The proportional control part||decimal|
|>ki||The integral control part||decimal|
|>kd||The derivative control part||decimal|
|>reset||Resets the integral||decimal|
|<control||The calculated adjustment||decimal|
(Contains excerpts from the Wikipedia article)
The controller calculates an error value as the difference between a measured process variable (>feedback) and a desired value (>setpoint) and then attempts to minimize this error by adjusting the inputs to the process (<control).
This calculation involves three separate parameters, which can be interpreted in terms of time:
- P - proportional - the present error
- I - integral - the accumulation of past errors
- D - derivative - a prediction of future errors, based on current rate of change
The weighted sum of these three actions is used to adjust the process via a control element such as the lift of a hover on a hovering bot.
Here is a simple software loop that implements a PID algorithm:
previous_error = 0
integral = 0
start:
    error = setpoint - feedback                 'calculate the error
    integral = integral + error*dt              'grow/shrink the integral based on the error and elapsed time
    derivative = (error - previous_error)/dt    'deduce the rate of change
    control = Kp*error + Ki*integral + Kd*derivative
    previous_error = error
    wait(dt)
    goto start
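The same loop can be sketched in runnable Python. The last line of the loop body, where the control output directly nudges the feedback, is a toy stand-in for the real process (an assumption for illustration; in Rawbots the feedback comes from the bot itself), and the loop runs a fixed number of steps instead of forever:

```python
def run_pid(setpoint, feedback, kp, ki, kd, dt=0.1, steps=200):
    previous_error = 0.0
    integral = 0.0
    for _ in range(steps):
        error = setpoint - feedback                 # calculate the error
        integral += error * dt                      # accumulate past error
        derivative = (error - previous_error) / dt  # rate of change
        control = kp * error + ki * integral + kd * derivative
        previous_error = error
        feedback += control * dt                    # toy process (assumed)
    return feedback

final = run_pid(setpoint=10.0, feedback=0.0, kp=1.0, ki=0.5, kd=0.1)
```

With these example gains the feedback settles close to the setpoint of 10 within the simulated 20 seconds.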
A common use for controllers in Rawbots would be to control the altitude of hovers. In this case, there would be two altitude values: one representing the current altitude of a hover (measured by an altimeter) and another representing its desired altitude (which could be changed with a combination of an input sampler and sample mapper). The current altitude would be the >feedback, the desired altitude the >setpoint, and the hover's >lift the target for the <control.
In theory, a controller can be used to control any process which has a measurable process value (>feedback), a known ideal value for it (>setpoint) and an input to the process (<control) that will affect it. Controllers are used in industry to regulate temperature, pressure, flow rate, chemical composition, speed and practically every other variable for which a measurement exists. In Rawbots, the most common use is to balance hovering bots with the aforementioned altitude control system.
After calculating the error (the difference between the setpoint altitude and the feedback altitude), each controller decides how much to change its output (the lift of the hover) in up to three different ways.
When the feedback value falls below the setpoint value, the controller can increase the output in response to the error between the setpoint and the feedback. This is proportional control, because the response is in proportion to the error at the current moment in time.
If there continues to be an error for some length of time, the controller can gradually increase the output, attempting to reduce the error to zero. This is called integral control, because the output corresponds to the integral of the error (the sum of the error over time). The longer the error exists and the greater it is, the more this will increase the output.
If a sudden change occurs to the system, the controller can predict that this change will cause future errors and try to react quickly to prevent the accumulation of these errors. This is called derivative control, because it reacts to the derivative of the error (the rate of change). The more sudden the change in the error, the more this will increase the output.
Making a change that is too large when the error is small will lead to overshoot. If the controller were to repeatedly make changes that were too large and repeatedly overshoot the target, the output would oscillate around the setpoint in either a constant, growing, or decaying waveform. If the oscillations increase with time then the system is unstable, whereas if they decrease the system is stable. If the oscillations remain at a constant magnitude the system is marginally stable. By adjusting the proportional, integral, and derivative terms, you can affect how quickly the controller responds, how much it oscillates, and how stable it is, though there is always a tradeoff between sensitivity, speed, and stability.
The proportional term produces an output value that is proportional to the current error value. The proportional response can be adjusted by multiplying the error by a constant Kp, called the proportional gain constant. A high Kp results in a large change in the control output for a given change in the error.
If the proportional gain is too high, the system can become unstable. In contrast, a small gain results in a small output response to a large input error, and a less responsive or less sensitive controller. If the proportional gain is too low, the control action may be too small when responding to system disturbances. Tuning theory and industrial practice indicate that the proportional term should contribute the bulk of the output change.
Because a non-zero error is required to drive it, a pure proportional controller generally operates with a steady-state error, referred to as droop. Droop is proportional to the process gain and inversely proportional to proportional gain. Droop may be mitigated by adding a compensating bias term to the setpoint or output, or corrected dynamically by adding an integral term.
For the hovering bot example, a common solution is to add a "basic lift value" to the hover's lift input in addition to the control value of the altitude control system. This value should be just enough to keep the bot hovering without rising or falling. In a running system, a basic lift value that is too high can be identified by watching the <control value while setpoint = feedback: if the control value is negative, the basic lift is too high; if it is positive, it may be too low.
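Droop and the basic-lift fix can be sketched with an assumed toy hover model in which gravity pulls the altitude down at a constant rate (the numbers here are illustrative, not values from the game):

```python
def hover(kp, basic_lift, g=9.8, steps=500, dt=0.1):
    """Toy hover model (assumed): lift raises altitude, gravity lowers it."""
    altitude, setpoint = 0.0, 10.0
    for _ in range(steps):
        error = setpoint - altitude
        lift = basic_lift + kp * error   # basic lift plus P control
        altitude += (lift - g) * dt      # net climb this tick
    return setpoint - altitude           # remaining error (droop)

droop = hover(kp=2.0, basic_lift=0.0)    # pure P: droop of g/Kp = 4.9
no_droop = hover(kp=2.0, basic_lift=9.8) # basic lift cancels gravity
```

With pure proportional control the bot settles g/Kp below the setpoint; a basic lift equal to the assumed gravity removes that steady-state error, just as an integral term would.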
The contribution from the integral term is proportional to both the magnitude of the error and the duration of the error. The integral in a PID controller is the sum of the instantaneous error over time and gives the accumulated offset that should have been corrected previously. The accumulated error is then multiplied by the integral gain Ki and added to the controller output. The integral term accelerates the movement of the process towards setpoint and eliminates the residual steady-state error that occurs with a pure proportional controller.
However, since the integral term responds to accumulated errors from the past, it can cause the present value to overshoot the setpoint.
Take care when setting Ki! Some time may have passed while the error was non-zero. Even if Ki is set to zero, the integral of the error will accumulate as long as there is a non-zero error value. It may therefore have grown very large before you use it, causing an unexpectedly large control output the first time you set Ki. This is called "windup". To reset this accumulated error, set the >reset value to 1 before changing Ki, then change it back to 0. You may need to reset this error any time there has been an uncontrolled error for a long period of time, such as when a hovering bot is sitting on the ground or against a wall.
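The size of the kick can be sketched numerically; the numbers here (a constant error of 5 while grounded, Kp = 1) are assumptions for illustration:

```python
def first_control(ki, reset, idle_steps=100, dt=0.1, kp=1.0):
    """Simulate the integral accumulating while the bot is stuck
    (constant error), then Ki being switched on."""
    integral = 0.0
    error = 5.0                    # error while sitting on the ground
    for _ in range(idle_steps):
        integral += error * dt     # windup: grows even while Ki is 0
    if reset:
        integral = 0.0             # pulsing >reset clears the sum
    return kp * error + ki * integral

windup = first_control(ki=0.5, reset=False)  # 5 + 0.5*50 = 30: a big kick
clean = first_control(ki=0.5, reset=True)    # just the P term: 5
```

After only ten simulated seconds on the ground, the wound-up controller commands six times the output of the reset one.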
The derivative of the process error is calculated by determining the rate of change of the error over the last tick of time. The magnitude of the contribution of the derivative term to the overall control action is called the derivative gain Kd.
Derivative action predicts system behavior and thus improves settling time and stability of the system. It can accelerate the response of the system as well as dampen sudden changes to the system.
Used within the altitude control system, the Kd will greatly smooth the control outputs. It is able to dampen oscillations and speed up the response to sudden changes in the feedback altitude (for example, when colliding with walls).
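The damping effect can be sketched on an assumed toy plant where lift directly sets vertical acceleration: with Kd = 0 the altitude overshoots to roughly twice the setpoint and keeps oscillating, while adding Kd tames the overshoot:

```python
def max_altitude(kp, kd, steps=400, dt=0.05):
    """PD control of a toy plant (assumed) where control = acceleration."""
    x, v, setpoint = 0.0, 0.0, 10.0
    prev_error = setpoint - x
    peak = x
    for _ in range(steps):
        error = setpoint - x
        derivative = (error - prev_error) / dt   # rate of change of error
        control = kp * error + kd * derivative
        prev_error = error
        v += control * dt                        # lift sets acceleration
        x += v * dt
        peak = max(peak, x)
    return peak                                  # highest altitude reached

p_only = max_altitude(kp=1.0, kd=0.0)   # overshoots to roughly 2x setpoint
with_kd = max_altitude(kp=1.0, kd=1.0)  # derivative damps the overshoot
```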
There is no bulletproof way to tune a controller properly, because it involves tradeoffs between several goals, but there are many tuning methods that may lead to an acceptable solution.
If tuning manually, there are some simple rules that can help obtain reasonable values:
- Start with Kp, Ki, and Kd all at 0.
- Increase Kp slowly while disturbing the system. As you increase Kp, you should notice the system respond faster to disturbances. Eventually you will reach a point where the system overreacts and becomes unstable. Bring Kp down to about half of this value. At this point, you should find the system reacts quickly and is stable, but it never quite returns to its setpoint (there is a constant error).
- Slowly increase Ki. Disturb the system and notice that it slowly reduces the error, eventually reaching the setpoint. As you increase Ki, you'll find that it reacts faster, but it will also overshoot the setpoint more and take longer to settle down. If you increase Ki too much, it will overshoot further and further, oscillating out of control. Reduce Ki until you find an acceptable balance among reaction speed, overshoot, and settling time. Reducing Kp a bit may help reduce the overshoot and settling time.
- Finally, slowly start to increase Kd. You will want to use much smaller values than you did for Ki. You should find that it reduces the amount of overshoot and the time it takes to settle, but if you increase it too much it will quickly become unstable, especially with very sudden movements. If you are having trouble with Kd, you can keep it at 0 and still have a useful controller.
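A small harness like the following can stand in for "disturb the system and watch" when experimenting with gains; the plant model and all the numbers are assumptions for illustration, not values from the game:

```python
def score(kp, ki, kd, steps=2000, dt=0.05):
    """Run one gain set on a toy damped plant (assumed model) and
    report how far it overshoots and how much error remains."""
    x, v, setpoint = 0.0, 0.0, 10.0
    integral, prev_error, peak = 0.0, setpoint, 0.0
    for _ in range(steps):
        error = setpoint - x
        integral += error * dt
        derivative = (error - prev_error) / dt
        control = kp * error + ki * integral + kd * derivative
        prev_error = error
        v += (control - v) * dt      # damped toy plant
        x += v * dt
        peak = max(peak, x)
    return peak - setpoint, abs(setpoint - x)

overshoot, residual = score(kp=4.0, ki=0.5, kd=1.0)
```

Comparing overshoot and residual error across gain sets mirrors the manual procedure above: raise Kp until the response is fast, add Ki until the residual vanishes, then add a little Kd to shave the overshoot.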