Robotics Core

Linear Quadratic Regulator (LQR)

Unlock the gold standard of optimal control for autonomous navigation. LQR is the mathematical workhorse that lets AGVs nail precise paths while sipping energy, for buttery-smooth runs in messy warehouses.


Core Concepts

State-Space Model

LQR leans on a linear model of the robot's physics (x_dot = Ax + Bu). This model predicts how the AGV will respond to motor commands.
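As a rough sketch, here is what such a model might look like for an AGV tracking a straight path at constant forward speed. The two-state lateral-error model and the numbers in it are assumptions for illustration, not any specific robot's dynamics.

```python
import numpy as np

# Toy state-space sketch: AGV lateral dynamics while tracking a straight path
# at constant forward speed v (an assumed value).
# State x = [lateral error (m), heading error (rad)], input u = [yaw rate (rad/s)].
v = 1.0  # assumed forward speed in m/s

# x_dot = A x + B u
A = np.array([[0.0, v],     # lateral error grows with heading error times speed
              [0.0, 0.0]])  # heading error is driven directly by the yaw-rate input
B = np.array([[0.0],
              [1.0]])
```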

The Cost Function

It's a quadratic function that racks up a 'penalty' score, balancing how much you care about hugging the path versus the energy hit from cranking the actuators.
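In symbols, the standard continuous-time cost that LQR racks up and minimizes is:

```latex
J = \int_{0}^{\infty} \left( x^{\top} Q \, x + u^{\top} R \, u \right) \, dt
```

The x-term is the path-hugging penalty and the u-term is the actuator-effort penalty.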

Q and R Matrices

LQR's tuning dials. The 'Q' matrix slaps penalties on state errors (like drifting off-path), while 'R' hits hard on control effort (burning too much battery or voltage).
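For the toy two-state model sketched earlier, those dials might be set like this; the numbers are illustrative assumptions, not recommended values.

```python
import numpy as np

# Hypothetical weights for the 2-state sketch above: punish lateral drift hard,
# heading error moderately, and steering effort lightly.
Q = np.diag([10.0, 1.0])   # state-error penalties (off-path, wrong heading)
R = np.array([[0.1]])      # control-effort penalty (yaw-rate command)
```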

Optimal Gain (K)

The secret sauce. LQR crunches out a Gain Matrix 'K' that, multiplied by the error, delivers the mathematically optimal control signal that minimizes the cost.
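A minimal sketch of that offline computation, assuming SciPy is available and reusing the toy model and weights from the sketches above:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy model and weights from the sketches above (assumed values).
v = 1.0
A = np.array([[0.0, v], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

# Solve the Algebraic Riccati Equation offline, then form the optimal gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # K = R^-1 B^T P

# At runtime the control law is simply u = -K @ (x - x_ref).
```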

Full State Feedback

LQR assumes you've got the full scoop on the robot's state right now—position, velocity, angle, angular rate—every single timestep.

Riccati Equation

The Algebraic Riccati Equation (ARE) is that beast of a calc you solve offline to nail the optimal K matrix and lock in stability.
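For the record, the continuous-time ARE and the gain it yields are:

```latex
A^{\top} P + P A - P B R^{-1} B^{\top} P + Q = 0, \qquad K = R^{-1} B^{\top} P
```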

How It Works

LQR runs on a feedback loop that sees the big picture (MIMO—Multiple Input, Multiple Output), unlike basic PID which tunes errors one by one. It kicks off by grabbing the AGV's current state from sensors like LiDAR and encoders.

It pits that state against the goal (your path), then computes controls—like wheel speeds—by multiplying the 'error' by the pre-baked Gain Matrix (K).

LQR's magic is in the balancing act. Crank up Q to obsess over accuracy; dial up R for energy savings or smoother moves. You end up with a trajectory that's mathematically dialed in to minimize 'cost' over time.
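Here is a minimal sketch of that runtime loop. The gain value, loop rate, and the get_state(), get_reference(), and send_wheel_commands() calls are hypothetical placeholders for whatever your sensing and drive stack actually provides.

```python
import numpy as np

# Runtime LQR loop sketch. K comes from the offline Riccati solve; the value
# below is only a placeholder for illustration.
K = np.array([[10.0, 5.5]])
DT = 0.02  # assumed 50 Hz control loop

def lqr_step(x, x_ref):
    """One LQR update: multiply the state error by the precomputed gain."""
    error = x - x_ref
    return -K @ error   # control signal, e.g. a yaw-rate command

# while True:
#     x = get_state()                       # e.g. [lateral error, heading error]
#     u = lqr_step(x, get_reference())
#     send_wheel_commands(u)
#     sleep(DT)
```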


Real-World Applications

High-Speed Path Tracking

Powers AMR sorters in logistics hubs. LQR lets them fly at high speeds, stick to curvy paths without overshooting, and max out throughput.

Forklift Load Stabilization

A must for automated forklifts. LQR factors in heavy lifted loads' dynamics, tweaking acceleration to dodge tipping or wild swaying on the move.

Self-Balancing Robots

Two-wheeled delivery bots lean hard on LQR. It constantly tweaks wheel torque to keep the center of gravity steady while rolling, reacting way faster than any human could.

Drone Formation Flight

In aerial warehousing or stock scanning, LQR keeps multi-rotor drones hovering steady and spaced perfectly in formation—even fighting wind gusts.

Frequently Asked Questions

What's LQR's killer edge over a basic PID controller?

PID nails Single-Input Single-Output (SISO) setups, but LQR dominates Multi-Input Multi-Output (MIMO). It gets how states interplay (like turns messing with speed) and optimizes everything together—PID needs separate tweaks per loop.

How do you pick Q and R matrix values?

Usually via Bryson's Rule or trial-and-error tuning. Pump up Q's diagonal for snappier error fixes (tighter tracking); hike R for gentler controls that save energy and spare your actuators.
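A quick sketch of Bryson's Rule, where each weight is one over the square of the largest value you're willing to tolerate; the limits below are hypothetical numbers for an AGV.

```python
import numpy as np

# Bryson's Rule sketch: weight = 1 / (maximum acceptable value)^2.
# The limits are assumed, not measured from any particular robot.
MAX_LATERAL_ERROR = 0.05   # metres
MAX_HEADING_ERROR = 0.10   # radians
MAX_YAW_RATE      = 1.0    # rad/s

Q = np.diag([1.0 / MAX_LATERAL_ERROR**2,
             1.0 / MAX_HEADING_ERROR**2])
R = np.diag([1.0 / MAX_YAW_RATE**2])
```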

Does LQR work for non-linear bots like differential-drive AGVs?

LQR is purely for linear systems. But for non-linear AGVs, we linearize around an operating point (often the target path), and it shines as long as you don't stray too far from that point.
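One way to get those linear A and B matrices is to take Jacobians of a kinematic model around the operating point. The unicycle-style model and the straight-line operating point below are assumptions for illustration.

```python
import numpy as np

# Kinematic unicycle model: state x = [px, py, theta], input u = [v, omega].
def f(x, u):
    px, py, theta = x
    v, omega = u
    return np.array([v * np.cos(theta), v * np.sin(theta), omega])

def jacobians(x0, u0, eps=1e-6):
    """Numerical Jacobians A = df/dx and B = df/du at the operating point."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Linearize about driving straight along the path at 1 m/s (assumed operating point).
A, B = jacobians(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0]))
```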

What's the compute hit for LQR on embedded gear?

Runtime is dirt cheap. The big Riccati solve happens offline to spit out K; live, it's just quick matrix math that any microcontroller can handle.

Can LQR handle physical constraints like maximum speed?

Not out of the box. Standard LQR assumes unlimited actuator juice. Demand too much voltage and you hit saturation, and performance degrades. For tight limits, Model Predictive Control (MPC) is better, though pricier to compute.
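A common pragmatic workaround is to clamp the LQR output to the actuator limits after computing it; this keeps the hardware safe but gives up optimality while saturated. The limit below is an assumed value.

```python
import numpy as np

U_MAX = 1.0  # assumed yaw-rate limit in rad/s

def saturated_lqr(K, x, x_ref):
    """LQR command clamped to actuator limits (loses optimality when clipping)."""
    u = -K @ (x - x_ref)
    return np.clip(u, -U_MAX, U_MAX)
```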

What happens if my system model is inaccurate?

LQR's model-based, so if your math model drifts far from the real robot, you get lousy performance or instability. Toss in integral action (LQI) to fix steady-state errors from those mismatches.
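A sketch of that LQI augmentation, reusing the toy model from earlier and adding an integral-of-lateral-error state; the extra weight is an assumed value.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy 2-state model from the earlier sketches (assumed values).
v = 1.0
A = np.array([[0.0, v], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Augment the state with the integral of the lateral error: z = [x, integral of e_lat].
A_aug = np.block([[A, np.zeros((2, 1))],
                  [np.array([[1.0, 0.0]]), np.zeros((1, 1))]])
B_aug = np.vstack([B, np.zeros((1, 1))])

Q_aug = np.diag([10.0, 1.0, 2.0])   # last entry penalizes accumulated error
R = np.array([[0.1]])

P = solve_continuous_are(A_aug, B_aug, Q_aug, R)
K_aug = np.linalg.solve(R, B_aug.T @ P)   # feedback on [state, integral] together
```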

What is LQG (Linear Quadratic Gaussian)?

LQG teams up LQR with a Kalman Filter. LQR assumes perfect knowledge of every state, but real sensors are noisy or miss some data—so the Kalman Filter estimates the true state and feeds it straight to the LQR controller.
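A compact sketch of that pairing, reusing the toy model and assuming only the lateral error is measured; the noise covariances are made-up values.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy model (assumed): measure only the lateral error through C.
v = 1.0
A = np.array([[0.0, v], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Q = np.diag([10.0, 1.0]); R = np.array([[0.1]])     # LQR weights (assumed)
W = np.diag([1e-3, 1e-3]); V = np.array([[1e-2]])   # process / measurement noise (assumed)

K = np.linalg.solve(R, B.T @ solve_continuous_are(A, B, Q, R))      # LQR gain
L = solve_continuous_are(A.T, C.T, W, V) @ C.T @ np.linalg.inv(V)   # steady-state Kalman gain

# Estimator: x_hat_dot = A x_hat + B u + L (y - C x_hat); controller: u = -K x_hat
```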

Why is it called "Quadratic"?

This is all about the cost function. Errors get squared (that's the quadratic part), which hits big mistakes way harder than tiny ones. It makes the controller super aggressive at wiping out those dangerous deviations.

Is LQR suitable for dynamic obstacle avoidance?

Usually not. LQR is built for tracking paths, not planning them. Obstacle avoidance typically happens higher up, in a local planner that creates a fresh trajectory for LQR to chase.

How does LQR contribute to battery life in AGVs?

By tweaking the R matrix (that's the control effort penalty), engineers can make the robot ease into accelerations and decelerations. It cuts peak motor current, stretching shift times and battery life.

What sensors are required to implement LQR?

Ideally, grab sensors that deliver the full state vector. For mobile robots, think wheel encoders for velocity, IMUs for orientation and acceleration, plus LiDAR or cameras for spot-on positioning.

Can I update the LQR gain matrix in real-time?

Yep, that's gain scheduling. Pre-calculate different K matrices for varying speeds or loads offline, then swap them in real-time as conditions shift.
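A sketch of that scheduling idea, precomputing gains for a few assumed forward speeds and picking the nearest one at runtime:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

Q = np.diag([10.0, 1.0]); R = np.array([[0.1]])   # assumed weights
B = np.array([[0.0], [1.0]])

# Offline: one gain per operating speed (assumed set of speeds, in m/s).
gain_table = {}
for v in (0.5, 1.0, 2.0):
    A = np.array([[0.0, v], [0.0, 0.0]])
    P = solve_continuous_are(A, B, Q, R)
    gain_table[v] = np.linalg.solve(R, B.T @ P)

def scheduled_gain(speed):
    """Online: swap in the precomputed K whose design speed is closest."""
    return gain_table[min(gain_table, key=lambda v: abs(v - speed))]
```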

Ready to bring Linear Quadratic Regulator (LQR) to your fleet?

Explore Our Robots