In multivariable calculus, there's a trick for solving constrained optimization problems called Lagrange multipliers. You can use it to find the maximum of one function given another function as a constraint, e.g. maximizing non-linear utility given a non-linear resource constraint.
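As a concrete sketch (with hypothetical numbers): maximize the utility u(x, y) = sqrt(x·y) subject to the budget 2x + y = 12. The Lagrange conditions ∇u = λ∇g give y = 2x, hence x = 3, y = 6, which a brute-force search along the budget line confirms:

```python
# Hypothetical example: maximize u(x, y) = sqrt(x * y)
# subject to the budget constraint 2x + y = 12.
#
# Lagrange conditions: grad u = lam * grad g, i.e.
#   du/dx = 2*lam  and  du/dy = lam.
# Dividing the two gives y = 2x; substituting into the
# budget yields x = 3, y = 6.

def u(x, y):
    return (x * y) ** 0.5

def best_on_budget(budget=12.0, px=2.0, steps=100_000):
    """Brute-force check: search along the budget line y = budget - px*x."""
    best_x, best_u = 0.0, float("-inf")
    for i in range(1, steps):
        x = (budget / px) * i / steps   # x ranges over (0, budget/px)
        y = budget - px * x             # stay exactly on the constraint
        if u(x, y) > best_u:
            best_x, best_u = x, u(x, y)
    return best_x

x_star = best_on_budget()
print(round(x_star, 2))  # ≈ 3.0, matching the Lagrange solution
```

Note that the search only makes sense *because* of the constraint: without the budget line, u(x, y) grows without bound and there is nothing to find.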

On a more philosophical level, Lagrange multipliers show the relationship between constraints and optimization. Often, you can't have one without the other. Without constraints, many functions can't be "optimized" – they lack a global (or local) maximum (or minimum). Given enough constraints, any function can be optimized (in the limit, constrain the domain to a single point and the "optimum" is trivial).

Optimization is often seen as the highest good. Programs that run more efficiently. Processes that run faster. But optimization is a trade-off, and optimization is rigid: a system tuned for one objective under one set of constraints loses flexibility when those constraints change. Especially early on, optimization should be an anti-goal. Instead, solve for optionality and eschew constraints.

Three posts related to the optionality/optimization trade-off: