# Difference between fzero and fsolve for one variable

Is there a difference between using fzero and fsolve for a single variable equation?

## Answers

Yes, there is. I'll just mention the most straightforward difference between the two:

fsolve can be used to solve for the zero of a single variable equation. However, fzero will find the zero *if and only if* the function crosses the x-axis.

Here's a simple example: consider the function f(x) = x^2, which is non-negative for all real values of x and has a root at x = 0. We'll define it as an anonymous function, f=@(x)x.^2;, and try to find the root using both methods.

**Using fsolve**

```
options=optimset('MaxIter',1e3,'TolFun',1e-10);
fsolve(f,0.1,options)

Equation solved.

fsolve completed because the vector of function values is near zero
as measured by the selected value of the function tolerance, and
the problem appears regular as measured by the gradient.

<stopping criteria details>

ans =
   1.9532e-04
```

Not zero, but close.

**Using fzero**

```
fzero(f,0.1)

Exiting fzero: aborting search for an interval containing a sign change
    because NaN or Inf function value encountered during search.
(Function value at -1.37296e+154 is Inf.)
Check function or try again with a different starting value.

ans =
   NaN
```

It *cannot* find a zero.

Now consider another example, f=@(x)x.^3;, which crosses the x-axis and has a root at x=0.

```
fsolve(f,0.1)

ans =
    0.0444

fzero(f,0.1)

ans =
   -1.2612e-16
```

fsolve doesn't return exactly 0 in this case either. Even with the options I defined above, fsolve only gets me to 0.0017. However, fzero's answer is correct to within machine precision! The difference in answers is not because of inefficient algorithms; it's because their **objectives are different**.

fzero has a clear goal: find the zero! Simple. No ambiguities there. If the function crosses the x-axis, there *is* a zero and fzero will find it (real roots only). If it doesn't cross, it whines.

However, fsolve's scope is broader: it is designed to solve a system of nonlinear equations. Often you can't find an exact solution to such a system and have to set a tolerance within which you're willing to accept a solution as the answer. As a result, there is a host of options and tolerances that need to be set manually to massage out the exact root. Sure, you have finer control, but for finding a zero of a single-variable equation I consider it a pain, and I'd probably use fzero instead (assuming the function crosses the x-axis).

Apart from this major difference, there are differences in implementations and the algorithms used. For that, I'll refer you to the online documentation on the functions (see the links above).

While I like the answer given by yoda, I'll just add a few points. Yes, there is a difference between the two functions: they have different underlying algorithms, so to understand the difference you need to understand the algorithms! This is true of any comparison between different MATLAB tools that should both yield a solution.

You can think of fzero as a sophisticated version of bisection. Given two end points of an interval that brackets a root of f(x), a bisection algorithm evaluates f at the midpoint of that interval and keeps whichever half still contains a sign change. Each step is therefore guaranteed to produce a new, half-length interval that MUST contain a root of f(x). Repeat this operation, and you will converge to a solution with certainty, as long as your function is continuous.
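The bisection idea can be sketched in a few lines of Python. This is a toy illustration only, not MATLAB's actual fzero (which combines bisection with faster secant and inverse-quadratic steps); the function name and tolerance below are my own choices:

```python
def bisect(f, a, b, tol=1e-12):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        # No sign change: like fzero, we refuse rather than guess.
        raise ValueError("f(a) and f(b) must bracket a root (opposite signs)")
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:      # the sign change is in the left half [a, m]
            b, fb = m, fm
        else:                 # the sign change is in the right half [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)
```

Note how this sketch reproduces the behavior discussed above: for x^3 it homes in on 0, while for x^2 (which never changes sign) it can only raise an error.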

If fzero is given only one starting point, then it tries to find a pair of points that bracket a root. Once it does, then it follows a scheme like that above.
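That initial search for a bracket might look something like the following sketch. Again, this is a hypothetical illustration rather than fzero's real search; the initial step size, growth factor, and iteration cap are made up:

```python
def find_bracket(f, x0, step=1e-2, grow=1.4, max_tries=60):
    """Expand an interval around x0 until f changes sign across it."""
    a = b = x0
    fa = fb = f(x0)
    for _ in range(max_tries):
        a, b = a - step, b + step      # widen the interval on both sides
        fa, fb = f(a), f(b)
        if fa * fb <= 0:               # opposite signs: we have a bracket
            return a, b
        step *= grow                   # widen faster each time
    raise ValueError("no sign change found; the function may not cross zero")
```

For f(x) = x^2 this search never succeeds, which is exactly why fzero gives up on that function.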

As you can see, such a scheme will work very nicely in one dimension. However, it cannot be made to work as well in more than one dimension. So fsolve cannot use a similar scheme. You can think of fsolve as a variation of Newton's method. From the starting point, compute the slope of your function at that location. If you approximate your function with a straight line, where would that line cross zero?

So essentially, fsolve uses a scheme where you approximate your function locally, use that approximation to extrapolate to a new location where the solution is hoped to lie, and repeat until convergence. This scheme extends to higher-dimensional problems far more easily than fzero's bracketing approach.
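The slope-and-extrapolate scheme described above can be sketched as a basic Newton iteration with a finite-difference slope. This is an illustration of the idea only, not fsolve's actual algorithm (which uses trust-region methods and many tunable tolerances); the step size and tolerances here are arbitrary choices:

```python
def newton(f, x0, h=1e-7, tol=1e-10, max_iter=100):
    """Newton-style iteration: follow the local tangent line toward zero."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:                 # accept once the residual is "small enough"
            return x
        slope = (f(x + h) - fx) / h       # local linear approximation of f
        x = x - fx / slope                # where that tangent line crosses zero
    return x                              # best estimate after max_iter steps
```

Notice that, unlike bisection, this iteration is perfectly happy on f(x) = x^2: the tangent lines keep pointing toward x = 0, it just converges slowly and stops at "near zero" rather than zero, which mirrors the fsolve results above.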

You should, nay, must see the difference. But also, you need to understand when one scheme will succeed and the other fail. In fact, the algorithms used are more sophisticated than what I have described, but those descriptions encapsulate the basic ideas.