What's the time complexity for inverse function?

Here's what I'm looking at:

http://en.wikipedia.org/wiki/Big_O_notation

http://en.wikipedia.org/wiki/Computational_complex...

Let's say an iterative function is X(n)= [5+X(n-1)] / [X(n-1)^2]

If X(n-1) gets really big, it's essentially the same as 1/X(n-1).

So its complexity is O(1/n)? That makes no sense.

How complex is the function?

I figured the actual cost would be less than that of the division operation, M(n) (see the second link), since only one variable is input. What is M(n), anyway? Is it a special kind of Big O notation?

2 Answers

• 1 decade ago
Favorite Answer

You are right. O(1/n) makes no sense here.

The big-O notation, O(F(x)), denotes any function G(x) which, for large "x", is bounded by constant multiples of F(x): aF(x)<G(x)<bF(x) for some constants b>a>0 and all sufficiently large "x". Any constant is O(1), any linear function is O(x), any quadratic is O(x²), the reciprocal of a linear function is O(1/x), etc.

The "Complexity" of a calculation counts the number of basic operations to complete the calculation. The basic operations are often 1 digit (or 1 bit) additions and multiplications, which each require just looking up the answer in a table (of fixed size), taking the same amount of time for any pair of digits and either operation. In this case, the complexity count is a function of the number of significant digits required in the answer - usually the number of significant digits in the input. So complexity O(1/n) means it takes between a/n and b/n steps (for some constants b>a>0) to calculate "n" significant digits - the more digits you ask for, the faster you get the answer!

The "n" in X(n) is the number of times you have iterated the function and has no relation to the number of significant digits in the numbers you are calculating. (In complexity theory, one might consider the number of iterations needed for the answer to converge to D significant digits, n=O(D). But your sequence X(n) never converges - except at the fixed point, which requires infinitely many digits.) If "D" is the number of significant digits in X(n-1), the complexity of your calculation is the complexity of an addition, a multiplication, and a division on D-digit numbers: O(D²) using the standard elementary-school algorithms. (But there are better algorithms.)
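To see that the sequence itself never settles, here is a quick numerical sketch in Python (the starting value 2.0 and the iteration count are arbitrary choices, purely for illustration):

```python
def next_x(x):
    # The questioner's recurrence: X(n) = (5 + X(n-1)) / X(n-1)^2
    return (5.0 + x) / (x * x)

xs = [2.0]           # arbitrary starting value, just for illustration
for _ in range(12):  # a few steps; the values grow too fast for many more
    xs.append(next_x(xs[-1]))

# The values oscillate between very large and very small
# instead of converging to the fixed point near 1.9.
```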

Adding two "n" digit numbers requires a minimum of "n" one-digit adds. Including carries (at most one extra add per position except the first), there is a maximum of (2n-1) one-digit adds. So adding two "n" digit numbers requires at least n operations but fewer than 2n operations: the complexity of addition is O(n). Subtraction is just adding a negative, and negation is O(n): O(n)+O(n)=O(n).
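As a concrete illustration of that operation count, here is a Python sketch (the helper name `add_digits` and the digit-list representation, least-significant digit first, are choices made for this example only):

```python
def add_digits(a, b):
    """Add two equal-length digit lists (least-significant digit first),
    tallying the one-digit additions actually performed."""
    ops = 0
    carry = 0
    result = []
    for da, db in zip(a, b):
        s = da + db
        ops += 1            # the mandatory one-digit add
        if carry:
            s += carry
            ops += 1        # the extra add caused by a carry
        carry, digit = divmod(s, 10)
        result.append(digit)
    if carry:
        result.append(carry)
    return result, ops

# 999 + 1 (digits least-significant first) uses 5 = 2*3 - 1 one-digit adds,
# the worst case for n = 3.
```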

Multiplying a 1-digit number by an "n" digit number requires n multiplies, (n-1) adds, and possibly another (n-1) adds from carries: at most 3n-2 operations, O(n). Multiplying two "n" digit numbers requires multiplying each digit of one number by the whole other number, n×(3n-2) operations, followed by n-1 adds of 2n-digit numbers, fewer than (n-1)×(2×2n). Altogether, fewer than 7n² operations. So the complexity is O(n²).
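The elementary-school algorithm itself can be sketched the same way, counting the one-digit multiplies (the name `schoolbook_multiply` and the digit-list format are again just choices for this example):

```python
def schoolbook_multiply(a, b):
    """Multiply two digit lists (least-significant digit first) with the
    elementary-school algorithm, tallying the one-digit multiplies."""
    mults = 0
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            mults += 1                       # one-digit multiply
            total = result[i + j] + da * db + carry
            carry, result[i + j] = divmod(total, 10)
        result[i + len(b)] += carry
    return result, mults

# 123 x 456 takes exactly 3 x 3 = 9 one-digit multiplies: n² of them.
```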

Actually, that analysis uses the algorithm commonly taught in elementary school. There are better methods! (See your own reference.) So many other algorithms depend on multiplication that it is common to use M(n) for the big-O complexity of whichever multiplication method you are using. If you don't know any other methods of multiplication, M(n)=O(n²).
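One classic example of a better method (mentioned in the references the question links to) is Karatsuba multiplication, which is roughly O(n^1.585). A compact, unoptimized Python sketch, assuming base-10 splitting:

```python
def karatsuba(x, y):
    """Karatsuba multiplication: three recursive multiplies
    instead of the four that naive splitting would need."""
    if x < 10 or y < 10:
        return x * y                      # single-digit base case
    n = max(len(str(x)), len(str(y)))
    m = n // 2
    high_x, low_x = divmod(x, 10 ** m)    # split each number in half
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    # (low+high)(low+high) - z0 - z2 recovers the cross terms
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0
```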

Now to actually answer your title question: despite the appearance of this assertion on Wikipedia, it is not quite true in general.

Examine the special case of division: The elementary school algorithm is O(n²), the same as the elementary school multiplication algorithm. However, there are faster ways to multiply. You can use these multiplication algorithms to do fast division by applying Newton's method to the function f(x)=1/x-B:

With x[1] = a simple approximation of 1/B, let x[k] = x[k-1]•(2 - B•x[k-1]).

For "n" digits of accuracy, Newton's method requires k = O(Log₂(n)) = O(Log(n)) iterations.

So, ignoring the calculation of x[1], finding 1/B has complexity O(Log(n)×M(n)).

Dividing: A/B=A×(1/B) adds just one multiplication so the complexity is still O(Log(n)×M(n)).
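The whole scheme can be sketched in ordinary floating point. The starting guess x0 = 0.1 and the iteration count are illustrative assumptions (the guess must satisfy |1 - B·x0| < 1 for the iteration to converge):

```python
def newton_reciprocal(B, iterations=6, x0=0.1):
    """Approximate 1/B via Newton's method on f(x) = 1/x - B.
    Each step squares the relative error, so the number of
    correct digits roughly doubles per iteration."""
    x = x0
    for _ in range(iterations):
        x = x * (2.0 - B * x)  # the iteration x[k] = x[k-1]*(2 - B*x[k-1])
    return x

def divide(A, B):
    # A/B = A * (1/B): just one extra multiplication
    return A * newton_reciprocal(B)
```

Starting from x0 = 0.1 for B = 7, the relative error goes 0.3, 0.09, 0.0081, ... until it hits machine precision after about six steps.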

In general, finding the inverse of a function {with complexity O(f(n)) at least as great as division} using Newton's method, will be O(Log(n)×f(n)). However, often there are algorithms which are only O(f(n)).

Answer: the complexity of an inverse function is no worse than Log(n) times the complexity of the original function.

Source(s): Here is Newton's method for division, including the number of steps required: http://en.wikipedia.org/wiki/Division_%28digital%2... This paper: http://citeseerx.ist.psu.edu/viewdoc/download?doi=... presents a division algorithm requiring only 2 multiplications, but it requires a table of size √2^n. Including the calculation of the table, that makes O(2×M(n)+√2^n)=O(√2^n).
• Anonymous
5 years ago

Plotting the output of a transfer function shows you the response time, the overshoot, and the damping. All of these are important if you wish to come up with an optimum controller.
