If you were given the equation,
$3x = y$,
then given $x$, it is easy to find $y$: just plug it in. On the other hand, given $y$, to find $x$ we divide both sides by $3$ to get $x = \frac{y}{3}$, which is equally straightforward.
Suppose, however, you were given two equations:
$3x_1 + 2x_2 = y_1$
$2x_1 + 4x_2 = y_2$,
then given $x_1$ and $x_2$, it is still easy to find $y_1$ and $y_2$; however, given $y_1$ and $y_2$, it is now slightly more difficult to find $x_1$ and $x_2$ that satisfy both of these equations. Try it:
Evaluate the previous system of two linear equations in two unknowns to find $y_1$ and $y_2$ when $x_1 = 2$ and $x_2 = 3$.
Solve the previous system of two linear equations in two unknowns for $x_1$ and $x_2$ when $y_1 = -1$ and $y_2 = -6$.
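To check your answers numerically, here is a minimal sketch using NumPy (the choice of NumPy is ours; any linear-algebra package would do). The forward direction is a matrix-vector product, and the reverse direction is a call to a linear solver:

```python
import numpy as np

# Coefficient matrix of the system  3*x1 + 2*x2 = y1,  2*x1 + 4*x2 = y2.
A = np.array([[3.0, 2.0],
              [2.0, 4.0]])

# Forward direction: given x1 = 2 and x2 = 3, compute y1 and y2.
x = np.array([2.0, 3.0])
print("y =", A @ x)

# Reverse direction: given y1 = -1 and y2 = -6, solve for x1 and x2.
y = np.array([-1.0, -6.0])
print("x =", np.linalg.solve(A, y))
```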
There is also the possibility that there are infinitely many solutions, such as with the system of linear equations
$3x_1 - x_2 = 4$
$-6x_1 + 2x_2 = -8$.
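Numerically, this shows up as a singular coefficient matrix. A short sketch (again assuming NumPy) that detects the problem:

```python
import numpy as np

# Coefficient matrix of  3*x1 - x2 = 4  and  -6*x1 + 2*x2 = -8.
A = np.array([[ 3.0, -1.0],
              [-6.0,  2.0]])

print(np.linalg.det(A))          # 0.0: the matrix is singular
print(np.linalg.matrix_rank(A))  # 1: only one independent equation

# np.linalg.solve(A, [4.0, -8.0]) would raise LinAlgError here, because the
# system does not have a unique solution; it has infinitely many.
```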
If we go to three linear equations in three unknowns, again, it is just as easy to calculate $y_1$, $y_2$ and $y_3$ given $x_1$, $x_2$ and $x_3$; however, it becomes more difficult to go the other way around; for example,
$2x_1 - x_2 + x_3 = 4$
$-x_1 + 7x_2 + 2x_3 = 7$
$-2x_1 + 2x_2 + 8x_3 = -2$.
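By hand this now requires a round or two of elimination, but the same solver call handles it. A minimal sketch, assuming NumPy:

```python
import numpy as np

# Coefficient matrix and right-hand side of the 3-by-3 system above.
A = np.array([[ 2.0, -1.0, 1.0],
              [-1.0,  7.0, 2.0],
              [-2.0,  2.0, 8.0]])
b = np.array([4.0, 7.0, -2.0])

x = np.linalg.solve(A, b)   # Gaussian elimination (LU factorization) under the hood
print(x)                    # the unique solution (x1, x2, x3)
```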
Some systems of linear equations are easier to solve:
$x_2 = y_1$
$2x_3 = y_2$
$3x_4 = y_3$
$4x_5 = y_4$
$5x_6 = y_5$
$6x_7 = y_6$
$0 = y_7$.
In this case, the vector ${\bf x} = \begin{pmatrix}2.3 \\ 4.5 \\ -0.3 \\ 7.2 \\ 1.6 \\ 5.9 \\ -8.0\end{pmatrix}$ is mapped to the vector ${\bf y} = \begin{pmatrix}4.5 \\ -0.6 \\ 21.6 \\ 6.4 \\ 29.5 \\ -48.0 \\ 0.0 \end{pmatrix}$; on the other hand, if we are given ${\bf y} = \begin{pmatrix}-0.3 \\ 5.0 \\ 5.1 \\ 3.6 \\ 4.0 \\ 8.4 \\ 0.0 \end{pmatrix}$, the solution is ${\bf x} = \begin{pmatrix}c \\ -0.3 \\ 2.5 \\ 1.7 \\ 0.9 \\ 0.8 \\ 1.4 \end{pmatrix}$ where $c$ is any value.
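A sketch of this system in NumPy (our construction of the matrix; the last equation $0 = y_7$ corresponds to a row of zeros):

```python
import numpy as np

# Matrix of the system  y1 = x2,  y2 = 2*x3, ...,  y6 = 6*x7,  y7 = 0.
A = np.zeros((7, 7))
for i in range(6):
    A[i, i + 1] = i + 1

# Forward direction: map x to y.
x = np.array([2.3, 4.5, -0.3, 7.2, 1.6, 5.9, -8.0])
print(A @ x)

# Reverse direction: A is singular (x1 never appears), so np.linalg.solve fails;
# lstsq returns the minimum-norm particular solution (here, the one with x1 = 0),
# and adding any multiple of (1, 0, ..., 0) gives another solution.
y = np.array([-0.3, 5.0, 5.1, 3.6, 4.0, 8.4, 0.0])
x_particular, *_ = np.linalg.lstsq(A, y, rcond=None)
print(x_particular)
```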
Given the differential equation
$\frac{d}{dx}f(x) = g(x)$,
then given $f(x)$, it is easy to calculate $g(x)$; however, given $g(x)$, it can be more difficult to find $f(x)$. Not only can it be more difficult but, like the example above, there are always infinitely many solutions.
We will give one example: given
$\frac{d}{dx}f(x) = 2x$,
it should be straightforward to recognize that $f(x) = x^2$ is one solution, but then so are $f(x) = x^2 + 3$ and $f(x) = x^2 - 5$. In fact, $f(x) = x^2 + c$ is a solution for any real constant $c$.
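A computer algebra system will find one such anti-derivative for you. A minimal sketch using SymPy (our choice of tool; note that SymPy omits the arbitrary constant $c$):

```python
import sympy as sp

x = sp.symbols('x')

F = sp.integrate(2 * x, x)   # one anti-derivative of 2x; SymPy drops the constant c
print(F)                     # x**2
print(sp.diff(F + 3, x))     # 2*x: adding any constant gives another anti-derivative
```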
Just as we say that $g(x)$ is the derivative of $f(x)$, we say that a specific $f(x)$ is an anti-derivative of $g(x)$. Thus, $x^2$ is an anti-derivative of $2x$, but so is $x^2 + 2$.
Now, every continuous function (and, on each interval where it is continuous, every function with a finite number of discontinuities) has infinitely many anti-derivatives; however, there is always the question as to whether we can describe these anti-derivatives.
We will ask you to come up with one algorithm:
Describe an algorithm for finding the anti-derivative of $x^r$ where $r$ is any real number.
Input: a variable $x$ raised to a real power $r$.
Output: an expression in $x$.
Unlike differentiation, this will require a special case, namely the anti-derivative of $\frac{1}{x} = x^{-1}$.
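Here is a minimal sketch of one such algorithm in Python (the function name and the decision to return the answer as a string are our own choices): apply the power rule in reverse, with $r = -1$ handled separately.

```python
def antiderivative_of_power(r):
    """Return, as a string, the general anti-derivative of x**r.

    For r != -1, the power rule in reverse gives x**(r + 1)/(r + 1);
    the special case r == -1 gives ln|x| instead.  The '+ c' records
    the arbitrary constant of integration.
    """
    if r == -1:
        return "ln(|x|) + c"
    return f"x**({r + 1})/({r + 1}) + c"


print(antiderivative_of_power(3))    # x**(4)/(4) + c
print(antiderivative_of_power(-1))   # ln(|x|) + c
print(antiderivative_of_power(0.5))  # x**(1.5)/(1.5) + c
```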
Now, you know that the general anti-derivative of $e^x$ is $e^x + c$, and with a little thought, you may recognize that the general anti-derivative of $2x e^{x^2}$ is $e^{x^2} + c$, but can we write an anti-derivative of $e^{x^2}$ in terms of functions we know? The answer is "no".
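You can see this play out in a computer algebra system. A sketch using SymPy (the exact form of the second answer is SymPy's choice): the first integral comes back as an elementary function, while the second comes back in terms of a special, non-elementary function.

```python
import sympy as sp

x = sp.symbols('x')

print(sp.integrate(2 * x * sp.exp(x**2), x))  # exp(x**2): an elementary anti-derivative
print(sp.integrate(sp.exp(x**2), x))          # expressed with erfi(x), which is not elementary
```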
An elementary function is one built from rational functions, radicals (e.g., square roots), exponential functions, logarithmic functions and trigonometric functions using addition, subtraction, multiplication, division and composition.
Now, the derivative of an elementary function is again an elementary function; however, not every elementary function has an anti-derivative that can be written as an elementary function. We state without proof that the following six elementary functions do not have anti-derivatives that are elementary functions:
Now, there is an interesting algorithm that achieves two tasks: it determines whether or not a given elementary function has an anti-derivative that is elementary and, if it does, it finds that anti-derivative.
This is called the Risch algorithm, and its description runs to over one hundred pages. You will not learn this algorithm, ever; however, it is implemented in many symbolic computation systems such as Maple.
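SymPy also ships a partial implementation (the entry point `risch_integrate` below is our assumption based on SymPy's documented internals; it handles exponential and logarithmic extensions but not all algebraic cases):

```python
import sympy as sp
from sympy.integrals.risch import risch_integrate  # partial Risch implementation (assumed API)

x = sp.symbols('x')

print(risch_integrate(2 * x * sp.exp(x**2), x))  # exp(x**2): an elementary anti-derivative exists
print(risch_integrate(sp.exp(x**2), x))          # returned unevaluated: provably non-elementary
```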
An example posted by Manuel Bronstein is that
$\frac{x}{\sqrt{x^4 + 10x^2 - 96x - 71}}$
has an anti-derivative that is an elementary function, but
$\frac{x}{\sqrt{x^4 + 10x^2 - 96x - 72}}$
does not.