You don't, really.
A programmer can often transform the infinite sum into an equivalent "closed form" expression that can be computed with a finite number of operations. A good example is an infinite geometric series:
S = a + ar + ar^2 + ar^3 + ...
That has an infinite number of terms, so a program to calculate that directly will never finish. However, someone who remembers high school algebra might know that this sum does converge to a finite value, provided that |r| < 1. Specifically:
S = a / (1 - r)
So, instead of an infinite number of multiplications and additions, you get the answer with one subtraction and one division. (...plus a couple of compares, maybe, to ensure that -1 < r < 1.)
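A minimal sketch of that closed-form idea (function name is mine, not standard):

```python
def geometric_sum(a, r):
    """Closed-form sum of a + a*r + a*r^2 + ... for |r| < 1."""
    if not -1 < r < 1:  # the couple of compares mentioned above
        raise ValueError("series diverges unless |r| < 1")
    return a / (1 - r)

# Compare with a direct partial sum of the first 30 terms of 1 + 1/2 + 1/4 + ...
partial = sum(1 * 0.5**k for k in range(30))
print(geometric_sum(1, 0.5))  # 2.0, exactly, in one division
print(partial)                # close to 2.0, after 30 multiply-adds
```

The closed form gives the exact limit; the partial sum only approaches it.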
Another approach depends on the fact that floating-point numbers carry only a fixed number of significant digits. You can just keep adding terms, and stop when the new terms are too small to affect the sum.
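A sketch of that stopping rule, again using a geometric series as the test case (the function name is my own):

```python
def sum_until_no_change(a, r):
    """Add terms a*r^k until the next term is too small to change the float sum."""
    total = 0.0
    term = a
    while total + term != total:  # the term still affects the sum
        total += term
        term *= r
    return total

print(sum_until_no_change(1.0, 0.5))  # converges to 2.0 in a finite number of steps
```

The loop terminates because once a term drops below the precision of `total`, adding it is a no-op in floating point.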
A small modification to that idea is to simply stop when the current sum is "close enough". That's really easy in an alternating series, where the sign strictly alternates between + and - and the magnitudes of the terms strictly decrease toward zero. An example is:
4 - 4/3 + 4/5 - 4/7 + 4/9 - 4/11 + ....
That infinite series converges, and the error in the partial sum is always less than the magnitude of the first term you leave out; so you can simply add terms until that bound is small enough.
That infinite series adds up to pi, by the way, and it's just about the world's slowest way to compute pi. Add up a million terms and you only get accuracy to 5 or 6 decimal places. But it is an easy example to type up.
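Here's a sketch of that alternating-series approach applied to the pi series above (the tolerance parameter `tol` is my own choice of interface):

```python
import math

def leibniz_pi(tol):
    """Sum 4 - 4/3 + 4/5 - 4/7 + ... until the next term drops below tol.
    The alternating-series bound then guarantees the error is below tol."""
    total = 0.0
    sign = 1.0
    k = 0
    while True:
        term = 4.0 / (2 * k + 1)
        if term < tol:
            break  # first omitted term < tol, so |error| < tol
        total += sign * term
        sign = -sign
        k += 1
    return total

approx = leibniz_pi(1e-5)  # takes roughly 200,000 terms
print(approx, abs(approx - math.pi))
```

Notice how many terms it takes for even five decimal places, which is exactly the slowness complained about above.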