You have some correct answers, the best one so far being anonymous for some reason. I'll throw in some history, just in case anyone is interested.
The "E notation" for real numbers began with early computers. It was certainly in use in FORTRAN, one of the earliest programming languages, and may have been used in machine language programs before that. In the 1950s, computers were still quite new and input/output devices were quite simple. Typically, a device could print digits, capital letters, and a handful of punctuation characters.
So, the capital E (for "exponent") was used as a compact way of showing the exponent of a number in "scientific notation". The "times ten to the power" part was implied.
To get printed output to line up, the decimal point and E needed to be in the same position on every line, and the ones digit of the exponents needed to line up. Since a minus sign might be needed for a negative exponent, a position was left for that. Some programs would print a space there if the exponent was positive; others would print a + sign. That's where the + and the leading 0 in 1.E+02 come from, and why the decimal point is there.
A modern programming language, with free-form input, would accept 1E2 as meaning the same thing: 1 times 10 to the 2nd power, or 100.
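To make that concrete, here's a small sketch in Python (chosen just for illustration; the original history concerns FORTRAN-era I/O, not any particular modern language). It shows free-form input accepting 1E2 as 100, and fixed-format output where the sign position and exponent digits keep columns aligned:

```python
# Free-form input: "1E2" means 1 times 10 to the 2nd power.
value = float("1E2")
print(value)  # 100.0

# Fixed-format output: the exponent always gets a sign and two
# digits, so the decimal point and E land in the same column on
# every line, just as on old line printers.
for x in (100.0, 0.01, -12345.6789):
    print("%13.6E" % x)
```

The width-13 format reserves a leading position for a possible minus sign, mirroring the column-alignment convention described above.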
Later, early electronic calculators borrowed this idea. They used 7-segment LED displays that could (barely) represent the ten digits, and could also manage to display a capital E.
Today, it's used in nearly every programming language to represent floating point data in scientific notation, and is often used online in technical forums.