not a fixed number. A pixel is the smallest unit that a dot-based image (a raster or bitmap image) can remember or show.
If you think of an image as a sheet of graph paper, each square on that paper corresponds to a pixel. You can color that square any color from your color supply (a 256-color setting allows only 256 different color values per pixel, which isn't all that good). When you color in all of the squares, you get an image. The quality of the image depends on how small each square is: how many pixels there are to a cm (or inch).
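To make the graph-paper idea concrete, here is a toy sketch in Python (the palette and grid are made up for illustration; a real 256-color image works the same way, just with 256 palette entries instead of 4):

```python
# A tiny "image" as a grid of squares (pixels).
# Each number in the grid picks one color from the palette.
palette = {0: "white", 1: "black", 2: "red", 3: "blue"}

# A 4x4 raster: 16 pixels in all.
image = [
    [0, 1, 1, 0],
    [1, 2, 2, 1],
    [1, 2, 2, 1],
    [0, 1, 1, 0],
]

for row in image:
    print(" ".join(palette[p] for p in row))
```

Each cell stores only a color value; everything else about the picture (its sharpness, how big it can be shown) comes from how many cells there are.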
If you think of a photo in a newspaper, it's often made up of tiny dots that you can see if you look at the picture with a magnifying glass. That is a low pixel density image; if you blow it up in size, it won't make a very good image.
Around 200 dots per inch is about the level where the quality of the displayed image is acceptable, where the eye will fill in the differences. Of course, the eye much prefers a higher pixel density, which is the idea behind HD displays. HD displays pack a larger number of pixels into the same area.
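The arithmetic behind that 200 dpi rule of thumb is just multiplication. A quick sketch (the function name is mine, not from any library):

```python
# Pixels required to print at a given physical size and density.
def pixels_needed(width_in, height_in, dpi=200):
    return round(width_in * dpi), round(height_in * dpi)

# A standard 4x6 inch photo print at the ~200 dpi threshold:
print(pixels_needed(6, 4))   # (1200, 800)
```

So even a modest print needs roughly a million pixels before the eye stops noticing the dots.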
Now, if your base image only has 20 pixels per cm and you show it on an HD monitor, it will look like crap, because the display cannot invent detail that was never recorded in the original image. The displayed image will be very blocky in appearance, probably with blurred edges on the blocks, as display systems tend to average neighboring values when scaling up like that.
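You can see where the blockiness comes from with the simplest possible upscaler, nearest-neighbor, which just repeats each original pixel (this is a sketch of the idea, not how any particular display does it; real hardware usually averages neighbors too, which is what blurs the block edges):

```python
# Blow an image up by an integer factor by repeating each pixel.
def upscale_nearest(image, factor):
    out = []
    for row in image:
        wide = [p for p in row for _ in range(factor)]   # repeat pixels across
        out.extend([list(wide) for _ in range(factor)])  # repeat rows down
    return out

small = [[10, 200],
         [200, 10]]
for row in upscale_nearest(small, 3):
    print(row)
```

Each original pixel becomes a 3x3 block of identical values: no new detail appears, just bigger squares.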
The same sort of thing is a concern when taking a photo and printing it or putting it on a screen. An original image with a very high pixel density can be blown up in size and retain clarity. However, the price you pay for that is a huge file, one that sucks up memory. It is a waste of space to keep a huge image file if you never show the image anywhere near its stored pixel density.
But if you take a picture, crop it, and print it out as a poster, you will need all that pixel density in the original to avoid a crappy-looking print. You are basically looking at the image through a microscope, and if the pixel density isn't really tight, it will show in the enlarged image as a bunch of little blocks with blurry edges.
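The crop-then-enlarge problem is easy to put numbers on. A hedged sketch (the figures below are illustrative, not from the text):

```python
# Effective print density: pixels available divided by inches printed.
def effective_dpi(pixel_width, print_width_in):
    return pixel_width / print_width_in

# A 4000-pixel-wide original printed as a 20-inch poster:
print(effective_dpi(4000, 20))   # 200.0 -- right at the acceptable threshold

# Crop down to a 1000-pixel-wide region, print the same 20-inch poster:
print(effective_dpi(1000, 20))   # 50.0 -- visibly blocky
```

Cropping doesn't change the size of the print, only how many pixels are left to fill it, so the effective density drops fast.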
You have to be careful when saving images, because a lot of image file types cut back on the pixel density when saving, by averaging neighboring pixel values.
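Here is a sketch of the kind of averaging that can happen: collapse each 2x2 block of pixels into one pixel holding their mean. Any detail that lived inside a block is gone for good (this is a toy illustration of the principle; real formats use more elaborate schemes):

```python
# Halve an image's resolution by averaging each 2x2 block of pixels.
def downsample_2x2(image):
    out = []
    for r in range(0, len(image) - 1, 2):
        row = []
        for c in range(0, len(image[0]) - 1, 2):
            total = (image[r][c] + image[r][c + 1] +
                     image[r + 1][c] + image[r + 1][c + 1])
            row.append(total // 4)
        out.append(row)
    return out

# A sharp checkerboard of black (0) and white (255) pixels:
sharp = [[0, 255, 0, 255],
         [255, 0, 255, 0],
         [0, 255, 0, 255],
         [255, 0, 255, 0]]
print(downsample_2x2(sharp))   # [[127, 127], [127, 127]] -- the pattern is gone
```

The checkerboard averages out to flat gray, and no amount of re-enlarging will bring the pattern back.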
The flip side is that you use a lot of extra memory to no purpose if you, say, paste a very high density image into MS Word, where Word will never display anything near the density of the original file.
There are file formats that do not store the image as an array, as a sheet of graph paper, and these tend to be better for minimizing memory use while maximizing image clarity. That is why there are so many different image file types: they each store the information in different ways.
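Vector formats such as SVG are one example of the non-graph-paper approach: the file records drawing instructions rather than a grid of pixel values, so it can be rendered crisply at any size. A minimal sketch:

```xml
<!-- A red circle stored as an instruction, not as pixels.
     However far you zoom, the edge is redrawn sharp. -->
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <circle cx="50" cy="50" r="40" fill="red"/>
</svg>
```

A raster version of that circle would need more and more pixels to stay smooth at larger sizes; the instruction version stays a few lines of text no matter how big you print it.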