What is the difference between a pixel and a cm?

One cm equals how many pixels?

2 Answers

  • Anonymous
    9 years ago
    Favorite Answer

    1 cm is a definite dimension. It's based on the SI unit, the metre, i.e. 1/100 m.

    1 pixel = 1 dot, used MAINLY in computing / printing.

    What a pixel is exactly is difficult to explain in a few lines. But in brief, almost any image or character can be made by making small dots, like pencil marks on paper. For example, you can draw a line with a pencil on paper by making a number of dots along the path of the line to be drawn.

    You can also draw a circle with pencil and paper by making dots: the more dots you plot, the better defined the circle will appear.

    A pixel is based on the same principle. Your PC/laptop screen is made up of millions of dots, and when these dots are illuminated they create patterns. The computer's aim is to illuminate the screen dots (pixels) in such a way that they represent images and characters.

    So, a computer screen with a low resolution will have fewer dots per cm.

    A screen with a higher resolution will have many more dots per cm.

    The same principle applies to printing, and to camera sensors rated in MP (megapixels).
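    To make the cm-to-pixel relationship concrete, here is a minimal sketch. It assumes the density is given in DPI (dots per inch) and that 1 inch = 2.54 cm; the 96 and 300 DPI figures below are illustrative assumptions about typical screens, not facts from the question.

```python
# There is no fixed answer to "1 cm = how many pixels?" -- it depends
# on the pixel density of the device, conventionally given in DPI
# (dots per inch). 1 inch = 2.54 cm exactly.

INCH_IN_CM = 2.54

def cm_to_pixels(cm, dpi):
    """How many pixels span `cm` centimetres at a given density."""
    return cm * dpi / INCH_IN_CM

def pixels_to_cm(px, dpi):
    """How many centimetres `px` pixels span at a given density."""
    return px * INCH_IN_CM / dpi

# The same 1 cm covers a different number of pixels on different devices:
print(cm_to_pixels(1, 96))   # ~37.8 px on an assumed low-density monitor
print(cm_to_pixels(1, 300))  # ~118.1 px on an assumed high-density print
```

    The same length in centimetres maps to more pixels as the density rises, which is exactly why the conversion has no single answer.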


    The example I gave is just to illustrate what a pixel is and to show that it is not a definite dimension. In reality it's much more complicated, because screens are composed of 4 colours!! Three of them are red, green and blue, and the 4th, which everyone omits, is the natural screen colour (usually dark grey/black) when it is off!

    With an RGB mixture you can create almost all colours. When these RGB pixels are off, they appear as BLACK! When all of them are illuminated (in the right proportion), they are WHITE!

    So... it gets a bit more complicated, but the basic principle is the same!
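    The RGB mixing described above can be sketched in a few lines. This assumes the common 8-bit convention of 256 intensity levels (0-255) per channel, which the answer does not state explicitly.

```python
# Each pixel stores red, green and blue intensities (0-255 each,
# assuming the common 8-bit-per-channel convention).

black = (0, 0, 0)        # all sub-pixels off -> the screen's natural dark colour
white = (255, 255, 255)  # all fully lit, in equal proportion -> white
yellow = (255, 255, 0)   # red + green light mix to yellow

# With 256 levels per channel, mixing gives 256^3 distinct colours:
print(256 ** 3)  # 16777216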

  • 9 years ago

    Not a fixed number. A pixel is the smallest unit that a dot-based image (a raster or bitmap image) can store or display.

    If you think of an image as a sheet of graph paper, a square on that paper corresponds to a pixel. You can color that square whatever color you have in your color supply (a 256 color setting allows only 256 different color values for a pixel, which isn't all that good). When you color in all of the squares, you get an image. The quality of the image will depend on how small each square is, i.e. how many pixels there are to a cm (or inch).
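    The graph-paper analogy translates directly to code. The grid size and the palette index chosen below are arbitrary, purely for illustration.

```python
# A raster image is just a grid of "squares" (pixels), each holding
# one color value. With a 256-color setting, each square can take
# only one of 256 palette indexes (0-255).

width, height = 4, 3
palette_size = 256

# a tiny "sheet of graph paper": every square starts at palette index 0
image = [[0 for _ in range(width)] for _ in range(height)]

image[1][2] = 17  # "color in" one square with palette entry 17 (arbitrary)

print(width * height)  # 12 squares on this sheet
print(palette_size)    # only 256 color choices per square
```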

    If you think of a photo in a newspaper, these are often made up of a number of tiny dots that you can see if you look at the pic with a magnifying glass. This is a low pixel density image; if you blow it up in size, it won't make a very good image.

    Around 200 dots per inch is about the level where the quality of the displayed image becomes acceptable, where the eye will fill in the differences. Of course, the eye much prefers a higher pixel density, which is the idea behind HD displays: HD displays have a larger number of pixels per unit area.
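    A quick sketch of "pixels per unit area". The 200 DPI figure comes from the answer above; the 5x7 inch size and the 400 DPI comparison value are just hypothetical numbers for illustration.

```python
# The same physical area at a higher density packs many more pixels:
# doubling the DPI quadruples the pixel count, since density applies
# in both directions.

def pixels_in_area(width_in, height_in, dpi):
    """Total pixel count for a width x height (inches) area at a given DPI."""
    return round(width_in * dpi) * round(height_in * dpi)

print(pixels_in_area(5, 7, 200))  # 1400000 pixels in a 5x7 inch print at 200 DPI
print(pixels_in_area(5, 7, 400))  # 5600000 -- double the density, 4x the pixels
```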

    Now, if your base image only has 20 pixels per cm, and you show it on an HD monitor, it will look like crap, because the display cannot invent values that are not recorded in the original image. The displayed image will be very blocky in appearance, probably with blurred edges to the blocks, as display systems tend to average perimeter values when scaling up like that.
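    Here is a minimal sketch of why blowing up a low-density image looks blocky. It uses nearest-neighbour upscaling (the simplest scheme, where each source pixel is just repeated); real display scalers often interpolate instead, which is what blurs the block edges.

```python
# Nearest-neighbour upscaling: each source pixel becomes a solid
# factor x factor block. The display gains pixels but no new detail.

def upscale_nearest(image, factor):
    """Repeat each pixel `factor` times in both directions."""
    out = []
    for row in image:
        wide = [px for px in row for _ in range(factor)]
        out.extend([list(wide) for _ in range(factor)])
    return out

tiny = [[0, 255],
        [255, 0]]               # a 2x2 checkerboard
big = upscale_nearest(tiny, 3)  # now 6x6 -- each pixel is a 3x3 block
for row in big:
    print(row)
```

    Each original pixel turns into a uniform 3x3 block: more pixels on screen, but exactly the same information as before.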

    The same sort of thing may be a concern when taking a photo and printing it or putting it on a screen. A source image with a very high pixel density can be blown up in size and retain clarity. However, the price you pay for that is a huge file, a file that eats up memory. It is a waste of space to keep a huge image file if you never display the image anywhere near its stored pixel density.
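    A back-of-envelope calculation shows why high pixel counts mean big files. This assumes uncompressed RGB at 3 bytes per pixel; the 4000x3000 sensor layout is a hypothetical 12 MP example, not a figure from the answer.

```python
# An uncompressed RGB image needs 3 bytes (one per channel) per pixel.

def raw_size_bytes(width_px, height_px, bytes_per_pixel=3):
    """Uncompressed size of a raster image, in bytes."""
    return width_px * height_px * bytes_per_pixel

# a hypothetical 12 MP camera image (4000 x 3000 pixels):
print(raw_size_bytes(4000, 3000) / 1_000_000)  # 36.0 megabytes, uncompressed
```

    Real files are smaller because of compression, but the scaling is the point: quadruple the pixel count and the raw data quadruples too.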

    But if you take a picture, crop it, and print it out as a poster, you will need all that pixel density in the original to avoid a crappy looking print. You are basically looking at the image through a microscope, and if the pixel density isn't really tight, it will show in the enlarged image as a bunch of little blocks with blurry edges.

    You have to be careful when saving images, because a lot of image file types cut back on the pixel density when saving, by averaging the pixel values.

    The flip side is that you use a lot of extra memory to no purpose if you, say, insert a very high density image into MS Word, where Word will never display anything near the density of the source file.

    There are file saving formats that do not remember things as an array, as a sheet of graph paper, and these tend to be better for saving a file, for minimizing memory use while maximizing image clarity. That is why there are so many different image file types: they each save the info in different ways.
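    One simple illustration of "not remembering things as an array" is run-length encoding, sketched below. This is only a toy scheme chosen to make the idea concrete; real formats like PNG and JPEG use far more sophisticated methods.

```python
# Run-length encoding stores (value, count) pairs instead of every
# pixel. For images with long runs of identical colour it is much
# smaller than the raw array.

def rle_encode(pixels):
    """Collapse a flat list of pixel values into [value, run_length] pairs."""
    runs = []
    for px in pixels:
        if runs and runs[-1][0] == px:
            runs[-1][1] += 1
        else:
            runs.append([px, 1])
    return runs

row = [255] * 90 + [0] * 10  # 100 pixels, mostly white
print(rle_encode(row))       # [[255, 90], [0, 10]] -- 2 runs instead of 100 values
```

    A mostly-blank row compresses to two pairs; a noisy row with no runs would compress badly, which is why different file types suit different kinds of images.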
