Anonymous asked in Science & Mathematics › Physics · 1 decade ago

We can't predict the decay of a given radioactively unstable atom, but we can do so for a percentage of a quantity of them. Why?

Assuming there are no differences in subatomic structure across the sample of unstable atoms, what does this fact mean for the certainty claims of science? It seems clearly to undermine the predictability-of-events standard, the reliability-of-detection-and-measurement standard, and the repeatability-of-results standard.

Update:

I asked the question because it raises issues in the philosophy of science. I have never heard radioactivity linked to the Heisenberg principle before now, though it does raise similar questions.

For those who find thinking about this stuff difficult, I understand; it took me many years of study and reflection to understand these kinds of questions and their ramifications. Oh, and one more thing: let me state for the record, to all the smart-*** kids out there who post garbage at Yahoo Answers, that a semi-retired, financially independent 54-year-old doesn't have homework to do like you do. Your lack of depth is a measure of your degree of ignorance and stupidity, both of which can be reduced if you choose to work at it in a mature manner.

Update 2:


I did take a calculus-based statistical mechanics course many years ago.

I wish people would address the issues raised by the question asked and stop psychoanalyzing everything about the asker or his motives, none of which is relevant or reliably available to you. Please grow up.

Update 3:

This is a question about radioactive substances, not about human beings and the social science issue of the predictability of individual vs group behaviors.

9 Answers

  • Thermo
    Lv 6
    1 decade ago
    Favorite Answer

    That is the law of large numbers.

    About the decay of an individual nucleus you can predict nothing.

    However, for a measurable amount of an isotope you can give its half-life.

  • 1 decade ago

    A given atom has a fixed probability of decaying in any given interval of time, which means it has a mean lifetime. With a lot of these atoms it's possible to apply statistics to find the period of time after which half of the atoms will have decayed.
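
    Here's a minimal sketch of that idea in Python; the per-step decay probability and sample size below are made up purely for illustration:

    ```python
    import math
    import random

    P_DECAY = 0.01    # made-up probability that an atom decays in one time step
    N_ATOMS = 50_000  # made-up sample size

    def lifetime():
        """Number of time steps until one atom decays -- unpredictable."""
        t = 1
        while random.random() >= P_DECAY:
            t += 1
        return t

    # Individual atoms: the lifetimes are all over the place.
    print([lifetime() for _ in range(5)])  # e.g. [12, 304, 57, 9, 181]

    # A large ensemble: the half-life emerges sharply.
    times = sorted(lifetime() for _ in range(N_ATOMS))
    measured_half_life = times[N_ATOMS // 2]        # median lifetime
    predicted = math.log(2) / P_DECAY               # ln(2) / decay constant
    print(measured_half_life, round(predicted, 1))  # both land near ~69
    ```

    The individual lifetimes scatter wildly, but the median of the big sample lands near ln(2)/λ on almost every run.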

    The uncertainty principle (which can be derived within quantum mechanics, and which has analogues for classical waves) shows us that we can never know certain pairs of properties of a particle exactly. The particles that make up an atom don't exist at one precise point, but have a probability of being found anywhere within a small region. This spread means the constituents of the nucleus can occasionally be found beyond the potential barrier that normally confines them; this is quantum tunneling, and it is what lets the nucleus decay. It's kind of complicated to explain without diagrams.
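
    To put rough numbers on the tunneling picture, here is a toy square-barrier estimate in Python; the energy, barrier height, and width are illustrative stand-ins, not real nuclear parameters:

    ```python
    import math

    # Toy square-barrier tunneling: T ~ exp(-2 * kappa * L),
    # with kappa = sqrt(2 * m * (V - E)) / hbar.
    HBAR = 1.0545718e-34     # J*s
    M_ALPHA = 6.644657e-27   # kg, alpha-particle mass
    MEV = 1.6021766e-13      # joules per MeV

    E = 5.0 * MEV            # particle energy (made up)
    V = 20.0 * MEV           # barrier height (made up)
    L = 1.0e-14              # barrier width in metres (made up)

    kappa = math.sqrt(2 * M_ALPHA * (V - E)) / HBAR
    T = math.exp(-2 * kappa * L)
    print(f"transmission probability per attempt ~ {T:.0e}")  # ~ 2e-15
    ```

    The per-attempt probability is tiny but identical for every atom, so which atom decays when is random, while the decay rate of the ensemble as a whole is fixed.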

    Science can never have 100% accurate results; that's why scientists always quote their uncertainty. In decay experiments the decayed atoms are usually "counted" in some way (with a detector and a computer interface). A statistical error can be assigned to the count: for N counts it is √N, so the relative error is √N/N = 1/√N. To minimize it, the experiment runs for a long time so that it "counts" many events, which makes the relative uncertainty in the measurement smaller and smaller.
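
    A quick illustration of that √N counting error (plain Poisson statistics; the sample counts are made up):

    ```python
    import math

    # Poisson counting: the statistical error on N counted decays is sqrt(N),
    # so the *relative* error shrinks as 1/sqrt(N) the longer you count.
    for n in (100, 10_000, 1_000_000):
        print(f"N = {n:>9,}: relative error ~ {math.sqrt(n) / n:.1%}")
    # N =       100: relative error ~ 10.0%
    # N =    10,000: relative error ~ 1.0%
    # N = 1,000,000: relative error ~ 0.1%
    ```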

    Source(s): Physics degree
  • 1 decade ago

    Due to quantum mechanics, we now know that we cannot predict exactly when a single atom will decay; we can only give a probability that a certain atom will decay. When we have a large sample with many atoms (1 gram of a substance has around a billion trillion atoms) we can be sure that close to half will decay over the half-life of the material. We will most likely not be exactly correct, possibly even off by a few million atoms, but compared to the initial amount we are essentially correct.
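
    A sketch of that claim in Python (sample sizes are arbitrary): after one half-life each atom has independently had a 50% chance of decaying, so the absolute miss grows with sample size while the relative miss shrinks.

    ```python
    import random

    # After one half-life, each atom has independently decayed with p = 0.5.
    for n in (100, 10_000, 1_000_000):
        decayed = sum(random.random() < 0.5 for _ in range(n))
        off = abs(decayed - n // 2)
        print(f"n = {n:>9,}: off from n/2 by {off} atoms "
              f"({off / n:.3%} of the sample)")
    ```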

  • Anonymous
    1 decade ago

    Because of the Uncertainty Principle. As you measure smaller and smaller events, it becomes impossible not to screw up the measurement by influencing those events. But that influence becomes diluted when you measure a whole population and then average it out. Since we influence things on the macro scale and not on the subatomic scale, this average holds as a reasonable standard of measure and ALLOWS US TO MANIPULATE EVENTS.

    The accuracy of a standard is not the measure of its validity; its usefulness is.

  • 1 decade ago

    It primarily shifts the predictability from specific events to statistical properties of those events. Repeatability is attained by making sure sample sizes are large enough to get good statistics.
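
    A sketch of that repeatability point (sample size and repetition count are arbitrary): repeated "measurements" of the fraction decayed after one half-life agree closely when the sample is large.

    ```python
    import random
    import statistics

    def fraction_decayed(n=1_000_000, p=0.5):
        """Simulated fraction of n atoms decayed after one half-life."""
        return sum(random.random() < p for _ in range(n)) / n

    runs = [fraction_decayed() for _ in range(5)]
    print([f"{r:.4f}" for r in runs])  # all within ~0.001 of 0.5
    print(f"run-to-run spread: {statistics.pstdev(runs):.1e}")
    ```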

  • beren
    Lv 7
    1 decade ago

    Take a stat mech course.

    Many macroscopic measurements can be broken down this way; they work because of the laws of probability.

  • Anonymous
    1 decade ago

    How does your question differ from any set of statistics?

    E.g., in a group of 1,000 people, we can't predict what will happen to any one person.

    But statistics can, fairly accurately, predict what the death rate will be for the group.

  • 1 decade ago

    Do your own homework... nice try...
