Does dual core need to be turned on?
I know it sounds silly, but my computer doesn't seem to be up to par. When I run LimeWire and Windows Media Player my computer gets stuck and takes time to change songs and stuff. I have the 2.2GHz 4200+ dual core. It seemed to be fine before, but lately it seems like the dual core is lacking. Help?
Dual-Core vs. Dual-Processor
Now, returning to our subject focus for a moment. It has long been obvious that dual-processing was of great benefit for users. In 2000 I bought a dual-processor server motherboard and filled it with Pentium II Xeons (and later Pentium III Xeons). That machine still, to this day, serves me adequately for nearly everything I do, despite having only a 550MHz clock speed. The reason? The two processors can work separately on isolated tasks, meaning that while one CPU is busy compiling something I'm working on, the other CPU is free to handle regular OS requests, GUI updates, my surfing, e-mailing, etc. The only time I really see any notable slowdown is when I run Java apps, but I also see a slowdown running them on a high-speed single-core machine.
Dual-processing is nice, but it does not compare to dual-core. For two separate, isolated components to work cooperatively, they need to "talk" to each other, and the same holds true for CPUs. Because a workload is physically spread across two (or more) processors, coherency must be maintained between them: it cannot be known in advance which part of memory or which I/O port either processor might access at any given moment. Since by definition either processor could touch any piece of memory at any time, the likelihood of one processor needing something the other processor has just used becomes a real consideration.
For this reason multi-processing systems implement a type of "snooping" protocol that basically asks the other processor(s) if any of the required memory locations happen to be in that chip's cache rather than in main memory. The most common implementations are MESI (Modified, Exclusive, Shared and Invalid) and MOESI (Modified, Owner, Exclusive, Shared and Invalid). These protocols represent a sequence of electrically defined "commands" that communicate data states. A chip must issue a request and wait for the responses before acting. If no other CPU is using that memory then it's good to go; if another CPU is using that memory then it must wait for it to be available.
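To make the snooping idea concrete, here is a deliberately tiny sketch of how MESI states might change for one cache line shared between two CPUs. This is my own toy model for illustration, not how any real chip is wired; real protocols handle many lines, queued bus transactions, and write-backs.

```python
# Toy MESI model for a single cache line in a two-CPU system
# (illustrative only; real coherence hardware is far more complex).

MODIFIED, EXCLUSIVE, SHARED, INVALID = "M", "E", "S", "I"

class CacheLine:
    def __init__(self):
        self.state = INVALID

    def local_read(self, other):
        # On a read miss, snoop the other CPU's cache first.
        if self.state == INVALID:
            if other.state in (MODIFIED, EXCLUSIVE, SHARED):
                # The other cache holds the line: both drop to Shared.
                other.state = SHARED
                self.state = SHARED
            else:
                # Line comes from main memory; we hold it Exclusively.
                self.state = EXCLUSIVE

    def local_write(self, other):
        # A write invalidates every other copy of the line.
        other.state = INVALID
        self.state = MODIFIED

cpu0, cpu1 = CacheLine(), CacheLine()
cpu0.local_read(cpu1)   # read miss: cpu0 becomes Exclusive
cpu1.local_read(cpu0)   # snoop hit: both become Shared
cpu0.local_write(cpu1)  # cpu0 becomes Modified, cpu1 Invalid
```

Each of those "snoop" steps is a round trip on the bus in a real system, which is exactly the waiting the paragraph above describes.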
This coherency traffic occurs on the main system bus and is the primary reason Intel suffers in performance when scaling (because Intel uses a shared bus architecture). AMD also has issues scaling to 8 processors because of the limited number of direct HyperTransport (HT) links per chip.
A two- or four-way system can find out from every other processor in only one "hop," meaning each CPU has a "direct line of sight" on this cache-coherency roadmap to every other processor. When you jump to 8-way, a request may take two hops there and two hops back to reach the processor farthest away, which greatly increases the required wait time before proceeding. NUMA (Non-Uniform Memory Access) designs can improve on that, but only if implemented properly.
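The hop counts above are just shortest paths in the link topology, which you can check with a few lines of code. The two layouts below are my own assumptions for illustration (a fully connected 4-way, and an 8-way where each CPU has three links); real 8-socket boards vary.

```python
# Worst-case hop count between any two CPUs in a link topology,
# computed by breadth-first search (illustrative topologies only).
from collections import deque

def max_hops(links, n):
    adj = {i: set() for i in range(n)}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    worst = 0
    for start in range(n):
        dist = {start: 0}
        q = deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        worst = max(worst, max(dist.values()))
    return worst

# Fully connected 4-way: every CPU one hop from every other.
four_way = [(a, b) for a in range(4) for b in range(a + 1, 4)]
print(max_hops(four_way, 4))   # 1 hop

# Hypothetical 8-way, three links per CPU (a ring plus diagonals):
# the farthest CPU is now two hops away.
eight_way = [(i, (i + 1) % 8) for i in range(8)] + \
            [(i, i + 4) for i in range(4)]
print(max_hops(eight_way, 8))  # 2 hops
```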
In a dual-processor configuration there are typically inches between the processors. Even with signals traveling at nearly the speed of light over a high-speed interface, those transactions take time.
Switching to dual-core, things become a great deal simpler. It is my understanding that today's implementations are little more than bolt-on designs, meaning no real specialty circuitry has been created to greatly enhance communication between the two cores inside the physical processor package. They still operate over HT links; those links just happen to be very close to one another (which speeds things up a bit). Were they to implement a shared L2 or L3 cache, or run the internal inter-core links at much higher frequencies (double the current implementation), we would see dual-core performance go up notably. But that kind of development costs money, and chipmakers are in this to make money, not to give us the best solution. Can't really fault them for that, however. :)
There really are no two ways about it: if you have any intention of getting a new system, you're going to want to go with some kind of dual system. Whether you go with a single-socket, dual-core system or a dual-socket, single-core system is up to you. With a dual-socket, single-core system you have the potential of upgrading in the future to dual-core chips, making your two-way system suddenly become a four-way system. That may interest some folks and could, potentially, nearly double your system's performance in certain apps.
Benchmarks have shown us that dual-core is better than dual-processor. Economics have shown us that dual-core is cheaper than dual-processor. And user experience has shown us that even a somewhat slower-clocked (and cheaper) dual-core greatly exceeds even a fast single-core system.
Multi-cores are the way of the future. We've heard Intel talk about "hundreds of cores" on a single CPU. We have Cell technology allowing many, many specialized cores working in cooperation to process data. In short, we have hints of what's to come, but the ultimate choice will be one that evolves practically through economics and usability. As wonderful as some products are from a design point of view, they don't always have great utility in the real world (consider Itanium and its original roadmaps).
Other than getting your BIOS to recognize your CPU, there is no update you need in order to use a dual-core CPU.
The dual-core should speed things up, but remember that not everything has been written to take advantage of the dual-core architecture, so some programs will not be faster and will run the same as before.
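To see why a program has to be written for dual-core, here is a minimal sketch of the splitting pattern such programs use: the work is divided into chunks that can be handed to separate workers. The names (partial_sum, parallel_sum) are my own illustration, not from any specific program.

```python
# Sketch of splitting a CPU-bound job across workers, one per core.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=2):
    # One chunk per worker, so each core can take a share of the work.
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

# Note: CPython's interpreter lock keeps threads from running Python
# bytecode truly in parallel, so real speedups for CPU-bound work use
# processes or native threads; the point here is only the splitting
# pattern that a single-threaded program lacks.
print(parallel_sum(1_000_000))
```

A program written as one sequential loop has nothing for the second core to do, which is why it runs the same on one core or two.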