Why do all cores in the CPU always have the same speed?

If you compare specifications when shopping for a new CPU, you may notice that all of its cores run at the same speed. Today's article explains why.

In the Q&A section of the SuperUser site, reader Jamie posted a question asking why all CPU cores run at the same speed.

When buying a new computer, you choose a processor based on the kind of work you expect to do. Gaming tends to favor fast single-core performance, while video editing benefits from many cores. Yet on the market, all the cores within a given CPU seem to run at the same speed; the main differences between models are core and thread counts.

For example:

  1. Intel Core i5-7600K, base frequency 3.80 GHz, 4 cores, 4 threads.
  2. Intel Core i7-7700K, base frequency 4.20 GHz, 4 cores, 8 threads.
  3. AMD Ryzen 5 1600X, base frequency 3.60 GHz, 6 cores, 12 threads.
  4. AMD Ryzen 7 1800X, base frequency 3.60 GHz, 8 cores, 16 threads.

Why do they keep adding more cores when every core runs at the same clock speed? Why aren't there variants with different clock speeds, for example two big cores and many small cores?

Instead of four 4.0 GHz cores (4 x 4.0 GHz, 16 GHz in total), why not two 4.0 GHz cores plus four 2.0 GHz cores (2 x 4.0 GHz + 4 x 2.0 GHz, also 16 GHz in total)? Wouldn't the second option be equally good for single-threaded workloads, but potentially better for multi-threaded workloads?

I'm asking this as a general question, not specifically about the CPUs mentioned above or about any particular workload.
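To make the asker's arithmetic concrete before getting to the answer, here is a tiny Python sketch. It is purely illustrative and assumes the naive premise that clock speed alone determines per-core throughput; under that assumption the two layouts tie for both fully serial and fully parallel work, so the "total GHz" figure by itself doesn't settle the question, and the answer below explains what actually matters.

```python
# Toy model only: treat clock speed as a direct proxy for per-core throughput,
# and ignore power, scheduling, and instruction-set differences entirely.
def runtime(work, serial_fraction, core_speeds_ghz):
    serial = work * serial_fraction / max(core_speeds_ghz)          # serial part runs on the fastest core
    parallel = work * (1 - serial_fraction) / sum(core_speeds_ghz)  # parallel part spreads over all cores
    return serial + parallel

homogeneous   = [4.0] * 4              # 4 x 4.0 GHz, "16 GHz" total
heterogeneous = [4.0] * 2 + [2.0] * 4  # 2 x 4.0 + 4 x 2.0 GHz, also "16 GHz" total

for sf in (0.0, 0.5, 1.0):             # fully parallel, mixed, fully serial
    print(f"serial fraction {sf}: "
          f"4x4.0 GHz -> {runtime(100, sf, homogeneous):.1f}, "
          f"2x4.0+4x2.0 GHz -> {runtime(100, sf, heterogeneous):.1f}")
```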


Why do CPU cores have the same speed?

SuperUser contributor bwDraco provided the answer:

This is known as heterogeneous multi-processing (HMP) and is widely adopted on mobile devices. In ARM devices using big.LITTLE, the processor contains cores with different performance and power characteristics: some cores run fast but draw a lot of power (faster architectures and/or higher clock speeds), while others are power-efficient but slower (slower architectures and/or lower clock speeds). This is useful because power consumption tends to rise disproportionately once you push performance past a certain point. The idea is to deliver performance when you need it and stretch battery life when you don't.
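As a quick way to see this split in practice, here is a minimal sketch, assuming a Linux system that exposes cpufreq data through sysfs (as ARM big.LITTLE phones and most PCs do). On a big.LITTLE device the two clusters report different maximum frequencies, while on a typical desktop CPU every core reports the same value.

```python
# Print each logical CPU's advertised maximum frequency from sysfs.
from pathlib import Path

cpus = Path("/sys/devices/system/cpu").glob("cpu[0-9]*")
for cpu in sorted(cpus, key=lambda p: int(p.name[3:])):
    freq_file = cpu / "cpufreq" / "cpuinfo_max_freq"
    if freq_file.exists():
        max_khz = int(freq_file.read_text())
        print(f"{cpu.name}: max {max_khz / 1_000_000:.2f} GHz")
```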

On the desktop, power consumption is much less of a constraint, so HMP is not really needed there. Most applications expect every core to behave the same way, and scheduling processes for an HMP system is far more complex than for a symmetric multi-processing (SMP) system. (Technically, Windows does support HMP, but mainly for mobile devices built on ARM big.LITTLE.)
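To illustrate what placing work on specific cores looks like from the application side, here is a minimal, Linux-only sketch using CPU affinity; treating cores 0 and 1 as the "big" cores is purely a hypothetical example. Under SMP the scheduler can put any thread on any core precisely because they all behave the same, which is the kind of complexity HMP reintroduces.

```python
# Linux-only sketch: restrict this process to a chosen set of cores.
import os

print("Allowed cores before:", os.sched_getaffinity(0))

# Pretend cores 0 and 1 are the "big" cores and pin this process to them.
os.sched_setaffinity(0, {0, 1})
print("Allowed cores after: ", os.sched_getaffinity(0))
```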

Today's desktop and laptop processors are not so thermally or power constrained that some cores would need to run faster than others, even in short bursts. We have essentially hit a wall on how fast an individual core can go, so replacing some cores with slower ones would not let the remaining cores run any faster.

While there are some desktop processors that allow one or two cores to run faster than the rest, this capability is currently limited to certain high-end Intel processors (Turbo Boost Max Technology 3.0), and it yields only a small performance gain on those faster cores.

Although it is certainly possible to design a traditional x86 processor with both large, fast cores and smaller, slower cores in order to optimize for heavily threaded workloads, this would add considerable complexity to the processor design, and applications would struggle to support it properly.

Take a hypothetical processor with two fast Kaby Lake (7th-generation Core) cores and eight slow Goldmont (Atom) cores. You would have a total of 10 cores, and heavily threaded workloads optimized for such a processor could see better performance and efficiency than a conventional quad-core Kaby Lake chip. However, the different types of cores have very different performance levels, and the slow cores do not even support some of the instructions the fast cores do, such as AVX. (ARM avoids this problem by requiring both the 'big' and 'LITTLE' cores to support the same instruction set.)
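That instruction-set gap is something software would have to probe for at runtime. Here is a minimal sketch, assuming a Linux system, that reads /proc/cpuinfo and reports AVX support per logical CPU: on today's x86 processors every core reports the same flags, whereas on the hypothetical Kaby Lake plus Goldmont mix above, the Atom-class cores would be missing "avx".

```python
# Check the feature flags reported by each logical CPU in /proc/cpuinfo.
cpu_flags = {}
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("processor"):
            cpu = int(line.split(":")[1])
        elif line.startswith("flags"):
            cpu_flags[cpu] = set(line.split(":")[1].split())

for cpu, flags in sorted(cpu_flags.items()):
    print(f"cpu{cpu}: AVX {'yes' if 'avx' in flags else 'no'}")
```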

Most multi-threaded Windows applications assume that every core has the same, or nearly the same, level of performance and can execute the same instructions, so this kind of asymmetry is likely to result in less-than-ideal performance, and perhaps even crashes if the application uses instructions the slow cores do not support. Even if Intel modified the slow cores to add those instructions, it would not solve the broader problem of software support for heterogeneous processors.

A different approach to application design, closer to what you are probably thinking about in your question, is to use the GPU to accelerate the highly parallel portions of an application. This can be done through APIs such as OpenCL and CUDA. As a single-chip solution, AMD promotes hardware support for GPU acceleration in its APUs, which combine a traditional CPU and a high-performance integrated GPU on the same chip, under the banner of HSA (Heterogeneous System Architecture), though this has not seen much uptake outside of a few specialized applications.
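To give a flavor of what offloading parallel work through such an API looks like, here is a minimal vector-addition sketch using the pyopencl bindings for OpenCL. This is only an illustration under assumptions of mine: it requires the pyopencl package plus a working OpenCL driver, and runs on whatever OpenCL device (GPU or CPU) is available.

```python
import numpy as np
import pyopencl as cl

# Create a context and command queue on an available OpenCL device.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The highly parallel part: one work-item per array element.
program = cl.Program(ctx, """
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.vec_add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
print("max error:", np.abs(result - (a + b)).max())
```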
