What is Hyper-Threading and how does it work?

At first, we had single-core CPUs. Each ran at a certain clock speed and could deliver only as much performance as that one core allowed. Then came CPUs with multiple cores, where every individual core could execute work independently. This significantly increased the power of a CPU and thereby the overall performance of the computing device. But there is always a push for something better, and multithreading alone brought only modest gains – which is where Hyper-Threading came in. Intel first introduced it in 2002 with its Xeon processors. With Hyper-Threading, a core's execution resources are kept busy with work for more of the time.


After debuting on Xeon, the technology made its way to consumer processors with the Pentium 4. It has since appeared in Intel’s Itanium, Atom, and Core ‘i’ series of processors.

What is Hyper-Threading?

The idea is to make the latency involved when a core switches from one task to another negligible, so that each core can keep processing work instead of sitting idle while it waits.

With Hyper-Threading, Intel aims to improve the throughput of each physical core. A single core presents itself to the operating system as two logical processors and interleaves instructions from two threads, so that while one thread is stalled – for example, waiting on memory – the other can use the core’s idle execution units. An individual thread does not run any faster; the gain is that the core as a whole wastes fewer cycles, so a batch of tasks finishes sooner.

It directly takes advantage of the superscalar architecture of modern cores, in which multiple instructions operating on separate data can be issued in the same cycle by a single core. For this to work, the operating system must cooperate too: it has to support SMT, or simultaneous multithreading, so that it can schedule threads onto the logical processors.

Also, according to Intel, if your operating system does not support this functionality, you should simply disable Hyper-Threading.
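Whether SMT is active can also be checked from software. On Linux, for example, each entry in /proc/cpuinfo reports both `siblings` (logical CPUs per package) and `cpu cores` (physical cores per package); when siblings exceed cores, Hyper-Threading is on. Here is a minimal sketch in Python – it parses an embedded sample string so it runs anywhere, but on a real Linux machine you would read /proc/cpuinfo itself:

```python
# Sketch: detect SMT by comparing logical siblings to physical cores,
# as reported in /proc/cpuinfo on Linux. A sample snippet is embedded
# so the example is self-contained; on a real system you would use
# open("/proc/cpuinfo").read() instead.

SAMPLE_CPUINFO = """\
processor : 0
siblings  : 8
cpu cores : 4

processor : 1
siblings  : 8
cpu cores : 4
"""

def smt_active(cpuinfo_text: str) -> bool:
    """Return True if logical CPUs (siblings) outnumber physical cores."""
    siblings = cores = None
    for line in cpuinfo_text.splitlines():
        if ":" not in line:
            continue
        key, value = (part.strip() for part in line.split(":", 1))
        if key == "siblings":
            siblings = int(value)
        elif key == "cpu cores":
            cores = int(value)
    return siblings is not None and cores is not None and siblings > cores

print(smt_active(SAMPLE_CPUINFO))  # True: 8 logical CPUs on 4 physical cores
```

On recent Linux kernels, /sys/devices/system/cpu/smt/active gives the same answer directly (it reads 1 when SMT is on), and firmware setup screens usually expose a toggle for disabling it.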

According to Intel, some of the advantages of Hyper-Threading are:

  1. Run demanding applications simultaneously while maintaining system responsiveness
  2. Keep systems protected, efficient, and manageable while minimizing the impact on productivity
  3. Provide headroom for future business growth and new solution capabilities

Summing up, imagine a machine that packs boxes fed to it by a conveyor belt. After packing one box, the machine has to wait until the belt delivers the next one. If we add a second conveyor belt that feeds the machine while the first belt fetches another box, the machine spends less time idle and more boxes get packed per hour. This is what Hyper-Threading does with a single CPU core: a second stream of work fills the gaps and keeps its execution units busy.
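The conveyor-belt idea can be turned into a deliberately simplified toy simulation (not how real hardware schedules instructions): model each thread as a string where `X` is a ready instruction and `.` is a stall cycle, and let the core execute one ready instruction per cycle from whichever thread has one:

```python
# Toy model of SMT latency hiding (illustrative only, not real hardware).
# Each thread is a sequence of tokens: 'X' = instruction ready to execute,
# '.' = a stall cycle (e.g. the thread is waiting on memory).

def run(thread_patterns):
    """Simulate one core running the given threads; return (cycles, busy).

    Each cycle, every thread whose next token is a stall lets that stall
    elapse; the core executes the next instruction of the first thread
    that has one ready. Ready threads not picked simply wait.
    """
    threads = [list(t) for t in thread_patterns]
    cycles = busy = 0
    while any(threads):
        cycles += 1
        executed = False
        for t in threads:
            if not t:
                continue
            if t[0] == '.':
                t.pop(0)            # stall cycle passes with time
            elif not executed:
                t.pop(0)            # core executes this instruction
                busy += 1
                executed = True
            # else: instruction ready, but the core is busy this cycle
    return cycles, busy

print(run(["X..X..X"]))             # (7, 3): alone, 3 of 7 cycles do work
print(run(["X..X..X", "X..X..X"]))  # (8, 6): a 2nd thread fills the stalls
```

With one thread, the core is busy 3 cycles out of 7; running the two threads back to back without SMT would take 14 cycles, while interleaving them finishes the same 6 instructions in 8 cycles. It also hints at the commenters’ caveat below: if both threads stall or compete at the same moments, the filled-in cycles shrink and the benefit fades.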

NOTE: The article has been reviewed & edited on 28th December 2018.

Ayush has been a Windows enthusiast since the day he got his first PC with Windows 98SE. He has been an active Windows Insider since Day 1 and is now a Windows Insider MVP. He has been testing pre-release services on his Windows 10 PC, Lumia, and Android devices.


  1. Yoann

    This article is largely false.

    SMT does not allow for “multiple tasks to execute simultaneously”; it allows one core to process exactly two threads using the same resources. To maintain the highway analogy, it’s like having two on-ramps for the same one-lane highway. SMT is not technically a form of parallelism, but rather of concurrency.

    SMT bringing performance to an “all-time high” is quite a stretch and it absolutely does not “[…] bring down the execution time of a particular task”. It certainly can help crunch through multiple threads at a faster pace, but any given thread will be running at the same speed or slower than it would without SMT. It can even slow the overall pace of your whole system if two threads are overly competing over the same resources on the same core. Spin locks without a slow down mechanism can cause a lot of trouble.

    Also, SMT does not in any way keep a system secure; in fact, it’s rather notorious for introducing side-channel vulnerabilities. The threads share resources and clever methods can leak information about your “core buddy” that can be exploited.

  2. Arthur

    I came here to say the same thing. Horrible article that seems to come straight out of a 2003 CNet article….

  3. Yes, this article is horrible and gets more wrong than it gets right. Although, SMT does indeed let “multiple tasks execute simultaneously” which is indeed parallel execution. Each core has many execution units and they are shared among the two threads simultaneously. One ALU might be doing an ADD for one thread, while a different one might be doing a SUB for another. This creates more independent instructions and allows the core to keep the execution units more busy.

  4. Ayush Vij

Thanks for the feedback. The article has been reviewed & edited.

  5. Yoann

I took issue with strictly defining it as parallelism because the gains are completely dependent on how well the threads interact. If the threads are overly competing, there will be no parallelism, and it will in fact only be concurrent. In the case of independent cores, instructions are truly independent (to be fair, the memory system is typically not). I think we can agree that the parallelism gains from SMT are pretty insignificant compared to the gains from adding a full-fledged core to the system.

    However, I did look it up since and according to the definitions, a true SMT implementation is pretty unambiguously a form of parallelism.
