Sam Gleske on 20 Mar 2013 12:00:52 -0700



Re: [PLUG] Dumb question about System Monitor



On Wed, Mar 20, 2013 at 12:35 PM, Rich Freeman <r-plug@thefreemanclan.net> wrote:
> Back in the P4 days the Intel chips were developing much longer
> pipelines than AMD, and that made the cost of a failed branch
> prediction much higher.  Their solution was hyperthreading - have more
> instructions staged to go on the CPU, and if for whatever reason
> the CPU had to bail and restart on one of them it could instead
> process the other one while the pipeline refilled.  That keeps
> the chip busier, and reduces the impact of failed branch prediction.
>
> AMD had shorter pipelines, so they didn't bother with any of that.
>
> I'm not sure how long the pipelines are on modern chips, but
> whether for that reason or others Intel has kept the feature.  No
> doubt implementing it costs them transistors, but their engineers have
> obviously decided that's the best use of them right now.
>
> For most purposes you can just pretend you have extra cores and not
> worry about it.  Maybe in some realtime situations they might cause
> issues (those cores are not truly independent), and perhaps Linux has
> some way to disable hyperthreading on some/all cores.  However, I'd
> expect your overall performance to drop if you did that, though your
> performance on a single thread might pick up (but when that thread
> stalls the only way for the CPU to do other work is for the OS to
> perform a full context switch).

Threads can run simultaneously just like separate cores.  With hyperthreading, some components within a core are shared between the two threads, so occasionally one thread is blocked waiting for a component the other is using; as soon as that component is free, the next thread that needs it gets it.  Depending on the instructions the two threads are running, they may not contend for any shared component at all, in which case both run simultaneously and unhindered.  Advances in hazard detection and data forwarding have helped make this possible, and pipelining helps too: two instructions that touch the same shared component may need it at different stages of the pipeline, so there may be no resource conflict at all.
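To make the contention point concrete, here is a toy simulation of two instruction streams sharing a single execution port.  This is purely an illustrative sketch of the scheduling idea described above, not how any real core works; the streams, the single shared port, and the round-robin tie-breaking are all invented for the example.

```python
# Toy model of two hardware threads sharing one execution port.
# Each "instruction" is a boolean: True means it needs the single
# shared port, False means it runs on a private (unshared) unit.

def run_cycles(stream_a, stream_b):
    """Count the cycles needed to retire two instruction streams.

    When both threads need the shared port in the same cycle, one
    of them stalls; otherwise both make progress simultaneously.
    """
    a, b = 0, 0          # next-instruction index per thread
    cycles = 0
    prefer_a = True      # round-robin tie-breaking for the shared port
    while a < len(stream_a) or b < len(stream_b):
        need_a = a < len(stream_a) and stream_a[a]
        need_b = b < len(stream_b) and stream_b[b]
        if need_a and need_b:
            # Conflict: only one thread gets the shared port this cycle.
            if prefer_a:
                a += 1
            else:
                b += 1
            prefer_a = not prefer_a
        else:
            # No conflict: both threads advance if they have work left.
            if a < len(stream_a):
                a += 1
            if b < len(stream_b):
                b += 1
        cycles += 1
    return cycles

# Streams that never touch the shared port finish fully in parallel...
print(run_cycles([False] * 4, [False] * 4))   # 4 cycles
# ...while streams that always need it are completely serialized.
print(run_cycles([True] * 4, [True] * 4))     # 8 cycles
```

Real mixes of instructions fall somewhere in between, which is why hyperthreading usually helps but rarely doubles throughput.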

For modern Intel architectures the pipelines are around 40-50 stages.  One of my processor architecture professors said so, but I don't have an official documented source for that figure.

So yes: detecting hyperthreading as two virtual cores is normal, and this behavior shouldn't change, because the two threads are capable of acting independently.  If that weren't the case, it wouldn't be hyperthreading; everything would be serialized on the core.  Here "virtual cores" means two cores sharing some components on the processor and not others.  Depending on how expensive the processor is, it's possible the threads share no components at all and everything is duplicated inside the single core.  What separates a dual-core processor from a hyperthreaded single core is that in the dual-core case the components are physically separated into two cores on the die.  Functionally they're the same or similar.
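On Linux you can see this distinction in /proc/cpuinfo: the "cpu cores" field reports physical cores per package and "siblings" reports hardware threads, so siblings greater than cpu cores implies hyperthreading.  Here is a minimal sketch that parses that format; the sample text is made up for illustration (on a real box you'd read the file itself).

```python
# Sketch: detect hyperthreading from /proc/cpuinfo-style text.
# The field names ("siblings", "cpu cores") are real Linux fields;
# the SAMPLE text below is invented for the example.

def hyperthreading_enabled(cpuinfo_text):
    """Return True if the first processor entry reports more
    hardware threads (siblings) than physical cores."""
    siblings = cores = None
    for line in cpuinfo_text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key = key.strip()
        if key == "siblings" and siblings is None:
            siblings = int(value)
        elif key == "cpu cores" and cores is None:
            cores = int(value)
    if siblings is None or cores is None:
        raise ValueError("siblings / cpu cores fields not found")
    return siblings > cores

SAMPLE = """\
processor : 0
model name : Example CPU
siblings : 4
cpu cores : 2
"""

print(hyperthreading_enabled(SAMPLE))   # True: 4 threads on 2 cores
```

On an actual system you would pass `open("/proc/cpuinfo").read()` instead of the sample string.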

SAM
___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug