Talk:Overclocking/Archive 2
From Wikipedia, the free encyclopedia
| This is an archive of past discussions about Overclocking. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
| Archive 1 | Archive 2 |
Some quantitative info needed in intro.
This article needs to make sense to people who know little about computers. Technical information is fine, but some general things need to be said. In particular, the introduction needs to give an idea of what can be gained (and lost!) by overclocking: how much is speed increased, numerically? I've added a short paragraph and mentioned 20% as a ballpark figure for the speed increase. Without any figure, a reader might think that multiplying the speed by a large factor is possible simply by increasing the clock speed. My contribution could be improved, with more detailed discussion in the body and a rough idea of what's to be gained in the intro.
Pol098 (talk) 16:28, 28 December 2011 (UTC)
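To make the ballpark figure above concrete, here is a throwaway calculation (my own illustration, not from the thread) of what a given percentage clock increase means in wall-clock terms for an ideal, purely CPU-bound task:

```python
# Hypothetical numbers for illustration only. Assumes the best case:
# runtime is inversely proportional to clock frequency, with no other
# bottlenecks (memory, disk, GPU). Real gains are usually smaller.

def overclock_speedup(base_ghz: float, oc_ghz: float) -> float:
    """Ideal speedup factor from raising the clock from base_ghz to oc_ghz."""
    return oc_ghz / base_ghz

def new_runtime(seconds: float, base_ghz: float, oc_ghz: float) -> float:
    """Runtime after overclocking, under the same linear-scaling assumption."""
    return seconds / overclock_speedup(base_ghz, oc_ghz)

# A 20% overclock (e.g. 3.0 -> 3.6 GHz) shortens a 60-second
# CPU-bound job to about 50 seconds at best.
print(overclock_speedup(3.0, 3.6))   # ~1.2
print(new_runtime(60.0, 3.0, 3.6))   # ~50.0
```

This also shows why the gain is hard to notice in everyday use: a 20% clock bump saves only about a sixth of the runtime even in the ideal case.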
- Good point on targeting ballpark figures, but on Intel's current K-SKUs, 20% is blowing the bell curve. The average for a 4790K is 4.6 GHz; I'm just outside the documented average at 4.7 GHz. On most Haswell / Devil's Canyon parts, 20% is too far outside the curve to call an average. The averages for the Broadwell C-SKUs may never be known, because nobody in their right mind is going to downgrade their system by buying one. Skylake is too new, and so far shows less overclockability (24/7 stable) than Haswell / Devil's Canyon. The good news with Skylake-K is that there is no FIVR on the substrate, making things easier for those of us who prefer razors to remove the gap caused by the glue. Because Intel reduced the thickness of the substrate, the vise method popular from Ivy Bridge through Broadwell will damage the LGA substrate on Skylake. Go to YouTube and watch some tutorials on the razor method of delidding if you want to reduce temps by an average of about 15 °C (I pretty much hit the mean there). The die hasn't touched the IHS in LGA115x chips since Sandy Bridge; the problem is the black glue used to attach the IHS. The "improved TIM" is still pretty thick in Devil's Canyon, though not just gobbed in there as with Haswell. On average, though, nobody can really figure out what thermal improvement the "new TIM" brings. My theory is that it was a marketing gimmick with no basis in reality, and I base this on the qualitative testing of thousands of people, all documenting their work, of which I am one. The cool thing is that Ga will actually alloy with Cu, and thus give better thermal transfer than even solder.
Idiots complaining about their LM TIM not working any more, and doing strange things at high temps, are the kind of idiots this article stereotypes: they actually crossed the phase-change point of Ga and Ni, which is well above the 100 °C maximum of the chips, possibly as high as the chip-level automatic shutdown at 130-something °C. Yes, metallurgy plays a role in serious overclocking as well, thanks to Intel's unwillingness to fix their problem, even if fixing it means adding more cores to make mainstream dies large enough to solder again. A good reference would be the ASM Handbook, Volume Three (Phase Diagrams), along with some basic searches of recent (the last decade) papers on the Ga-Ni binary. At about 112C (or is it 121C?), IIRC, there is a phase change accompanied by a volume change, and that is what is being described by the people complaining about a condition with Ga-based LM TIM that nobody doing things right ever actually encounters; combined with a character judgement of the ones it happens to, that is far more telling of the cause. One side observation: the small number of people actually affected, and the fact that they do fit the stereotype this article leaves the impression of, further underscores that this article tries to make the majority of overclockers look like what is in reality only a tiny minority who do it with no common sense or knowledge. 69.49.217.158 (talk) 15:30, 23 October 2015 (UTC)
The CPU frequency limit
I think the difference should be noted between the highest complete-CPU speed (~8 GHz, by AMD) and the highest "chip" speed (~500 GHz) as in the article. 72.152.44.48 (talk) 19:03, 9 November 2011 (UTC)
- Actually, neither really has that big a place here. Highest-CPU-speed records change quite often. Unless it's in table form sorted by date and CPU, there is no point in including it without a caveat that it was set on a certain date, that these records flip-flop between manufacturers, and that new records are being set regularly. The way it's written makes it look like standard AMD fanboy tripe. I suspect that marketing strategy originated in their Texas offices; such dishonesty could only come from Texas. 69.49.217.158 (talk) 16:07, 23 October 2015 (UTC)
The OverclockP4.jpg picture is a hoax
The picture is placed under the "Incorrectly performed overclocking" section, and shows a Pentium 4 CPU that has clearly been subject to physical abuse: all the pins have been bent and there are scratches, cracks, and dents all over the chip and heat shield. It looks like the uploader has simply beaten the chip with a hammer and set it on fire. I suggest removing it. Tachylatus (talk) 11:53, 19 July 2011 (UTC)
I am no longer in doubt that this is a hoax, and am therefore removing the picture. There is ample evidence of deliberate abuse:
- A hole has been drilled through the heat shield in a place where it is not even in contact with the CPU core. Traces of burnt material on the inside of the heat shield show that heat entered through the hole in order to burn the chip beneath it.
- The pins are bent and scratched in a way that strongly suggests physical abuse has taken place.
- The large crack seen in the top-left picture, along with the scratches, paint marks, and the shape of the dents on the heat shield in the top-middle picture, supports the theory that a hammer or similar black-painted object was used to damage the CPU.
- Even if the overclocking resulted in overheating at several hundred degrees centigrade, it does not explain the bending of the whole CPU chip, which is normally held in place in the motherboard socket.
- There are marks/cracks on one of the outer edges of the processor that can only originate from externally applied force.
Tachylatus (talk) 13:42, 19 July 2011 (UTC)
- Such dishonesty could only have come from a christian; this one is so dishonest, I suspect Texastani Pentecostals. 69.49.217.158 (talk) 16:13, 23 October 2015 (UTC)
Real world benefit
Article says:
> It is generally accepted that, even for computationally-heavy tasks, clock rate increases of less than ten percent are difficult to discern. For example, when playing video games, it is difficult to discern an increase from 60 to 66 frames per second (FPS) without the aid of an on-screen frame counter.
This is missing the point. I once used a 166 MHz system that couldn't keep up with its tasks, so the end user experienced very frequent and problematic delays in response in one app. Clocking it up a notch solved the problem completely, because the extra performance enabled the system to keep up with its software demands pretty much all the time, so the delays vanished.
Tabby (talk) 01:23, 3 March 2011 (UTC)
- Yet more dishonesty from the author of the article. It is in computationally intensive tasks that overclocking shines. If whoever wrote that was in any way making an observation based on personal experience, that person was running non-optimized software anyway, and probably had too little RAM for the task, forcing the VM into thrashing waits. Non-I/O-bound, computationally intensive applications will scale in speed in direct proportion to the clock speed, as long as the code is optimized properly and doesn't involve a lot of cache misses, unaligned accesses, or any of the other "don'ts". Oh, and because of the sheer amount of consumer-oriented, poorly optimized applications, Skylake reduces the unaligned-access penalty down to, I think, three or five cycles, down from hundreds of cycles in previous generations. Funny how long it took for shoddy code to force Intel to make a change that should have been made right after the i486. — Preceding unsigned comment added by 69.49.217.158 (talk) 16:26, 23 October 2015 (UTC)
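The scaling claim in the comment above can be put in a toy model (mine, with made-up fractions, not a benchmark): only the core-bound part of a workload speeds up with clock, so how much of the clock increase you actually see depends on how much time is spent waiting on memory or I/O.

```python
# Simple Amdahl-style sketch, assuming the core-bound fraction scales
# 1:1 with clock and everything else (memory, I/O waits) does not.
# The fractions below are illustrative guesses, not measurements.

def clock_scaling_speedup(core_bound_fraction: float, clock_ratio: float) -> float:
    """Overall speedup when only the core-bound portion scales with clock.

    core_bound_fraction: share of runtime limited by the core (0.0 to 1.0)
    clock_ratio: overclocked frequency divided by stock frequency
    """
    other = 1.0 - core_bound_fraction
    return 1.0 / (other + core_bound_fraction / clock_ratio)

# A fully core-bound, cache-friendly kernel tracks the clock directly:
print(clock_scaling_speedup(1.0, 1.10))   # ~1.10 for a 10% overclock
# A workload spending 40% of its time waiting on memory gains less:
print(clock_scaling_speedup(0.6, 1.10))   # noticeably under 1.10
```

This is consistent with both sides of the argument: compute-bound code scales nearly 1:1, while mixed workloads see diluted gains.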
The article seems about fine to me, in that on a middle-range PC you will not detect any noticeable change with the naked eye. For an example like the one you provided, chances are your PC was borderline inadequate to begin with, and the overclocking was just the minimum needed to push it over the finish line, so to say. In my local area the general gaming rule of thumb is to upgrade before you overclock. 152.91.9.153 (talk) 10:43, 10 June 2011 (UTC)
Culture and Media
Some discussion of the culture of overclocking needs to be presented.
Why, for example, do computer enthusiast web sites and periodicals focus so much attention on overclocking to the exclusion of other modes of performance enhancement?
- Why do you wash your car instead of buying a new one? Why do you give your car a tuneup instead of buying a new car? Why do you put new headlights in your car, or better fuel injectors, instead of buying a new car? Why do you buy new stereo speakers instead of a whole new home entertainment system? It's cheaper to leverage what you have. 69.49.217.158 (talk) 08:48, 24 October 2015 (UTC)
Why, for example, did EISA and PCI-X bus architectures, Xeon x86 systems with large cache and multiprocessing support, and faster memory architectures, get drowned out in media by focusing on overclocking?
- Because EISA and PCI-X were inefficient and highly limited in what they could do, and were essentially a dead-end path. The entire enterprise-systems market uses PCIe now, you clueless newbie. 69.49.217.158 (talk) 08:48, 24 October 2015 (UTC)
Why is so much attention focused on overclocking the CPU on a cheap system, when there are other factors which can improve total system performance? Why isn't more attention focused on building a workstation or server with robust and fast hardware, instead of a cheap PC which is then overclocked?
- Cheap? AMD fanboys, maybe, but my Intel system is pushing $3000. Workstations and servers usually don't have the speed or feature set, and tend to be far more conservative in what they can do. If an enthusiast needs 18 cores and 36 threads and has $5000 in his pocket, he can go out and get a Xeon to do the job, but the clock rate is just over 2 GHz, which is rather low for doing anything other than highly threaded workloads efficiently. I'll stick with my quad-core 4.7 GHz i7-4790K, thank you very much; per core, it kicks ass on anything in the server market at the time you wrote this tripe. Oh, and if you have a Quadro K6000 you can toss me, I'll take it, but in the meantime my 980 Ti will do the trick better and faster, including double precision. Works great with SolidWorks. 69.49.217.158 (talk) 08:48, 24 October 2015 (UTC)
Why do manufacturers market overclocking capabilities instead of emphasizing other desirable features?
- Because in our segment we demand high-grade components, audiophile-level sound, and more than three power phases. (Tell me how long your non-OC chip is going to last on the nasty power solutions of commodity boards! Ever look at the power output of a commodity board on an oscilloscope? It looks like noisy shit.) I prefer lab-grade power, better, mind you, than the average server. Your questions here indicate to me that you, and most of the others here, have no clue what a high-performance microcomputer is or what it would become, so since you don't understand it, you demonize it. This is why I ask if this was written by christian homeschoolers. That's their modus operandi on the entire internet. 69.49.217.158 (talk) 08:48, 24 October 2015 (UTC)
My server machines generally run circles around overclocked PCs of the same era, yet there is very little acknowledgment of this in the culture and marketing of PCs.
- What is your use case? What is your instruction mix? The only speed advantage servers have is in highly parallel, multi-threaded applications, and even then, I can put a server board in my chassis without issue and run 36 cores / 72 threads too, but per-core performance is going to suck on that, and that's the hottest server chip available in 2015. A K6000 GPU runs about the same price as those chips. So you are attempting to compare a, let's say, $40,000-60,000 server to a PC? Five bucks says my resume can secure higher pay than yours in the IT department at your company, too. The OC scene has changed over the past decade; it's not pimply kids using duct tape and room fans. Are you a COBOL programmer? Just asking. Similar mentality. Odds are I was hacking into X/MPs and such when you were in kindergarten. Now I have a machine that has been benched using Cray's own benchmarks to equal 84 X/MP-4 computers, and that's just on the CPU alone (AVX2-optimized Intel HPF, using the Intel equivalents of the Cray vectorization directives, and with all Cray vectorization instructions followed in the source code; apples to apples). Your server doesn't have a clock speed fast enough to match that on a per-CPU basis. Cycle for cycle, you would require 18 threads just to match the speed of my eight. 69.49.217.158 (talk) 08:48, 24 October 2015 (UTC)
It's as though overclocking is a kind of fetish which misses the forest for the trees. —Preceding unsigned comment added by 131.107.0.73 (talk) 19:09, 14 September 2009 (UTC)
- Trite 69.49.217.158 (talk) 08:48, 24 October 2015 (UTC)
- Cost, mainly; why spend an extra $100 on a faster CPU if you can simply overclock to the same level of performance? That's one reason Intel's Q6600 has been so successful: the chip is fully capable of reaching 3.0 GHz (from 2.4 GHz) without adding heat or needing a voltage increase. So why spend extra cash on a more expensive component?
- Very good point, and most of the ones that were overclocked correctly are still running in 2015. I speak of the Q6600. 69.49.217.158 (talk) 14:31, 23 October 2015 (UTC)
- As for some of your other points: PCI-X was mainly a 64-bit enhancement to PCI. It had the same overall limitations (mainly bandwidth), and AGP and later PCIe became dominant. Manufacturers cater to overclockers because they are a good portion of their user base and they can charge a price premium on parts. Server machines, typically prior to the late '90s, used significantly higher-end parts, so even an overclocked PC from that era would lose head to head. Of course, there was also a significant cost difference between the two. --Gamerk2 (talk) 13:19, 19 March 2010 (UTC)
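As a quick sanity check on the Q6600 numbers discussed above (2.4 GHz stock, 3.0 GHz overclocked), here is the headline percentage that overclock represents (my own back-of-envelope calculation, not a figure from the thread):

```python
# Back-of-envelope check of the Q6600 example: 2.4 GHz stock vs. 3.0 GHz
# overclocked. This only computes the clock-frequency increase; actual
# application performance gains depend on the workload.

def percent_increase(stock_ghz: float, oc_ghz: float) -> float:
    """Percentage clock increase from stock to overclocked frequency."""
    return (oc_ghz - stock_ghz) / stock_ghz * 100.0

print(percent_increase(2.4, 3.0))  # ~25.0, i.e. a 25% clock increase
```

A 25% clock bump at no extra purchase cost is exactly the economic argument the comment makes.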
Some Changes
I'd like to add something to the article about the entertainment side of overclocking; some people overclock knowing full well something could or will go 'wrong,' but overclock anyway. Similar to a bike enthusiast who burns out on an old tire expecting it to pop or even a guitarist who smashes an old guitar. The article already mentions one test where the clock speed was increased to 500 GHz despite this being impractical. It's this impractical side that I think should get a little attention. I wrote this:
Similar to other activities enjoyed by enthusiasts and hobbyists, there is both a practical side and an entertainment side to overclocking. While overclocking is typically done to increase performance, hardware can be overclocked to test the limits of said hardware even if the practical benefits are negligible. In some cases, components may be overclocked so that people can witness or enjoy the negative effects of overclocking, like extreme heat and malfunctions. Some speeds may be attained only through impractical cooling methods, and at great risk to other system functions. In these cases, users may be testing the limits of hardware without practical concerns of performance in mind. —Preceding unsigned comment added by Rapturerocks (talk • contribs) 19:39, 4 December 2009 (UTC)
- Christian homeschooler. Who else could spout such dishonesty? "Rapturerocks" confirms some of my theories about the tone of this article: it was written by christian homeschoolers, the most untrustworthy and dishonest people in America, at least. I refuse to hire them; I have to trust that my employees aren't going to lie. You describe here a fringe element of overclockers that probably doesn't even come to one percent. Overclocking is a scientific process, something your mommy probably didn't teach you about. 69.49.217.158 (talk) 09:03, 24 October 2015 (UTC)
Disclaimer
"This article is so full of disinformation from the "cons" side that it is best to do your research on this topic on a site other than Wikipedia. No effort has been made to remove the inaccuracy, and outright falsehoods in over a decade. All attempts to do so are reverted by people whose comments indicate a homeschooled mentality with second-hand knowledge of what they are writing, most of that being propagandistic. Any attempt to even address the wild inaccuracies, in a disclaimer, are summarily reverted as well. DO NOT CONSIDER THIS ARTICLE TO BE ACCURATE, TRUTHFUL, OR UNBIASED."
The preceding heading is being added to the top of the page. Unless this entire article is completely rewritten, the disclaimer should stay. — Preceding unsigned comment added by 69.49.217.158 (talk) 03:12, 28 October 2015 (UTC)
Request to add more advantages
Could someone add some more advantages? This article is starting to be a little one-sided! 62.252.192.7 12:29, 26 Dec 2004 (UTC)
Sysextreme link spamming
I've reverted the edit that put sysextreme.com in the major forums list.
Alexa ranks this site at about 4,500,000th most visited website, while others in this section are about 16,000 to 40,000th. With less than 1% of the visits the real major forums get, Sysextreme is not big enough to be listed here. WikianJim 16:46, 26 Apr 2005 (UTC)
- They are now using "XMS.MS" to try to promote their site.
BCLK OC'ing
Would be nice to get this as a separate (sub)section and delve more into it, including the fact that Intel allows changing the CPU BCLK without affecting other buses, which makes OC'ing safer. And how they accidentally forgot to lock out BCLK OC'ing on Alder Lake 12th-gen CPUs. Artem S. Tashkinov (talk) 11:57, 21 January 2022 (UTC)
Giant character string
XS Spam
I've reverted the edits by "FUGGER" because of blatant spam. He owns XtremeSystems (not to be confused with sysextreme.com). Putting "These and more top names can be found at XtremeSystems.org (linked below)." in the intro of the article is spam. Furthermore, he added a link to the XtremeSystems homepage as an overclocking resource, when the homepage has very few (if any) original articles about overclocking and contains only links to other sites or the XtremeSystems forums; having a link to XtremeSystems in both the resource links and the forum links is therefore redundant. Also, removing XtremeResources.org from the forum listings is debatable, as the site has multiple members in the top 10 in various versions of 3DMark and the forum provides a number of overclocking tips. I just wanted to clarify why I reverted the edits, in case people were wondering. - PS2pcGAMER
- I have been talking with "FUGGER" via talk pages and email, and we have both agreed on some ideas... Notable names in overclocking SHOULD stay at the end of the article, and FUGGER is no exception. I suggested that he give points at the end of the links and cite particular web articles instead of Google searches, to which he also agreed. Lastly, I suggested that he make an article that details overclocking breakthroughs and their record owners, and wiki-link that instead of just posting notable names at the end of the article. I haven't heard from him on the last bit, but this is all being cleared up, so no worries. This guy was kind of "spamming", but he was simply trying to put content back into the article that was originally there.
- Fair enough. I just thought his additions were not in good taste and they weren't really beneficial to the article. I will also re-add the link to XtremeResources to bring things back to how they should be. Hopefully that will be the end of this. - PS2pcGAMER
Contradictory "Disadvantages" Section
This section repeatedly makes a negative statement about overclocking, then contradicts it with a statement about how it really isn't a problem. It really ought to be changed.
Recent reversion
The editor who added the content that I recently reverted asked me to detail my reasoning for doing so. I'll start with the incorrect explanations. First, higher transistor density does little to make "electrons travel faster". Carrier mobility (the net drift velocity response per unit electric field) is largely related to temperature and band gap energy (through its relationship with carrier effective mass) and to various scattering mechanisms (lattice/phonon, ionized impurity, etc). Making devices smaller doesn't make electrons flow faster, though it does reduce the amount of charges that must be moved around to change transistor modes (which is a large reason smaller transistors are faster). To make electrons (and holes) truly flow faster, you would need to increase the electric field. However, generally VLSI houses try to practice constant electric field scaling (that is, scale voltages with the dimensions to keep electric field roughly constant) in order to avoid hitting carrier saturation velocity too soon as well as a host of breakdown mechanisms (avalanche, tunneling, etc). In reality, constant electric field scaling isn't precisely followed due to some real-life difficulties, but that's a much more advanced discussion.
Now for the ill-drawn conclusions. I would not agree with the assessment that smaller transistors automatically imply better ability to overclock. It is a factor, but it is not nearly the only one nor the simplest one. How much one can overclock a microprocessor is really defined by how much clock speed tolerance the manufacturer sets for a given batch of dies. In other words, it's usually the economics more than the technology that determines how well a chip will overclock. Granted, some processes will facilitate higher clock speeds better than others, often as a virtue of thermal improvements (which is what I would guess is playing a large role in the Pentium D example case), but to take this and conclude that smaller transistors will make a microprocessor more overclockable is an oversimplification. -- mattb @ 2007-02-14T15:11Z
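The carrier-transport argument above can be summarized in a few standard device-physics relations (textbook material added for reference, not part of the original comment):

```latex
% Drift velocity of carriers under an electric field E, where the
% mobility \mu is set by temperature, effective mass, and scattering
% mechanisms -- not by transistor size:
v_d = \mu E
% Ideal constant-field (Dennard) scaling with factor \kappa > 1:
L \to \frac{L}{\kappa}, \qquad
V \to \frac{V}{\kappa}, \qquad
E = \frac{V}{L} \to E \;\; \text{(unchanged)}, \qquad
\tau \to \frac{\tau}{\kappa}
```

The key point matches the comment: shrinking transistors reduces the charge that must be moved per switching event (hence the delay $\tau$ falls), but it does not raise $v_d$ unless the field $E$ is raised, which constant-field scaling deliberately avoids.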
reference 2 invalid?
"Often, an overclocked system which passes stress tests experiences instabilities in other programs.[2]"
In the article cited there is nothing about those computers passing a stress test. In fact, it says the computer was overclocked without the users' knowledge. reference here