Our original article on trends in peak-output efficiency of computing over time covered the 1946 to 2009 period, and it showed a clear trend, with peak-output efficiency doubling every 1.57 years.
Koomey, Jonathan G., Stephen Berard, Marla Sanchez, and Henry Wong. 2011. "Implications of Historical Trends in The Electrical Efficiency of Computing." IEEE Annals of the History of Computing. vol. 33, no. 3. July-September. pp. 46-54. [http://doi.ieeecomputersociety.org/10.1109/MAHC.2010.28]
At the time, I didn't think to analyze the data post-2000 to see the effects of the end of Dennard scaling in the early 2000s, but we went back and did that for our 2016 article for Electronic Design.
Koomey, Jonathan, and Samuel Naffziger. 2016. "Energy efficiency of computing: What's next?" In Electronic Design. November 28. [http://electronicdesign.com/microprocessors/energy-efficiency-computing-what-s-next]
The 2016 analysis showed that post-2000, peak-output efficiency had slowed to doubling every 2.6 years. The slowdown was already visible in the data from the 2011 analysis, and it was confirmed by later data from @AMD documented in that article, which also showed a doubling every 2.6 years.
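The difference between those two doubling times compounds quickly. A minimal sketch of the arithmetic (the ten-year horizon is just an illustrative choice, not a figure from the articles):

```python
def improvement_factor(years: float, doubling_time: float) -> float:
    """Total efficiency gain over `years`, given a fixed doubling time in years."""
    return 2 ** (years / doubling_time)

# Pre-2000 trend: peak-output efficiency doubling every 1.57 years
pre_2000 = improvement_factor(10, 1.57)   # roughly 80x over a decade

# Post-Dennard trend: doubling every 2.6 years
post_2000 = improvement_factor(10, 2.6)   # roughly 14x over a decade
```

So a slowdown from a 1.57-year to a 2.6-year doubling time cuts the gain over a decade from roughly 80-fold to roughly 14-fold.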
The 2016 analysis also showed that other metrics of efficiency can (at least for a time) improve more quickly than the post-Dennard-scaling rates of change for peak-output efficiency.
We defined a metric we called "typical-use efficiency" that more accurately characterizes efficiency improvements for personal computers.
This efficiency metric is dominated by standby and sleep modes, and it doubled every 1.5 years at the same time as peak-output efficiency was doubling every 2.6 years.
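The key point is that a typical PC spends most of its hours in low-power modes, so typical-use energy depends heavily on standby and sleep power, not just peak power. A rough sketch of the energy side of the calculation — all duty-cycle shares and power draws below are made-up illustrative assumptions, not measurements from the cited articles:

```python
# Illustrative annual-energy estimate for a personal computer.
# Duty cycles and power draws are hypothetical assumptions for
# demonstration only.
HOURS_PER_YEAR = 8760

duty_cycle = {"active": 0.10, "idle": 0.15, "sleep": 0.35, "standby": 0.40}
power_watts = {"active": 60.0, "idle": 30.0, "sleep": 2.0, "standby": 0.5}

annual_kwh = sum(
    duty_cycle[mode] * power_watts[mode] * HOURS_PER_YEAR / 1000
    for mode in duty_cycle
)
# Typical-use efficiency = useful computing delivered per unit of this
# typical-use energy. Because low-power modes occupy most of the hours,
# cutting sleep and standby power raises typical-use efficiency even
# when peak-output efficiency is improving slowly.
```

With these assumed numbers the machine draws about 100 kWh/year, and the time budget — three quarters of the hours in sleep or standby — is why improvements to those modes can drive the metric faster than peak-output trends alone would allow.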
These results point to other ways to improve efficiency beyond the methods that until recently have dominated hardware efficiency improvements. I wrote more about how to estimate typical-use efficiency here:
Koomey, J. 2015. "A primer on the energy efficiency of computing." In Physics of Sustainable Energy III: Using Energy Efficiently and Producing it Renewably (Conference Held March 8-9, 2014 in Berkeley, CA). American Institute of Physics (AIP Proceedings). pp. 82-89.
If you want to dive in deeper, this recent article in Science talks about the potential for better software design to continue performance and efficiency improvements in the face of recent physical constraints.
Our 2013 "Smart Everything" article gives relevant background to this space with lots of examples and explanation.
There's also a large literature on what's called "codesign" of computer systems, where hardware and software are optimized together to increase performance, often customizing the technology for specific workloads.
Most codesign work is for high-performance computing, but that will likely change in the years ahead. Shalf, J., D. Quinlan, and C. Janssen. 2011. "Rethinking Hardware-Software Codesign for Exascale Systems." Computer. vol. 44, no. 11. pp. 22-30. [http://doi.ieeecomputersociety.org/10.1109/MC.2011.300]
Finally, for context (because everyone wonders), all data centers in the world use less than 1% of the world's electricity, and that total didn't grow much from 2010 to 2018, even as computing output increased 5.5-fold. It's also a very high-value use of that electricity.
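A back-of-envelope check on what those numbers imply: if output grew 5.5-fold over the eight years from 2010 to 2018 while electricity use stayed roughly flat (my simplifying reading of "didn't grow much"), then data-center efficiency per unit of computing must have grown 5.5-fold too, and the implied doubling time follows directly:

```python
import math

# Hypothetical back-of-envelope: treat electricity use as exactly flat,
# so efficiency grows by the same 5.5x factor as computing output.
growth_factor = 5.5
years = 2018 - 2010

# Solve 2**(years / T) = growth_factor for the doubling time T
implied_doubling_time = years * math.log(2) / math.log(growth_factor)
# roughly 3.3 years
```

That implied doubling time of roughly 3.3 years is in the same ballpark as the post-2000 peak-output trend, which is consistent with flat total energy despite rapidly growing output.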
If you want PDF copies of any of these articles or have any questions, please email me at [email protected]. You can also keep up with my work at http://www.koomey.com
You can follow @jgkoomey.