We’re another week into September, and fall is almost here. With another week comes more news on the supercomputing horizon.
Tabor Communications names new Datanami editor
Datanami, a big data blog run by Tabor Communications, recently tapped Alex Woodie as the new managing editor for the publication. Nicole Hemsoth, senior director of editorial for Tabor Communications, pointed to the close ties between big data and high-performance computing, noting that Woodie’s experience at HPCwire should serve him well at Datanami.
“Alex is a talented journalist with a deep understanding of technical computing, data-intensive technologies, and their profound impact upon the IT industry,” said Hemsoth. “He’s done an outstanding job as the associate editor for HPCwire, and I’m confident as managing editor of Datanami, Alex will continue to build upon its reputation as the market leader in world-class and cutting-edge data journalism.”
As a blog, Datanami covers not only big data trends directly, but also issues pertaining to the technologies that support big data analytics.
Advanced benchmarking explores CPU and GPU capabilities in HPC systems
Evaluating performance in the supercomputing sector can be a daunting process because teraflop and petaflop measurements do not always reveal how a system will actually perform when running different applications. Testing GPUs, coprocessors and commodity multi-core CPUs in real use settings, with actual algorithms applied, can play a vital role in assessing and maximizing system performance. In a recent interview with HPCwire, Jorg Lotze, CTO and co-founder of Xcelerit, discussed the results of running Kepler GPUs and Xeon Phi coprocessors side by side on variations of a Monte Carlo algorithm.
Lotze told the news source that the effort revealed a great deal about processing techniques in supercomputing. Among the key findings: multi-core CPUs can play an important role in supercomputers, and the ideal processor architecture varies substantially with the specific use case. Lotze also explained that the results obtained running Monte Carlo algorithms, though aimed at financial services firms, could apply to many sectors.
“The other thing is it’s hard to tell in advance which [processor type] is going to be the best before you actually do testing because all these theoretical teraflops and memory bandwidth, doesn’t mean much for real applications,” Lotze told HPCwire. “In general, I can see this working for oil and gas, biochemistry, and all these fields where high-performance computing is in use.”
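For readers unfamiliar with this kind of application-level benchmarking, the sketch below illustrates the idea: a small OpenMP Monte Carlo option pricer timed by wall-clock throughput rather than theoretical peak. This is not Xcelerit’s code, and every parameter (spot, strike, path count, RNG) is purely illustrative.

```c
/* Minimal Monte Carlo benchmark sketch (not Xcelerit's benchmark): prices a
 * European call option and reports paths processed per second, the kind of
 * application-level measurement Lotze describes.
 * Compile with: gcc -O2 -fopenmp mc_bench.c -lm */
#include <math.h>
#include <omp.h>
#include <stdio.h>

/* Simple 64-bit LCG returning a uniform double in (0,1); adequate for a
 * sketch, not a production-quality RNG. */
static double uniform(unsigned long long *state)
{
    *state = *state * 6364136223846793005ULL + 1442695040888963407ULL;
    return ((double)(*state >> 11) + 0.5) / 9007199254740992.0;
}

int main(void)
{
    const long   n_paths = 10000000;       /* simulated price paths (illustrative) */
    const double S0 = 100.0, K = 100.0;    /* spot and strike (illustrative)       */
    const double r = 0.05, sigma = 0.2, T = 1.0;

    double sum = 0.0;
    double t0  = omp_get_wtime();

    #pragma omp parallel reduction(+:sum)
    {
        /* Each thread gets its own RNG state. */
        unsigned long long state = 0x9E3779B97F4A7C15ULL
                                   + (unsigned long long)omp_get_thread_num();
        #pragma omp for
        for (long i = 0; i < n_paths; i++) {
            /* Box-Muller: two uniforms -> one standard normal draw. */
            double u1 = uniform(&state), u2 = uniform(&state);
            double z  = sqrt(-2.0 * log(u1))
                        * cos(2.0 * 3.14159265358979323846 * u2);

            /* Terminal asset price under geometric Brownian motion. */
            double ST = S0 * exp((r - 0.5 * sigma * sigma) * T
                                 + sigma * sqrt(T) * z);
            sum += (ST > K) ? (ST - K) : 0.0;   /* call option payoff */
        }
    }

    double elapsed = omp_get_wtime() - t0;
    printf("Estimated option price: %.4f\n", exp(-r * T) * sum / (double)n_paths);
    printf("Paths per second:       %.3e\n", (double)n_paths / elapsed);
    return 0;
}
```

Timing the same kernel on a multi-core CPU, a Kepler GPU and a Xeon Phi, and comparing paths per second, is what separates real application performance from headline teraflop figures.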
Cray announces software-based solution for shared memory functions
Cray, working with its partner ScaleMP, has developed a software-based method of establishing shared-memory systems that enables organizations to support memory-intensive HPC operations, HPCwire reported.
According to the news source, shared-memory systems have largely filled a niche role in the supercomputing sector, with companies that depend on the architecture relying on extremely specialized hardware configurations. Cray’s effort to deliver shared-memory functionality in software could play a vital role in helping the sector move forward, because it gives organizations access to much more adaptable computing capabilities.
The software-driven shared-memory capabilities are designed to work within Cray’s CS300 line of systems, the report explained.
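To see why this matters, the sketch below shows the kind of shared-memory code such systems target (it is not Cray or ScaleMP sample code, and the array size is an assumption for illustration): one large in-memory array processed by many OpenMP threads. On a software-aggregated shared-memory machine, the allocation can exceed the RAM of any single node and the code still runs unmodified.

```c
/* Shared-memory workload sketch: all threads see one flat address space, so
 * no explicit message passing or data partitioning is needed.
 * Compile with: gcc -O2 -fopenmp shared_mem.c */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Illustrative size only (~2 GB); a real memory-intensive job might
     * allocate terabytes across the aggregated address space. */
    const size_t n = 1UL << 28;               /* 256M doubles */
    double *data = malloc(n * sizeof *data);
    if (!data) { perror("malloc"); return 1; }

    double sum = 0.0;

    /* Every thread reads and writes the same array directly. */
    #pragma omp parallel for reduction(+:sum)
    for (size_t i = 0; i < n; i++) {
        data[i] = (double)i * 0.5;
        sum += data[i];
    }

    printf("threads: %d, sum: %.3e\n", omp_get_max_threads(), sum);
    free(data);
    return 0;
}
```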
DOE establishes cost benchmark for exascale computing
Many of the discussions surrounding exascale computing have centered on the fact that while it may be possible to reach the performance benchmark quickly, doing so could end up costing too much. HPCwire reported that the U.S. Department of Energy has put a price tag on a possible exascale computing effort. The hypothetical project would aim to create an exascale system by 2022, but the research and development costs for the effort could reach between $1 billion and $1.4 billion.