Cyber threats are a serious and growing security risk around the world, and recognizing them is like finding a needle in a haystack. Cyber threat analysts usually sample only a small fraction of the data available to them due to processing constraints. However, this is expected to change soon, as supercomputers have joined the fight against cyber threats.
Researchers at the Lincoln Laboratory Supercomputing Center (LLSC) are making this possible. In a recent study, the team processed 96 hours of raw internet traffic from a 1-gigabit network link, condensing the data into a query-ready format for further investigation. The LLSC's computing power allowed the scientists to obtain results quickly.
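The study's actual pipeline is not described here, but the general idea of condensing raw packet data into a query-ready summary can be sketched on toy data: individual packet records are rolled up into per-flow totals that can be looked up directly, without rescanning the raw capture. The record fields and addresses below are invented for illustration.

```python
# Hypothetical packet records; a real capture would hold billions of these.
packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.9", "bytes": 1500},
    {"src": "10.0.0.1", "dst": "10.0.0.9", "bytes": 40},
    {"src": "10.0.0.2", "dst": "10.0.0.9", "bytes": 60},
]

# Condense: one entry per (src, dst) flow with packet and byte totals.
flows = {}
for p in packets:
    key = (p["src"], p["dst"])
    entry = flows.setdefault(key, {"packets": 0, "bytes": 0})
    entry["packets"] += 1
    entry["bytes"] += p["bytes"]

# Query-ready: a flow's summary is now a single dictionary lookup.
print(flows[("10.0.0.1", "10.0.0.9")])  # → {'packets': 2, 'bytes': 1540}
```

The condensed form is much smaller than the raw traffic, which is what makes follow-up investigation practical at scale.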
The researchers stored their data on the MIT SuperCloud. The LLSC research facility, located in Holyoke, Massachusetts, allows anyone with an account to study these findings.
Cyber threat researchers usually deal with massive amounts of data. The internet traffic generated over just a couple of days, for example, is too much for human analysts to process promptly; even a hundred laptops deployed at once would not deliver timely results. Hence, cyber threat experts often fall back on data sampling methods.
Deployment of Supercomputers Becoming Essential
According to Vijay Gadepally, a senior researcher at the LLSC, current sampling methods may not be adequate for detecting cyber threats. Gadepally notes that cyber threats show up as anomalous behavior and are often unlikely, rare events. Sampling the data therefore makes it almost impossible to detect them.
Supercomputers level the playing field by offering a better method than sampling: they give cyber experts the tools to analyze complete data sets at once and detect subtle trends.
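The gap between sampling and a full scan can be illustrated with a small simulation (a toy sketch, not the LLSC's method): in a stream of one million events containing only 20 anomalies, inspecting a random 1% sample will usually miss every anomaly, while scanning the full stream always finds them all. The stream size, anomaly count, and sampling rate are assumptions chosen for illustration.

```python
import random

random.seed(42)

STREAM_SIZE = 1_000_000  # total events in the simulated traffic stream
ANOMALIES = 20           # rare malicious events hidden in the stream

# Place the anomalies at random positions in the stream.
anomaly_positions = set(random.sample(range(STREAM_SIZE), ANOMALIES))

def sampled_scan(rate: float) -> int:
    """Count anomalies found when inspecting only a random fraction of events."""
    inspected = random.sample(range(STREAM_SIZE), int(STREAM_SIZE * rate))
    return sum(1 for i in inspected if i in anomaly_positions)

def full_scan() -> int:
    """Count anomalies found when inspecting every event."""
    return sum(1 for i in range(STREAM_SIZE) if i in anomaly_positions)

full_hits = full_scan()
sampled_hits = sampled_scan(0.01)

print(f"full scan found {full_hits} of {ANOMALIES} anomalies")
print(f"1% sample found {sampled_hits} of {ANOMALIES} anomalies")
```

With a 1% sample, the expected number of anomalies caught is only 0.2 of 20; only the exhaustive scan, which is where large-scale computing comes in, is guaranteed to see them all.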
As businesses, governments, and individuals grapple with the growing menace of cyber threats, supercomputers can represent a ray of hope for cyber experts.