We’ve reached a significant point in time where interest in Artificial Intelligence (AI), machine learning and deep learning has gained huge amounts of traction – why? We are moving into an era where science fiction is becoming reality.
AI and machine learning are not new concepts; Greek mythology is littered with references to giant automata such as Talos of Crete, the bronze robot of Hephaestus. However, the ‘modern AI’ idea of thinking machines that we have all come to understand was founded in 1956 at Dartmouth College. Since the 1950s, numerous studies, programmes and projects into AI have been launched and funded to the tune of billions, and the field has witnessed numerous hype cycles. But it’s only in the past 5–10 years that the prospect of AI becoming a reality has really taken hold.
The rise of research computing
Research computing has been synonymous with High Performance Computing (HPC) for more than twenty years – the tool of choice for fields such as astrophysics. But over the last two decades, many other areas of scientific research began to need computational power, with requirements that fell outside traditional HPC systems. Bioinformatics, for example – a field that aims to develop methods and software tools for understanding biological data, such as human genomes – needed greater computational horsepower, but had very different requirements from many existing HPC systems. Even so, the fastest way to a result was to cram workloads onto those systems – the existing HPC simply wasn’t fit for purpose.
That is where research computing was born. You couldn’t just have one system for all research types; you needed to diversify and provide a service or platform. From there, HPC systems began to be built to meet varied workload demands – such as the high-memory nodes needed to handle and analyse large, complex biological data.
Even so, scientific researchers are very good at exhausting the available resources of a supercomputer – it’s rare to find an HPC system that ever sits idle or has spare capacity for more research projects. With the want, and the need, for ever larger systems, universities started to look towards cloud platforms to help with scientific research. That’s one of the reasons why cloud technologies such as OpenStack have started to gain a foothold within higher education. […]
read more – copyright by www.technologynetworks.com