Anna Systems News
▫️A platform for High Performance Computing and Hyperscale engineering projects
▫️Revolutionizing the HPC and Blockchain markets
📲Check out all the latest news & upgrades of the Anna Systems project
🔬EXPLORING THE FRONTIERS OF CHEMISTRY WITH #HPC

"One of the techniques common in chemical research is the principle of serendipity. Essentially you try different combinations of approaches and materials until you find one that optimizes the reaction. You don’t have to understand how these components work – once you have the desired result, you’re done.

🔹The problem is the difficulty of building on that research to make a better product or to gain a greater fundamental understanding of the kinds of reactions and structures involved. For example, photosynthesis is not well understood, making it hard to replicate in the lab systems that work so efficiently in nature. Researchers can either fall back on serendipity or – with increasing frequency – enlist the help of supercomputers to run simulations of the components and see how they react.

Even with a supercomputer, the computations can be so numerous and complex that it is impossible to investigate them all. This, however, does not deter #TACC (Texas Advanced Computing Center), which continues to add new capabilities to its HPC systems. The upgrades include software as well as hardware".

📲More: https://www.nextplatform.com/2018/12/10/exploring-the-frontiers-of-chemistry-with-hpc/
⚙️A-Platform is our solution for the rapid, high-quality growth of mass access to High Performance Computing services

🔹We make HPC of any kind affordable and accessible to any Internet user
🔹We replace uniform, repetitive hashing with meaningful computations carried out on demand, extending today’s Proof of Work to a broader Proof of Computation (a toy sketch follows below)

A-Platform consists of 4 levels (see the figure below). In the long term, the network will unite industry, science, business, engineering, and design teams to solve problems of different complexity levels in various fields of application on a global scale through the information technology infrastructure.
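💻A toy illustration of the Proof of Work → Proof of Computation shift described above. This is only a sketch under our own assumptions – the actual A-Platform verification protocol is not public – with random spot-checking standing in for whatever verification scheme the platform really uses:

```python
# Toy contrast: hash-puzzle Proof of Work vs. "Proof of Computation",
# where the work itself is a meaningful, on-demand task and the
# verifier re-checks a random sample of results. Illustrative only.
import hashlib
import random

def proof_of_work(data: bytes, difficulty: int = 3) -> int:
    """Classic PoW: burn cycles until the hash has `difficulty`
    leading zeros. The work has no value outside the puzzle."""
    nonce, target = 0, "0" * difficulty
    while not hashlib.sha256(data + str(nonce).encode()).hexdigest().startswith(target):
        nonce += 1
    return nonce

def useful_task(x: float) -> float:
    """Stand-in for one unit of a meaningful computation
    (e.g. one cell of a CFD or rendering job)."""
    return x * x + 1.0

def proof_of_computation(inputs, claimed, samples: int = 8) -> bool:
    """Verifier recomputes a random subset of the worker's results.
    Cheating on a fraction f of cells survives with probability
    roughly (1 - f)**samples, so a few checks give high assurance."""
    for i in random.sample(range(len(inputs)), k=min(samples, len(inputs))):
        if claimed[i] != useful_task(inputs[i]):
            return False
    return True

inputs = [float(i) for i in range(1000)]
outputs = [useful_task(x) for x in inputs]      # honest worker
print(proof_of_work(b"block"))                  # wasted cycles
print(proof_of_computation(inputs, outputs))    # useful cycles, verified
```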
🌧Injecting Deep Learning into Climate Models

"The work, which is being performed by researchers at the University of California, Irvine, the Ludwig Maximilian University of Munich and Columbia University, is focused on training neural networks to more accurately predict the how clouds are driving the Earth’s weather patterns, and by extrapolation, the longer-term climate effects. The resulting model, known as the “Cloud Brain,” was integrated into a traditional climate simulation with the hopes of improving its fidelity and performance."

🔎Read more & Source: https://www.top500.org/news/injecting-deep-learning-into-climate-models/

#deep_learning #climate_simulation #computer_simulation #climatechange #cloudbrain #cloud_brain
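💻The coupling pattern behind this is worth a sketch: an expensive subgrid scheme inside the model timestep is replaced by a cheap trained emulator. The toy below is our own minimal illustration (a tiny NumPy MLP fitted to a made-up parameterization), not the actual Cloud Brain architecture or variables:

```python
import numpy as np

rng = np.random.default_rng(0)

def cloud_scheme(x):
    """Stand-in for the costly subgrid cloud parameterization:
    maps a column's state to a heating tendency."""
    return np.sin(3.0 * x)

# Offline: fit a tiny one-hidden-layer MLP on (state, tendency) samples.
X = rng.uniform(-1.0, 1.0, (2000, 1))
Y = cloud_scheme(X)
W1 = rng.normal(0.0, 1.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)          # hidden layer
    P = H @ W2 + b2                   # predicted tendency
    G = 2.0 * (P - Y) / len(X)        # dMSE/dP
    gW2, gb2 = H.T @ G, G.sum(0)
    GH = (G @ W2.T) * (1.0 - H**2)    # backprop through tanh
    gW1, gb1 = X.T @ GH, GH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

def emulator(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Online: the climate timestep calls the cheap emulator where the
# full scheme would be too slow.
state = rng.uniform(-1.0, 1.0, (8, 1))
for _ in range(10):                   # toy timestep loop
    state = state + 0.01 * emulator(state)
print("emulation MSE:", float(np.mean((emulator(X) - Y) ** 2)))
```

The point of the pattern: train once offline, then call the emulator millions of times inside the timestep loop, trading a little accuracy for a large speedup.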
​​🔹NEW CHALLENGES REQUIRE NEW APPROACHES

We are moving away from the traditional HPC paradigm, where supercomputers or high-performance clusters were isolated from the main Internet ecosystem.

Building on Tim O’Reilly’s Web 3.0 concept, we offer access to High Performance Computing through the paradigms of both today’s Internet (Web interface, network API) and tomorrow’s (blockchain integration, the automatic orchestration of complex applications, economics incorporated into computing).
📌A 2019 Forecast for Data-Driven Business

1️⃣ #AI/ #MACHINE_LEARNING — AI continued to grow in popularity over the past year, becoming well institutionalized within many large enterprises. As with analytics, the use of AI is increasingly being democratized through automated machine learning (AutoML). Several contributors to KDnuggets’ review of AI and ML trends for 2019 suggested that AutoML would become more popular over the next year.

2️⃣#AUTOMATION — Robotic process automation, workflow, business rules, process mining, and some forms of AI all have the goal of automating human labor, or at least freeing up humans to do higher-level work.

3️⃣#BLOCKCHAIN — 2018 represented a year of major advancement for blockchain solutions as firms sought to ensure that data can be trusted, particularly when managing data in a distributed fashion. The need to ensure data trust received heightened attention in 2018 due to the adoption of the European General Data Protection Regulation (GDPR), resulting in greater focus on developing trusted frameworks for data sharing.

4️⃣#CLOUD_COMPUTING — The cloud continued its march toward domination in 2018. Two Deloitte surveys, for example, indicated that 90% or more of global executives are adopting, considering, or already using the cloud. Amazon Web Services, Microsoft Azure, and Google Cloud are all growing rapidly.

5️⃣#CYBERSECURITY — The serious cybersecurity events of 2017—WannaCry and NotPetya—led to many attempts to emulate them in 2018. As data-related activity by the good guys grows, data breaches, hacks, and ransomware from the bad guys seem to grow even faster. The latest McAfee Labs Threats Report suggests that malware exploiting software vulnerabilities grew by 151% in the second quarter of 2018.

6️⃣#DATAOPS – Data Operations (DataOps) is rapidly emerging as a discipline for organizations that continue to struggle with the management of data as a shared business asset. DataOps brings a set of data engineering principles which borrow from the DevOps software development movement. The intent is to deliver “rapid, comprehensive, and curated data” to business analysts and decision-makers.

7️⃣#ETHICS – Lastly, but by no means to be forgotten, data ethics emerged in 2018 as one of the most important priorities for leading businesses, stung by security breaches and highly publicized misuses of customer information that represented breaches of public trust. 2018 was in some ways the year that data received a black eye. Now organizations must rebuild that trust. 2019 can be expected to be a year in which corporations step up efforts to ensure ethical data use and ethical data practices.

📲Source: https://www.forbes.com/sites/tomdavenport/2018/12/17/a-2019-forecast-for-data-driven-business-from-ai-to-ethics/?ss=enterpriseandcloud#253e3cbc2716
📢HPCWire: #HPC Reflections and (Mostly Hopeful) Predictions

1. Congratulations IBM and Brace for the March of New BIG Machines
2. Handicapping the Arm(s) Race as Processor Wars Flare Up
3. After the CERN AI ‘Breakthrough’, Scientific Computing Won’t be the Same
4. Quantum’s Haze…Are We There Yet? No!
5. Too Little Space for So Many Worthwhile Items

"The rise of #AI in earnest is the key feature of 2018. Let’s not quibble about exactly what constitutes AI – broadly, it encompasses #deep_learning (neural networks), #machin_learning, and a variety of data analytics. Whatever it is, it’s on the verge of transforming not only HPC but all of #computing."


📲Read full article: https://www.hpcwire.com/2018/12/19/hpc-reflections-and-mostly-hopeful-predictions/
​​🎄🎄🎄ANNA Systems' Year Results:

1️⃣We successfully tested our A-Platform, which supports the provision of high-performance cloud computing services over a distributed network;

2️⃣Open-source A-Platform-based application software packages were tested, including some of the world's best suites, such as #OpenFOAM in #CFD and #Blender, #LuxRender, #POV-Ray and #YafaRay in rendering;

3️⃣The MVP has been prepared for launch, providing CFD, rendering, and engineering calculation services;

4️⃣The ANNA Systems, LLC team took part in the largest annual National Supercomputer Forum (#NSCF2018);

5️⃣ANNA Systems, LLC entered into cooperation agreements with major equipment manufacturers such as IBM (USA), Mellanox Technologies (Israel), and Lenovo (China). The agreements cover joint engineering and technical research on building high-performance clusters and on their configuration and optimization.

🔗About plans for 2019:

We expect to officially announce the launch of our public MVP at the beginning of 2019: a web platform for rendering services and for processing static and dynamic 3D images.

In 2019 we will begin working actively on introducing blockchain technology into our platform's infrastructure. We will implement convenient cryptocurrency tools for mutual settlements among A-Platform participants and partners.

👏🏻👏🏻👏🏻Dear friends, may the coming year bring you happiness and new beginnings!
📢Starting 2019 with News in HPC Research:

▪️Deploying virtualized particle physics environments in an HPC cluster

"Particle physics experiments involving the Large Hadron Collider require a massive amount of computing power – a demand that is projected to increase with coming upgrades. In this paper, written by a team from the University of Freiburg, the authors discuss how the university has linked its NEMO HPC Cluster to the Worldwide LHC Computing Grid to augment the HPC resources available to the LHC".

🔗Read more: https://arxiv.org/pdf/1812.11044.pdf

▪️Managing rich metadata in HPC systems using a graph model

"While operating, HPC systems generate large amounts of metadata. Existing systems do a decent job of managing some of the metadata, but the data known as “rich” metadata – which record running processes and jobs and the relationships between them – are mostly left unattended. In this study, written by a team from UNC Charlotte, Texas Tech University and Argonne National Laboratory, the authors propose a graph model for managing rich metadata. They evaluate the graph model on synthetic and real HPC workloads to demonstrate its advantages and scalability."

🔗Read more: https://www.researchgate.net/publication/329768489_Managing_Rich_Metadata_in_High-Performance_Computing_Systems_Using_a_Graph_Mode
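💻As a rough illustration of the general idea (our own toy, not the schema from the paper): jobs, users, and files become attributed nodes, and relationships such as "submitted", "read", and "wrote" become typed edges that provenance queries can walk:

```python
# Rich HPC metadata as a property graph, sketched with networkx.
# Node/edge names here are our invention, purely illustrative.
import networkx as nx

g = nx.MultiDiGraph()
# Entities become nodes with attributes...
g.add_node("user:alice", kind="user")
g.add_node("job:1042", kind="job", nodes=64, walltime="02:00")
g.add_node("file:input.nc", kind="file", size_gb=120)
g.add_node("file:output.h5", kind="file", size_gb=340)
# ...and relationships become typed edges.
g.add_edge("user:alice", "job:1042", rel="submitted")
g.add_edge("job:1042", "file:input.nc", rel="read")
g.add_edge("job:1042", "file:output.h5", rel="wrote")

# Provenance query: which job produced output.h5, and from what inputs?
producers = [u for u, v, d in g.in_edges("file:output.h5", data=True)
             if d["rel"] == "wrote"]
for job in producers:
    inputs = [v for _, v, d in g.out_edges(job, data=True)
              if d["rel"] == "read"]
    print(job, "read", inputs)
```

A graph store makes questions like this a single traversal instead of a pile of joins over separate logs.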

▪️Utilizing HPC for advanced power system studies

"As modern technologies like the smart grid and renewable energy become more widely available, interconnected power systems are more complex and uncertain than ever. This paper, written by a team from China, presents a probabilistic study for such systems based on an HPC method supported by the Computational Shared Facility at the University of Manchester. Using this method, the researchers gained new insights into probabilistic studies on power systems and into running Monte Carlo simulations on HPC systems".

🔗Read more: https://ieeexplore.ieee.org/abstract/document/8582482/authors#authors

▪️Using scalable deep text comprehension on HPC for cancer surveillance

"Deep learning has dramatically advanced the capabilities of bioinformatics applications. In this study, conducted by a team from Oak Ridge National Laboratory, University of Memphis, the Louisiana State University Health Sciences Center and the National Cancer Institute, researchers trained a neural network to extract information from a massive dataset of cancer pathology reports. The researchers then evaluated its scalability and accuracy."

🔗Read more: https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-018-2511-9

▪️Using HPC for aircraft assembly optimization

"Incredibly intricate manufacturing and airframe assembly processes have the ability to generate enormous datasets as they go through various tests and simulations with many important parameters. In this paper, written by a team from Peter the Great St. Petersburg Polytechnic University discusses the use of HPC to accelerate this task, describing a specialized approach that combines variation simulation and HPC to optimize the process".

🔗Read more: https://ieeexplore.ieee.org/abstract/document/8570136

▪️Characterizing CPU overheating behavior in HPC systems

"As supercomputers increase in size, abnormal events are more likely to occur. In this paper, a team of French researchers examined the problem of CPUs overheating in HPC systems, which can have a major impact on system efficiency. They analyze data collecting over a year on a Top500-ranked supercomputer demonstrating that overheating events are frequently associated with specific applications. They conclude by assessing the effect on system performance".

🔗Read more: https://ieeexplore.ieee.org/abstract/document/8564488

Source: https://www.hpcwire.com/2019/01/08/whats-new-in-hpc-research-particle-physics-power-systems-aircraft-assembly-more/
#HPC - the key to the Universe🚀

"The DiRAC@Durham piece of this distributed supercomputing facility is a memory-intensive system. The latest iteration of this system, called COSMA7, went into service in May 2018. It was delivered by Dell EMC and installed by Alces Flight, a U.K. company specializing in HPC software for scientists, engineers and researchers.

COSMA7 has all the right stuff for unlocking the secrets of the universe. It’s based on 4,116 Intel® Xeon® Skylake 5120 cores and Mellanox EDR networking. It incorporates 110 TB of RAM, 1.8 PB of data storage, and a fast check-pointing I/O system with peak performance of 185 GB/sec write and read.

This kind of HPC horsepower helps scientists solve really big problems. It is enabling researchers to run simulations that produce hundreds of terabytes of data and to do calculations that are 10 times the size of those they have been able to do in the past, according to Dr. Lydia Heck, DiRAC Technical Manager at the Institute for Computational Cosmology at Durham University".

“With big supercomputers today, we are able to simulate the evolution of the universe, the formation of galaxies,” says Professor Carlos Frenk, director of the Institute for Computational Cosmology at Durham University. “We will soon be able to simulate the formation of planets, and perhaps one day the evolution of life.”

🔗Source: https://www.cio.com/article/3334878/analytics/unlocking-the-secrets-of-the-universe-with-hpc.html
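💻A back-of-the-envelope check (our arithmetic, not from the article) of why COSMA7 counts as memory-intensive, using the quoted specs:

```python
# Quoted COSMA7 specs
cores = 4116          # Intel Xeon Skylake cores
ram_gb = 110 * 1024   # 110 TB of RAM
io_gb_s = 185         # peak checkpoint I/O, GB/sec

# ~27 GB of RAM per core -- far above the few GB/core of a typical HPC node
print(f"RAM per core: {ram_gb / cores:.1f} GB")
# Dumping all of RAM to the checkpoint system takes about ten minutes
print(f"Full-RAM checkpoint: {ram_gb / io_gb_s / 60:.1f} min")
```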
🔬News&Updates in #HPC Research:

▪️Expanding an HPC cluster to support the demands of digital pathology

"A single clinical-grade image from a digital pathology scanner can range in size from hundreds of megabytes to five gigabytes. In this research, a team from Temple University set out to design a low-cost computing facility that could support the development of a repository for one million of these images. The researchers discuss the HPC cluster they expanded upon to accomplish this goal and evaluate its results".

📲Source: https://ieeexplore.ieee.org/abstract/document/8615614

▪️Evaluating trends in training and education provided by the SciNet HPC Consortium

"The SciNet HPC Consortium – a Canadian academic HPC center – has provided training and education in HPC and scientific computing over the last decade. In this paper, written by a team from the University of Toronto, the researchers evaluate how those efforts have changed over the last six years, evolving from basic, isolated training events into a broad range of workshops and courses that build to certificates. The researchers discuss overall trends and implications".

📲Source: https://arxiv.org/pdf/1901.05520.pdf

▪️Applying big data and HPC in drug discovery

"The SciNet HPC Consortium – a Canadian academic HPC center – has provided training and education in HPC and scientific computing over the last decade. In this paper, written by a team from the University of Toronto, the researchers evaluate how those efforts have changed over the last six years, evolving from basic, isolated training events into a broad range of workshops and courses that build to certificates. The researchers discuss overall trends and implications".

📲Source: https://www.hpcwire.com/2019/01/23/whats-new-in-hpc-research-power-pathology-drug-discovery-more/

▪️Using HPC for energy system optimization models

"Energy system optimization models help policymakers and planners understand how changes in the electric grid will affect the overall system. In this paper, written by a team from Ireland, Italy and Switzerland, the authors examine how these resource-intensive models – which can often take days to process a single inquiry – could be adapted to minimize solution time in an HPC environment. The authors discuss benefits and tradeoffs and outline a path forward".

📲Source: https://www.sciencedirect.com/science/article/pii/S0301421518308607

▪️Applying an HPC framework for dynamic power grid security assessments

"Another team of researchers also explored the use of HPC for electric grids. The team – a group from Pacific Northwest National Laboratory – examined dynamic security assessments, which help evaluate whether power grids can weather disturbances. Renewable energy and smart grid technologies have increased the uncertainty (and as a result, the computational burden) in these assessments. The authors discuss the advantages of applying a framework that links data from HPC to statistical analysis and visualization".

📲Source: https://ieeexplore.ieee.org/abstract/document/8598684

Have a nice reading!📑
​​👏🏻Great news!

We are very pleased to announce our partnership with IBM East Europe/Asia. Our two companies see great potential in the development of GRID systems and share a passion for High Performance Computing. We believe that the activities and potential of our companies will contribute significantly to the worldwide growth of HPC.

We are now researching the adaptation and future use of the hardware IBM has provided to us. We also plan to conduct socially oriented research on our platform, both for subsequent scientific applications and for future corporate use. The result will be a concrete use case aimed at solving socially significant problems, such as cancer research and solid waste disposal.
🔎New article on Medium from our IT Architect Vladimir Simakin is available!

"How you use cloud rendering technologies and not even know it".

This article will interest anyone who would like to know how rendering (in the broad sense) is changing our world, where it is most often used, and what technologies are behind the process.

📲Read full article: https://medium.com/@anna.systems/cloud-rendering-technologies-5667f4537274
🔬HPC (and now AI) use in life sciences by BioTeam

"Without #HPC writ large, modern life sciences research would quickly grind to a halt. It’s true most life sciences research computing is less focused on tightly-coupled, low-latency processing (traditional HPC) and more dependent on data analytics and managing (and sieving) massive datasets. But there is plenty of both types of compute and disentangling the two has become increasingly difficult. Sophisticated storage schemes have long been de rigueur and recently fast networking has become important (no surprise given lab instruments’ prodigious output). Lastly, striding into this shifting environment is AI – deep learning and machine learning – whose deafening hype is only exceeded by its transformative potential."


Part One of the discussion examines core infrastructure issues around processor choices, heterogeneous architecture, network bottlenecks (and solutions), and storage technology.

🔗Full source: https://www.hpcwire.com/2019/02/21/hpc-in-life-sciences-part-1-cpu-choices-rise-of-data-lakes-networking-challenges-and-more/
​​Used primarily to fuel computer-aided engineering (CAE) and computer-aided design (CAD) systems that simulate a product’s various components, HPC enables designers to quickly adapt designs and better understand how a product will perform in a real-world environment.

HPC-powered CAE/CAD workloads can help improve the product design and development process in a number of key ways:

1️⃣ Faster turnaround times:

High-powered computing tools can generate complex simulations faster and in greater fidelity, so engineers can produce more design variations in less time. This limits disruption to the creative process and helps to complete designs more quickly.

2️⃣ Better simulations of real world behavior:

Advances in HPC and graphics processing unit technologies now allow designers to leverage a more reality-based design approach that mimics the real-world behavior of light, fluids, and various materials.

3️⃣ Enhanced collaboration among design teams:

Virtual GPU and virtual desktop technology provide the ability to keep design data safe and secure in the data center, while delivering an immersive graphics experience to designers regardless of location. This enhances collaboration among dispersed design teams.
🔬Computational Fluid Dynamics (CFD) is a class of simulation software that helps end-users analyze the flow, turbulence, and pressure distribution of liquids and gases, and their interaction with structures. It also helps predict fluid flow, mass transfer, and related phenomena.

⚫️ Extensive simulation

CFD allows great control over the physical process, and provides the ability to isolate specific phenomena for study.

⚫️ Complete information

CFD allows the designer to examine any location in the region of interest, and interpret its performance through a set of thermal and flow parameters.

⚫️ Less Time

High-performance computing involves several techniques to make simulations efficient and fast, such as distributed memory parallelism, shared memory parallelism, vectorization, memory access optimizations, etc.
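💻As a tiny, generic illustration of that last point (our example, not code from any solver named here), the same 2D heat-diffusion update written as a naive Python loop and as a vectorized NumPy stencil:

```python
import numpy as np

def jacobi_loop(u):
    """Pointwise Jacobi update: each interior cell becomes the
    average of its four neighbours."""
    v = u.copy()
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            v[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
    return v

def jacobi_vectorized(u):
    """Same update over the whole interior at once -- this is what
    lets the hardware use SIMD units efficiently."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

u = np.zeros((256, 256))
u[0, :] = 100.0                                  # hot top boundary
assert np.allclose(jacobi_loop(u), jacobi_vectorized(u))
```

The vectorized form treats the interior as one block, and the same decomposition is what shared-memory (threads over rows) and distributed-memory (subdomains per node) parallelism build on.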
​​📢📢📢GREAT NEWS!

Dear friends, we are happy to present to you our new service - #ANARender!

AnaRender is an online rendering system that lets you render your video using such popular software packages as #Blender and #POV-Ray, together with the #LuxRender and #YafaRay plugins. It is the perfect solution for those who know how tedious and inconvenient waiting for a three-dimensional model to be calculated and transformed into a video can be!

We are steadily expanding the database of video processing software available to our customers. If you need to use other software, you can discuss it with the specialists of our technical support service and sales department.

The AnaRender service runs on the A-Platform, our high-performance distributed computing platform.

📲You can look through conditions of usage and start rendering on our website:
https://anarender.io

If you have any questions about ANARender feel free to contact us!
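💻For a feel of what an online renderer automates, here is a hypothetical sketch of sharding frames across workers with Blender's standard command line (file names and worker counts are made up; this is not AnaRender's actual implementation):

```python
# Shard an animation's frames across local render workers using
# Blender's CLI: -b runs headless, -o sets the output pattern,
# -f renders a single frame.
import subprocess
from concurrent.futures import ThreadPoolExecutor

SCENE = "scene.blend"       # hypothetical input file
WORKERS = 4                 # hypothetical worker count
FRAMES = range(1, 101)      # frames 1..100

def render_frame(frame: int) -> int:
    cmd = ["blender", "-b", SCENE, "-o", "//out_####", "-f", str(frame)]
    return subprocess.run(cmd, check=False).returncode

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(render_frame, FRAMES))
print(f"{results.count(0)}/{len(results)} frames rendered")
```

A hosted service wraps this kind of orchestration behind a web UI and adds scheduling, asset transfer, and billing on top.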

📷Photos from TestUp & Demo Day are provided by Russian Hackers⬇️
🔎AI & HPC in 2019: the biggest change agent for HPC in its history?

Top ten ways that #AI will most impact #HPC in 2019:

1) Melding of applications: rather than replacing after “rethinking”, we “blend” the best of both worlds – expanding workload diversity and seeing all manner of workload convergence

2) New investments: Inferencing

3) Melding of people: User diversity and added excitement about HPC

4) Hardware: Interactive capabilities, and focus on powering libraries and frameworks

5) Compute for the Masses: Cloud

6) Size Matters: Big Data

7) Portability and security: Virtualization and containers

8) Freedom to think differently: Replace legacy code by using the opportunity (and peril) to rethink approaches

9) Languages: Higher level programming

10) Tensors: Lingua franca for AI computations

📲Read full source: https://www.hpcwire.com/2019/03/26/top-ten-ways-ai-affects-hpc-in-2019/