Metaverse, supercomputer… Terms that are widely publicized at the moment but which can seem a little obscure when you are not immersed in these fields… Could you explain to us what a metaverse is? A supercomputer?
A metaverse is an immersive and persistent parallel virtual world in which time flows as it does in real life. It is a fictional world that a person navigates through virtual reality, using a headset for example. According to the company Meta, the metaverse is meant to succeed the current Internet.
A supercomputer is a computer designed to achieve the highest possible performance. To do so, it integrates several tens of thousands of processors, making it possible to carry out a very large number of precise and complex calculations. Given the computing power concentrated in a single machine, all of the supercomputer's components are kept at a constant temperature by a ventilation system, supplemented by a liquid cooling process. Supercomputers provide essential support to scientists thanks to their computing capabilities, enabling faithful, predictive simulations in many fields of science and industry such as astrophysics, meteorology, aeronautics, and nuclear power. By modeling physical behavior as closely as possible, they make it possible to limit real-world tests to what is strictly necessary and to save time, in terms of both efficiency and safety.
What is a supercomputer needed for in operating a metaverse?
Supercomputers have several functions, but the main one remains carrying out modeling calculations of physical, chemical, or biological systems. Meta's, however, is a little different: it is geared not toward modeling reality but toward artificial intelligence (AI), the keystone of the parallel world of the metaverse as the company Meta envisions it.
Maintaining the metaverse requires all the power of a supercomputer for two reasons:
- the environment of the parallel universe must be able to evolve permanently and in real time,
- the machine must be able to interpret and understand, also in real time, the information it captures (image, text, and speech recognition).
In addition to the supercomputer analysis capabilities needed by the metaverse, Facebook's parent company also had to invest in storage technologies.
Why has Meta’s announcement caused so much ink to flow?
Despite the media buzz generated by Meta's announcement, its supercomputer should not yet revolutionize the field of high-performance computing. First, the architecture of this supercomputer is fairly standard, apart from the impressive number of graphics processors integrated into the machine, a key component behind the success of AI data processing. Although the future Meta supercomputer should rank among the most powerful, it cannot be considered a technological revolution.
The novelty lies in the fact that this machine will be devoted essentially to a single application, the metaverse, which requires mass data processing. Most other supercomputers in the world are intended to support research and industry, and thus serve much broader application portfolios than Meta's machine will.
What ultimately caused a sensation is that a GAFA (the acronym for Google, Apple, Facebook, and Amazon), a big name in the world of entertainment, made an announcement of this type: it reflects a real awareness of a need for computing power, and no longer only for storage capacity in their data centers.
Meta has announced that the power of its supercomputer will be 5 exaflops. Compared with Chinese, Japanese, and American supercomputers, will it really be the most powerful in the world once completed?
To compare and measure the power of supercomputers, the same metric must be used, namely the same calculation precision: 16, 32, or 64 bits, depending on the algorithms. Dedicated AI supercomputers, including Meta's, run only at 32 bits, unlike modeling supercomputers, which run at 64 bits.
In the TOP500, the ranking of the world's most powerful supercomputers, Fugaku, the Japanese supercomputer, is ranked number one with 442 petaflops.
This ranking is based on 64-bit performance, the precision required for numerical simulations. That 64-bit precision is not necessary in the field of AI, where most tools use only single precision (32 bits) or half precision (16 bits).
So to compare the power of the Fugaku machine with Meta's supercomputer, you have to divide the latter's power by at least two. At equal precision, Meta's computer would no longer have a power of 5 exaflops as announced, but rather 1 to 2 exaflops. If the Meta supercomputer reaches full power during 2022, as planned, it will be the most powerful in the world according to the current TOP500 ranking.
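The precision adjustment described above is simple arithmetic. As a minimal sketch, assuming the interview's factor-of-two rule of thumb (it is not a measured benchmark result):

```python
# Comparing Meta's announced figure with Fugaku's TOP500 score,
# converted to a roughly comparable precision. The divide-by-2
# conversion is the interview's rule of thumb, not a benchmark.
meta_announced_eflops = 5.0      # announced at AI (16/32-bit) precision
fugaku_pflops_fp64 = 442.0       # Fugaku's TOP500 Linpack score, 64-bit

meta_fp64_estimate = meta_announced_eflops / 2   # upper-bound estimate: 2.5 EFlops
fugaku_eflops = fugaku_pflops_fp64 / 1000        # 0.442 EFlops

# Even after the conversion, the estimate exceeds the current number one.
print(meta_fp64_estimate > fugaku_eflops)
```

Dividing by a larger factor, as the more conservative 1-to-2-exaflop estimate quoted above does, still leaves the machine ahead of the current ranking.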
However, this ranking is updated twice a year, in June and November, so it is difficult to say whether a more powerful machine will appear in the next two updates. We can, however, reasonably say that Meta's supercomputer will be among the five most powerful machines in the world.
Why this power and not more? Are there any constraints preventing you from getting even more power?
The main constraint concerns the energy consumption of these architectures and the performance-per-watt ratio, because the means to increase the power of supercomputers already exist. It is important to have powerful machines with controlled consumption that can still meet the needs of the applications running on them. Meta's machine, for example, is perfect for AI algorithms, but in the realm of classical simulation, few applications can run efficiently on this type of architecture without significant effort and software investment to adapt them.
What energy consumption are we talking about?
It is difficult to give a precise figure, but we can estimate the electricity consumption at around 20 MW. The main problem is not only the machine's electrical consumption but also the evacuation of the heat the supercomputer produces. High-performance computing unfortunately cannot be frugal, given the enormous number of precision calculations to perform, and even though the graphics cards used by Meta consume less energy than conventional processors for the power delivered, the electrical consumption of such a system remains very high.
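For illustration only, combining the interview's two figures (5 exaflops announced at AI precision, a rough 20 MW estimate for power draw) gives a back-of-the-envelope efficiency number; real efficiency rankings such as the Green500 use measured 64-bit results instead:

```python
# Back-of-the-envelope performance-per-watt from the figures quoted
# in the interview. Purely illustrative: the 20 MW value is an
# estimate, and the 5 EFlops figure is at AI (16/32-bit) precision.
announced_flops = 5e18       # 5 EFlops, announced AI-precision figure
estimated_power_w = 20e6     # ~20 MW, estimated electrical draw

gflops_per_watt = announced_flops / estimated_power_w / 1e9
print(round(gflops_per_watt))
```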
Much progress has been made on the energy optimization of these machines, such as supercomputer cooling circuits: the water used to cool the machine's components is kept at 35 °C and circulates in a closed loop, recovered and reinjected into the machine after being cooled. It is sometimes coupled with heating circuits for the buildings around the machine.
What do they contribute to our societies and to the population in general?
Meta's supercomputer will bring a new form of entertainment. The other supercomputers, those of the academic and industrial worlds, promise more precise modeling and simulation of physical phenomena, in meteorology as well as in the life sciences and health. Supercomputers model the future climate and compare climate models. They also analyze and process data on Covid, and the Spike protein in particular, making it possible to test candidate drugs and vaccines against Covid more quickly.
They also serve the automotive industry: crash-test simulation, testing of materials and vehicle behavior, and so on. Design is done virtually; this is also the case in aeronautics, to limit flight tests.
Will this supercomputer help AI researchers develop the metaverse? If so, how?
The Meta machine will be used only by Facebook's teams; outside researchers and industrial partners will not have access to it. By contrast, the CNRS supercomputer Jean Zay, based in Orsay, is already widely open to the scientific community specializing in AI.
What is the future of supercomputers?
High-performance computing remains a niche market, but it is essential for solving major scientific and technical problems. The fact that the Internet giants, and now the metaverse, demand high-performance computing in order to develop can change the situation and will certainly allow new markets to emerge.
As for supercomputers around the world, the United States will commission its first exascale machine in 2022. In Europe, it will be 2024, and we are almost certain that China already has one or even two, even though the Chinese have decided not to list them in the TOP500.
The recent interest of major industrial and digital players such as Meta or Microsoft will, in any case, help move things forward in the field.
What are the CEA’s contributions on the subject?
The CEA is a leading European player in the field of HPC, both in operating world-class supercomputers at the CEA's Very Large Computing Center (TGCC) and in technology development and co-design with the industrial partner Atos, as well as in high-performance numerical applications for science and industry. The CEA is thus behind major collaborations in high-performance computing at the European and international scale, notably with Germany and Japan.
At the CEA, supercomputers serve us in various fields: