Meta unveiled its new language processing model for artificial intelligence (AI) research earlier this month. Named Open Pretrained Transformer (OPT-175B), it is entirely open source and can be used for non-commercial purposes. Models of this kind can be used to power chatbots, translate texts, or even write product sheets.
Meta was inspired by the OpenAI model
The model shares the same capabilities as the one created by OpenAI, a company co-founded by Elon Musk. Like GPT-3, it has 175 billion parameters and is based on machine learning.
If OPT-175B so closely resembles its big brother, that is intentional. According to Joelle Pineau, co-head of Facebook AI Research, the model was designed so that both its accuracy on language tasks and its toxicity match those of GPT-3.
The idea behind it? Make its code open source in order to let researchers from all walks of life contribute to its development for free. OpenAI, by contrast, had put access to its API behind a paywall; as a result, only the wealthiest labs were able to conduct research on its language processing model.
“We believe that the entire AI community – academic researchers, civil society, policymakers, and industry – must be able to work together to develop clear guidelines around artificial intelligence and responsible large language models,” Meta explained in its press release.
Making technology more transparent
Making technology more transparent has always been Joelle Pineau’s pet cause. “This commitment to open science is why I am here [at Meta]. I wouldn’t be here otherwise,” the researcher said in an interview with MIT Technology Review.
Meta is not a company known for sharing its secrets: it has revealed very little about how its algorithms work and has tended to hide its mistakes. Yet Joelle Pineau managed to get OPT-175B released as open source, available for non-commercial purposes.
The group also released the full model code and a logbook documenting the AI training process. It already covers three months of work, from October 2021 to January 2022, and lists all the bugs, crashes, and restarts encountered.
Natural language processing models have become the next holy grail of artificial intelligence. Yet they contain deep flaws, including spreading false information, reproducing racist and sexist biases, and generating toxic language. These problems stem from the texts the models are trained on.
GPT-3 has paid the price many times. In October 2020, the company Nabla was testing the model in a medical setting when it advised a fake patient to end his life. Making the full code of language processing models freely accessible should therefore make it possible to correct such flaws more effectively.