New technologies such as Artificial Intelligence generate ethical dilemmas, but the solution is not regulation by the State

New technologies raise ethical issues and dilemmas about their impact. Here is an interesting article on Artificial Intelligence by Ryan Khurana of the Institute for Advancing Prosperity:

“The research community is better equipped to shape what is being developed and how it can be used by a variety of actors.

The ethics of artificial intelligence is best left to researchers

On Valentine’s Day 2019, OpenAI, a leading non-profit organization dedicated to artificial intelligence research, published the results of its latest work, GPT-2, which promised a significant breakthrough in AI language generation. The model, built on the “Transformer” approach that Google had pioneered only a few years earlier, was able to generate coherent, extended answers to questions. One of these responses, in which the model produced a fake news story about Miley Cyrus being caught shoplifting, revealed a troubling application given the difficult political climate. Fearing that their technology could be used to spread fake news, OpenAI announced that it would not release the data set or the trained model.

Their decision drew mockery from many in the AI research community for its disregard of the norms of open inquiry, and some claimed that withholding the research was a ploy to generate media publicity. There were exaggerations too, with prophecies of “AI doom” from the mainstream press, which criticized the technology as a threat to democracy. However, neither the dismissive nor the sensationalist attitude really captures the importance of OpenAI’s decision. Since policymakers move too slowly to properly regulate new technologies, the responsibility for this kind of ethical decision must rest with researchers.

While impressive, GPT-2 is not a radical departure from the normal and expected trends in the subfield of AI called natural language processing (NLP). Since the launch of GPT-2, systems from Alibaba and Stanford have already surpassed the previous benchmarks on GLUE, one of the standard NLP evaluation suites. GPT-2’s innovation came mainly from the size and diversity of the data set on which the model was trained: a collection of 45 million web pages, called WebText, extracted from links shared on Reddit. This size and diversity allowed the trained model to perform well across a variety of tasks, such as reading comprehension, translation, and text summarization. Most previous models, at least for English, had been developed for specialized tasks.

However, restricting access to the data set and the trained model will not prevent a similar advance from emerging independently, because this is a normal line of development. The cost does rise somewhat, since the work demands substantial time and computation, but that barrier is modest and far from insurmountable.”

The full text is available at: https://www.libertarianism.org/building-tomorrow/ethics-artificial-intelligence-best-left-researchers
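As a rough illustration of the kind of open-ended text generation described in the excerpt, here is a minimal sketch that prompts the small GPT-2 checkpoint OpenAI later released publicly, using the Hugging Face transformers library; the model name, prompt, and sampling settings are illustrative assumptions and are not part of Khurana's article.

    # Minimal sketch: sample a continuation from the publicly released small
    # GPT-2 checkpoint. Assumes `pip install transformers torch`; the model
    # name "gpt2" and the settings below are illustrative choices.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "The ethics of artificial intelligence is best left to"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Top-k sampling keeps the continuation fluent but varied between runs.
    output_ids = model.generate(
        **inputs,
        max_length=60,
        do_sample=True,
        top_k=50,
        pad_token_id=tokenizer.eos_token_id,
    )

    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Running this a few times shows how the model produces fluent but uncontrolled continuations, which is precisely the property that made OpenAI cautious about releasing the full model and data set.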